AI-Powered Vibe Coding Overwhelms Apple's App Store Review Process


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The use of AI-driven "vibe coding" tools led to an 84% surge in App Store submissions, overwhelming Apple's review infrastructure and causing approval delays of up to 30 days. Apple responded by removing non-compliant apps, highlighting the disruptive impact of AI-generated software on critical digital infrastructure.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (large language model-based vibe coding tools) whose use has directly led to a substantial increase in app submissions that strain and disrupt Apple's app review infrastructure, a form of critical infrastructure. The disruption includes delays from 24 hours to up to 30 days and removal of apps that violate guidelines, impacting the operation and management of the App Store. This meets the definition of an AI Incident under category (b) - disruption of critical infrastructure management and operation. The event is not merely a potential hazard or complementary information but a realized harm caused by AI system use. The economic and regulatory tensions further underscore the significance of the harm.[AI generated]
AI principles
Accountability; Robustness & digital security

Industries
IT infrastructure and hosting; Consumer services

Affected stakeholders
Consumers; Business

Harm types
Economic/Property; Public interest; Reputational

Severity
AI incident

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard


Vibe coding drove an 84% jump in App Store submissions. Apple is cracking down.

2026-04-05
The Next Web

The Developer Who Took on Apple's Walled Garden -- and Ended Up in Federal Court

2026-04-04
WebProNews
Why's our monitor labelling this an incident or hazard?
The article centers on a lawsuit challenging Apple's App Store review and removal process for AI-powered apps, highlighting issues of transparency, fairness, and market control. While AI systems are involved (the developer's AI apps), the event does not describe any realized harm caused by the AI systems, nor plausible future harm from their use or malfunction. The focus is on legal and regulatory scrutiny and developer pushback, which fits the definition of Complementary Information: it provides context and updates on governance and societal responses related to AI systems and their distribution. No direct or indirect AI Incident or AI Hazard is described.

The Vibe Coding Flood: How AI-Assisted App Development Is Overwhelming Apple's App Store Review Pipeline

2026-04-05
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI coding assistants generating app code), and their use has directly led to harms: operational disruption of Apple's review process, proliferation of low-quality and potentially insecure apps, and negative impacts on the app ecosystem and user experience. These harms are materialized and ongoing, not merely potential. The article details how AI-generated apps often contain security vulnerabilities and quality defects, which can harm users and the platform community. The operational strain on Apple's review team also constitutes disruption of critical infrastructure management. The event therefore meets the criteria for an AI Incident rather than a hazard or complementary information.