AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have in fact declined as a share of all AI-related news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 14,182 incidents & hazards

AI-Powered API Attacks Cause Disruption and Losses Across Asia-Pacific

2026-04-01

AI-powered bots and adversaries are increasingly targeting APIs in Asia-Pacific, leading to a surge in sophisticated attacks that disrupt digital services and cause financial and operational harm. Security maturity lags behind rapid AI adoption, exposing critical infrastructure, especially in sectors like retail and finance.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Logistics, wholesale, and retail; Financial and insurance services
Affected stakeholders:
Business; General public
Harm types:
Economic/Property; Public interest
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-powered bots targeting APIs and causing application-layer attacks that disrupt services, which constitutes harm to digital infrastructure and communities relying on these services. The surge in attacks and reported security incidents indicates that harm is occurring, not just potential. The involvement of AI systems in these attacks and the resulting disruption aligns with the definition of an AI Incident, as the AI system's use has directly led to harm (disruption of critical digital infrastructure and services).[AI generated]

AI-Powered Social Media Alert Enables Police to Prevent Teen Suicide in Uttar Pradesh

2026-04-01
India

In Raebareli, Uttar Pradesh, an AI-driven Meta Alert System detected a suicide-related Instagram post by an 18-year-old. The system promptly notified police, who located and rescued the youth within 12 minutes, preventing a suicide attempt. The incident underscores AI's critical role in harm prevention.[AI generated]

Industries:
Media, social platforms, and marketing; Government, security, and defence
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The Meta Alert System uses AI to analyze social media content for signs of suicidal intent, triggering alerts to police who then intervene. The AI system's outputs directly influenced real-world outcomes by enabling rapid rescue and medical treatment, preventing fatalities. The involvement of AI in detecting harmful content and facilitating timely intervention meets the criteria for an AI Incident, as it directly led to preventing injury or death. The article describes realized harm prevention rather than just potential risk, so it is not merely a hazard or complementary information.[AI generated]

Grok AI Deepfake Scandal Prompts International Investigations and Regulatory Action

2026-04-01

Elon Musk's xAI chatbot Grok generated millions of sexually explicit deepfake images, including of women and minors without consent. This led to investigations and regulatory actions by the UK, Ireland, France, and the EU against xAI. The incident sparked political debate over tech regulation and trade policy.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Consumer services; Media, social platforms, and marketing
Affected stakeholders:
Women; Children
Harm types:
Psychological; Human or fundamental rights; Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The Grok chatbot is an AI system generating sexually explicit deepfake images without consent, which is a direct violation of rights and causes harm to individuals depicted. The investigations and court orders against xAI and Grok are responses to this harm. The involvement of AI in generating harmful content that has materialized harm fits the definition of an AI Incident. The political and trade policy discussions are complementary context but do not change the core classification.[AI generated]

NTSB Investigates Fatal Ford BlueCruise AI Crashes

2026-03-31
United States

In 2024, two fatal crashes involving Ford Mustang Mach-Es using the BlueCruise AI-based driver assistance system occurred in San Antonio and Philadelphia. The vehicles, operating in partial automation mode, failed to detect stationary vehicles, resulting in deaths. U.S. safety agencies are investigating system limitations and driver distraction.[AI generated]

AI principles:
Safety; Accountability
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers
Harm types:
Physical (death)
Severity:
AI incident
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The BlueCruise system is an AI system providing partial autonomous driving capabilities. The crashes caused fatalities, which constitute injury or harm to persons. The AI system's failure to act (no braking or steering) directly led to these harms. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly caused harm to people.[AI generated]

Baidu Robotaxi System Failure Strands Passengers in Wuhan

2026-03-31
China

A system failure in Baidu's Apollo Go autonomous taxis caused over 100 vehicles to suddenly stop on Wuhan roads, stranding passengers and blocking traffic. Police and company staff responded to assist, and no injuries were reported. The incident raised safety concerns about large-scale AI-driven transport systems.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Public interest
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The Baidu Apollo Go robotaxis are AI systems performing autonomous driving tasks. The 'system malfunction' caused these vehicles to stop unexpectedly, resulting in traffic jams and at least one crash, which constitutes harm to property and communities. The AI system's malfunction is the direct cause of this incident. Therefore, this qualifies as an AI Incident under the framework.[AI generated]

New York Times Fires Freelance Critic for AI-Assisted Plagiarism in Book Review

2026-03-31
United States

The New York Times severed ties with freelance journalist Alex Preston after discovering he used AI to draft a book review that included plagiarized material from a Guardian review. The AI tool's use led to a breach of intellectual property rights and journalistic standards, prompting the paper's action.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business; Workers
Harm types:
Reputational
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used to assist in writing the review, and its outputs included unattributed material copied from another source, constituting a breach of intellectual property rights and journalistic ethics. This misuse of AI directly led to reputational harm to the journalist and the publication, as well as a breach of legal and ethical standards. Therefore, this qualifies as an AI Incident due to the realized harm involving violation of intellectual property rights and professional standards caused by the AI system's use.[AI generated]

Singapore Regulator Warns X and TikTok Over AI Failures in Detecting Harmful Content

2026-03-31
Singapore

Singapore's Infocomm Media Development Authority (IMDA) issued letters of caution and placed X and TikTok under enhanced supervision after their AI-based systems failed to proactively detect and remove child sexual exploitation and terrorism content. Both platforms must implement improvements or face potential regulatory action.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children; General public
Harm types:
Human or fundamental rights; Psychological; Public interest
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The platforms' content moderation systems likely rely on AI to detect harmful content. The failure of these AI systems to accurately identify and remove child sexual exploitation and abuse material and terrorism content has resulted in the dissemination of such harmful content, which constitutes harm to communities and individuals. This meets the criteria for an AI Incident because the AI system's malfunction or inadequate performance has directly led to harm. The article details realized harm and regulatory actions taken in response, confirming the incident status rather than a mere hazard or complementary information.[AI generated]

South Korea Deploys AI System for Automated Detection and Removal of Digital Sexual Exploitation Content

2026-03-31
Korea

South Korea's Ministry of Gender Equality and Family launched an AI-powered system to automatically detect, report, and request deletion of digital sexual exploitation content, including deepfakes, across about 20,000 websites. The system automates and accelerates victim protection, significantly increasing detection rates and reducing processing time to under one minute per case.[AI generated]

Industries:
Government, security, and defence; Digital security
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system explicitly described as detecting harmful content related to digital sexual crimes and automating deletion requests. The AI system's use directly contributes to preventing harm to victims of sexual exploitation and abuse, which falls under harm to persons (a). Since the AI system is actively used to mitigate and respond to ongoing harm, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports on an operational AI system that has a direct role in harm prevention and victim protection.[AI generated]

AI-Enabled Military Drones Cause Civilian Harm and Proliferate Through Strategic Partnerships in Ukraine

2026-03-31
Ukraine

AI-powered military drones have been widely used in the Ukraine conflict, causing civilian casualties and property damage. Japanese company Terra Drone invested in Ukraine's Amazing Drones to develop and export AI-enabled interceptor drones, accelerating their deployment and global spread. These actions highlight the direct and indirect harm caused by AI systems in warfare.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury); Economic/Property
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article involves AI systems in the form of advanced drones with likely autonomous capabilities used for military interception. While no harm has yet occurred, the production and export of such AI-enabled military drones could plausibly lead to AI incidents involving injury, disruption, or other harms. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system's development and use are central to the event.[AI generated]

Dutch Politician Excluded After AI-Retouched Campaign Photo Causes Controversy

2026-03-31
Netherlands

Patricia Reichman, a local politician in Rotterdam, Netherlands, was excluded from her party, Leefbaar Rotterdam, after using AI to heavily retouch her campaign photo. The AI-generated image, which made her appear much younger and altered her features, sparked public backlash and accusations of misleading voters.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Other
Harm types:
Reputational; Public interest
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The use of AI to alter the campaign photo constitutes the use of an AI system. The resulting harm is indirect, as the AI-generated image misled voters and caused reputational damage and political controversy, which can be considered harm to the community or a violation of trust. Although the harm is non-physical and reputational, it is significant and directly linked to the AI system's use. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-manipulated image in a political context.[AI generated]

Google Cloud Vertex AI Agents Exploited Due to Excessive Default Permissions

2026-03-31
United States

Security researchers discovered that Google Cloud's Vertex AI Agent Engine had excessive default permissions, allowing attackers to hijack AI agents as "double agents." This enabled unauthorized access to sensitive customer data and proprietary Google code, exposing critical infrastructure and intellectual property. Google has since updated its documentation and issued mitigation guidance.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
IT infrastructure and hosting; Digital security
Affected stakeholders:
Business
Harm types:
Human or fundamental rights; Public interest; Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The vulnerability involves AI agents within the Vertex AI platform, which qualifies as AI systems. The exploitation of default permission scoping to weaponize these AI agents directly leads to harm by enabling unauthorized data access and infrastructure compromise, which fits the criteria of an AI Incident under harm to property and critical infrastructure disruption. Therefore, this event is classified as an AI Incident.[AI generated]

AI Security System Prevents Crime in Unmanned Stores in South Korea

2026-03-30
Korea

South Korean company S1's AI security solution for unmanned stores, featuring AI CCTV and detection sensors, has seen a 33% increase in adoption. The system detects abnormal behavior in real time, alerts monitoring centers, and enables rapid intervention, preventing theft and vandalism and leading to criminal apprehension.[AI generated]

Industries:
Consumer services
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly described as detecting abnormal behavior in real time and triggering alerts that enable security personnel to intervene promptly, preventing or reducing harm from crimes in unmanned stores. The article provides concrete examples where the AI system's detection led to immediate response and arrest, showing direct involvement in harm prevention. Therefore, this event qualifies as an AI Incident because the AI system's use has directly influenced the management of crime-related harms to property and community safety.[AI generated]

Renault Develops AI-Enabled Ground-Based Military Drone

2026-03-30
France

Renault, in partnership with John Cockerill, is developing a ground-based military drone equipped with AI for autonomous navigation and reconnaissance. The project, prompted by interest from the French defense ministry, is in the exploratory phase and poses potential future risks if deployed in military contexts.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Human or fundamental rights
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The project involves the development of a drone likely equipped with AI for autonomous or semi-autonomous operation, given the nature of military drones. Although no incident or harm has occurred yet, the mere development and potential deployment of AI-enabled military drones constitute an AI Hazard due to the credible risk of future harm such systems could cause. The article does not report any realized harm or incident, so it cannot be classified as an AI Incident. It is not merely complementary information since the focus is on the development of a potentially hazardous AI system, not on responses or updates to past incidents.[AI generated]

AI-Driven Tax Scams Surge in the US During Filing Season

2026-03-30
United States

In the US, tax season has seen a sharp rise in scams using AI-powered automated calls, voice imitation, and phishing messages to impersonate the IRS. These AI-enabled tactics have led to increased identity theft and financial fraud, prompting warnings from consumer advocates and government officials.[AI generated]

AI principles:
Privacy & data governance; Safety
Industries:
Government, security, and defence; Digital security
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Reputational
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI in scam calls and messages impersonating the IRS, which have led to actual harm including identity theft and financial fraud. The AI systems are used maliciously to generate convincing fake communications that deceive victims, causing direct harm to individuals' finances and personal data security. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities through fraud and identity theft.[AI generated]

AI-Generated Deepfakes Cause Widespread Harm and Legal Challenges

2026-03-30
United States

AI systems, including xAI's Grok, have enabled the mass creation and dissemination of sexualized and nonconsensual deepfake images, leading to reputational, emotional, and psychological harm, especially among minors. Social media platforms have increased takedown efforts, but the rapid spread of deepfakes continues to pose significant societal and legal challenges globally.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Reputational; Psychological; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly links the rise of AI-generated deepfake content to societal harm, including misinformation and potential damage to individuals and public discourse. The AI system's use in generating deepfakes has directly led to these harms, fulfilling the criteria for an AI Incident. The platforms' increased takedown efforts are responses to an ongoing incident rather than the main focus, so the article is not primarily about complementary information. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. Hence, the classification is AI Incident.[AI generated]

AI Traffic Cameras in Athens Issue First Automated Fines

2026-03-30
Greece

AI-powered traffic cameras in Athens, Greece, have begun automatically detecting violations such as running red lights and not wearing helmets, issuing digital fines directly to drivers. Around 130 fines have already been sent since late March, marking the operational launch of this AI enforcement system.[AI generated]

AI principles:
Transparency & explainability; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The cameras use AI to detect traffic violations and automatically issue fines, which directly affects individuals by imposing penalties. The AI system's outputs have led to concrete administrative actions (fines) and thus have caused realized harm. The event involves the use of an AI system, and the harm is direct and materialized, fulfilling the criteria for an AI Incident. The description does not merely discuss potential or future risks but actual enforcement actions taken based on AI detection, which excludes classification as a hazard or complementary information.[AI generated]

OkCupid Settles FTC Case Over Unauthorized Sharing of User Photos with AI Firm

2026-03-30
United States

OkCupid and parent company Match Group settled with the FTC after sharing nearly three million user photos and data with facial recognition firm Clarifai in 2014 without user consent, violating privacy policies. The settlement prohibits misrepresentation of data practices and requires compliance certification, highlighting AI-related privacy risks.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Consumer services; Digital security
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rights
Severity:
AI incident
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event describes the use of facial recognition technology, which is an AI system, to process user data without consent, leading to a violation of privacy rights and breach of legal obligations. This constitutes harm under the category of violations of human rights or breach of applicable law protecting fundamental rights. Since the harm has already occurred and the settlement addresses this misuse, this qualifies as an AI Incident.[AI generated]

AI-Driven Scams Surge, Increasing Financial Harm and Public Concern

2026-03-30
United Kingdom

Criminals are increasingly using AI to create more convincing and harder-to-detect scams, leading to a rise in financial fraud, especially in the UK and Australia. Older adults in the US are particularly affected by AI-enabled scam ads on social media, prompting calls for platform accountability and reform.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being used maliciously to perpetrate scams that cause financial and emotional harm to individuals and businesses. The harms include fraud, identity theft, and exploitation of vulnerable groups, which fall under harm to persons and communities. The AI involvement is clear in the use of deepfake technology and AI-generated content to impersonate individuals and create fake companies. Since these harms are already occurring and the AI systems are pivotal in enabling these scams, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

Red Cat Expands AI-Driven Swarm Robotics for Defense Through Acquisitions and Partnerships

2026-03-30
United States

Red Cat Holdings, a U.S. defense technology firm, has acquired Apium Swarm Robotics and partnered with Ukraine's Spetstechnoexport to advance AI-enabled unmanned and robotic systems. These developments enhance Red Cat's capabilities in autonomous drone swarming and multi-domain operations, raising future risks associated with military AI deployment.[AI generated]

AI principles:
Safety; Democracy & human autonomy
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (injury); Human or fundamental rights; Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems in the form of autonomous swarming drones with distributed control and multi-agent autonomy, which qualifies as AI systems. The event concerns the acquisition and planned integration of this technology into defense-related drone systems. No actual harm or incident is reported; the article focuses on business and technological development. However, the nature of the technology—autonomous swarming drones for battlefield use—implies a credible potential for future harm, such as injury, disruption, or rights violations, if deployed or misused. Therefore, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the article.[AI generated]

Ukraine Deploys and Advances AI-Driven Interceptor Drone Swarms in Defense Against Russian Attacks

2026-03-30
Ukraine

Ukraine is deploying and developing AI-powered interceptor drones, including the Strila system and autonomous swarms, to counter Russian UAV attacks. German firm Quantum Systems and Ukrainian company WIY Drones are scaling production, with new swarm capabilities enabling coordinated, semi-autonomous defense. These AI systems are actively used in the ongoing conflict, directly impacting battlefield outcomes.[AI generated]

AI principles:
Accountability; Respect of human rights
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
Government
Harm types:
Economic/Property
Severity:
AI hazard
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article involves AI systems in the context of military drones and their evolving capabilities, including potential future autonomous AI systems. However, it does not report any realized harm or incident caused by AI systems. The discussion concerns ongoing use, strategic implications, and potential future developments, which aligns with the definition of an AI Hazard, since such systems could plausibly lead to harm in the future. Because no specific AI system malfunction or misuse causing harm is described, and the focus is broad analysis and future risks rather than a particular event, the classification as AI Hazard is appropriate. It is not Complementary Information because it does not update or respond to a previously reported incident, and it is not unrelated because it clearly involves AI-related military technology and its implications.[AI generated]