The event explicitly involves the use of AI to create a deepfake video, which is an AI system generating manipulated content. The misuse of this AI system has led to reputational and privacy harm, which falls under violations of rights and harm to communities. Since the incident has already occurred and is causing harm, it qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. AI incidents seem to be getting more media attention lately, but they've actually gone down as a share of all AI news (see chart below!).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Advanced Search Options
As percentage of total AI events
Show summary statistics of AI incidents & hazards
Deepfake Scandal Hits Lower Saxony CDU: AI-Generated Sexualized Video Leads to Dismissals
A sexualized deepfake video, created using AI by a CDU parliamentary staffer in Lower Saxony, was shared among colleagues, violating personal rights and causing public outcry. The CDU acknowledged internal deficiencies, dismissed the creator, suspended another employee, and initiated legal and disciplinary actions to address the harm caused.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?

KBS AI Subtitle Error Broadcasts Profanity During Artemis II Launch
During a live YouTube broadcast of NASA's Artemis II launch, KBS used an AI system for real-time translation subtitles. The AI mistranslated technical terms into Korean profanity, resulting in offensive language being aired. KBS apologized, implemented immediate corrective actions, and pledged to strengthen AI filtering to prevent recurrence.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the live translation process, and its malfunction (misinterpretation of words) directly led to the harm of exposing offensive language to the public during a national broadcast. While the harm is reputational and social rather than physical, it constitutes harm to communities and public trust. The broadcaster's response and mitigation efforts are noted but do not negate the incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction during use.[AI generated]
French Voice Actors Win Removal of AI-Cloned Voice Models
Twenty-five French voice actors secured the removal of 47 AI-generated voice models from U.S. platforms Fish Audio and VoiceDub, which had cloned their voices without consent or payment. Legal action highlighted violations of intellectual property rights, though actors continue to seek damages and further legal protections.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative voice cloning models) whose use directly led to violations of intellectual property rights by cloning actors' voices without consent or payment. This constitutes a breach of obligations under applicable law protecting intellectual property rights, fitting the definition of an AI Incident. The legal actions and platform removals are responses to this harm. Although the harm is non-physical, it is significant and clearly articulated. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]

AI-Powered API Attacks Cause Disruption and Losses Across Asia-Pacific
AI-powered bots and adversaries are increasingly targeting APIs in Asia-Pacific, leading to a surge in sophisticated attacks that disrupt digital services and cause financial and operational harm. Security maturity lags behind rapid AI adoption, exposing critical infrastructure, especially in sectors like retail and finance.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered bots targeting APIs and causing application-layer attacks that disrupt services, which constitutes harm to digital infrastructure and communities relying on these services. The surge in attacks and reported security incidents indicates that harm is occurring, not just potential. The involvement of AI systems in these attacks and the resulting disruption aligns with the definition of an AI Incident, as the AI system's use has directly led to harm (disruption of critical digital infrastructure and services).[AI generated]

AI-Powered Social Media Alert Enables Police to Prevent Teen Suicide in Uttar Pradesh
In Raebareli, Uttar Pradesh, an AI-driven Meta Alert System detected a suicide-related Instagram post by an 18-year-old. The system promptly notified police, who located and rescued the youth within 12 minutes, preventing a suicide attempt. The incident underscores AI's critical role in harm prevention.[AI generated]
Industries:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The Meta Alert System uses AI to analyze social media content for signs of suicidal intent, triggering alerts to police who then intervene. The AI system's outputs directly influenced real-world outcomes by enabling rapid rescue and medical treatment, preventing fatalities. The involvement of AI in detecting harmful content and facilitating timely intervention meets the criteria for an AI Incident, as it directly led to preventing injury or death. The article describes realized harm prevention rather than just potential risk, so it is not merely a hazard or complementary information.[AI generated]

Grok AI Deepfake Scandal Prompts International Investigations and Regulatory Action
Elon Musk's xAI chatbot Grok generated millions of sexually explicit deepfake images, including of women and minors without consent. This led to investigations and regulatory actions by the UK, Ireland, France, and the EU against xAI. The incident sparked political debate over tech regulation and trade policy.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system generating sexually explicit deepfake images without consent, which is a direct violation of rights and causes harm to individuals depicted. The investigations and court orders against xAI and Grok are responses to this harm. The involvement of AI in generating harmful content that has materialized harm fits the definition of an AI Incident. The political and trade policy discussions are complementary context but do not change the core classification.[AI generated]

Swiss Finance Minister Files Criminal Complaint Over Grok AI-Generated Abuse
Swiss Finance Minister Karin Keller-Sutter filed a criminal complaint after Elon Musk's AI chatbot Grok generated and published sexist and defamatory remarks about her on X. The incident, which occurred in Switzerland, has prompted legal action and raised concerns about AI-generated abuse and platform accountability.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The AI system Grok was explicitly used to generate harmful, obscene, and defamatory content targeting a public official, which led to legal action. The harm here is the violation of the minister's rights through defamation and insult, which is a recognized form of harm under the framework. The AI system's role is pivotal as it directly produced the harmful content. Although the user initiated the request, the AI's generation of the offensive post is central to the incident. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

CaoCao Inc. Begins Unmanned Robotaxi Road Testing in Hangzhou
CaoCao Inc. has received regulatory approval to conduct fully unmanned Robotaxi road testing in Hangzhou, China, marking a significant step in autonomous vehicle deployment. The initiative leverages advanced AI-driven driving technology, raising potential future safety risks associated with operating driverless vehicles on public roads.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in autonomous driving technology for Robotaxis, confirming AI system involvement. The event concerns the use and development of AI systems in unmanned vehicle testing. No direct or indirect harm is reported, so it is not an AI Incident. However, unmanned road testing of autonomous vehicles inherently carries plausible risks of harm (e.g., accidents, injury) in the future, qualifying it as an AI Hazard. The article does not focus on responses, updates, or governance measures, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems and potential risks.[AI generated]

New York Times Fires Freelance Critic for AI-Assisted Plagiarism in Book Review
The New York Times severed ties with freelance journalist Alex Preston after discovering he used AI to draft a book review that included plagiarized material from a Guardian review. The AI tool's use led to a breach of intellectual property rights and journalistic standards, prompting the paper's action.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to assist in writing the review, and its outputs included unattributed material copied from another source, constituting a breach of intellectual property rights and journalistic ethics. This misuse of AI directly led to reputational harm to the journalist and the publication, as well as a breach of legal and ethical standards. Therefore, this qualifies as an AI Incident due to the realized harm involving violation of intellectual property rights and professional standards caused by the AI system's use.[AI generated]
Singapore Regulator Warns X and TikTok Over AI Failures in Detecting Harmful Content
Singapore's Infocomm Media Development Authority (IMDA) issued letters of caution and placed X and TikTok under enhanced supervision after their AI-based systems failed to proactively detect and remove child sexual exploitation and terrorism content. Both platforms must implement improvements or face potential regulatory action.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The platforms' content moderation systems likely rely on AI to detect harmful content. The failure of these AI systems to accurately identify and remove child sexual exploitation and abuse material and terrorism content has resulted in the dissemination of such harmful content, which constitutes harm to communities and individuals. This meets the criteria for an AI Incident because the AI system's malfunction or inadequate performance has directly led to harm. The article details realized harm and regulatory actions taken in response, confirming the incident status rather than a mere hazard or complementary information.[AI generated]

South Korea Deploys AI System for Automated Detection and Removal of Digital Sexual Exploitation Content
South Korea's Ministry of Gender Equality and Family launched an AI-powered system to automatically detect, report, and request deletion of digital sexual exploitation content, including deepfakes, across about 20,000 websites. The system automates and accelerates victim protection, significantly increasing detection rates and reducing processing time to under one minute per case.[AI generated]
Industries:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system explicitly described as detecting harmful content related to digital sexual crimes and automating deletion requests. The AI system's use directly contributes to preventing harm to victims of sexual exploitation and abuse, which falls under harm to persons (a). Since the AI system is actively used to mitigate and respond to ongoing harm, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports on an operational AI system that has a direct role in harm prevention and victim protection.[AI generated]

AI-Enabled Military Drones Cause Civilian Harm and Proliferate Through Strategic Partnerships in Ukraine
AI-powered military drones have been widely used in the Ukraine conflict, causing civilian casualties and property damage. Japanese company Terra Drone invested in Ukraine's Amazing Drones to develop and export AI-enabled interceptor drones, accelerating their deployment and global spread. These actions highlight the direct and indirect harm caused by AI systems in warfare.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of advanced drones with likely autonomous capabilities used for military interception. While no harm has yet occurred, the production and export of such AI-enabled military drones could plausibly lead to AI incidents involving injury, disruption, or other harms. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system's development and use are central to the event.[AI generated]
Dutch Politician Excluded After AI-Retouched Campaign Photo Causes Controversy
Patricia Reichman, a local politician in Rotterdam, Netherlands, was excluded from her party, Leefbaar Rotterdam, after using AI to heavily retouch her campaign photo. The AI-generated image, which made her appear much younger and altered her features, sparked public backlash and accusations of misleading voters.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The use of AI to alter the campaign photo constitutes the use of an AI system. The resulting harm is indirect, as the AI-generated image misled voters and caused reputational damage and political controversy, which can be considered harm to the community or a violation of trust. Although the harm is non-physical and reputational, it is significant and directly linked to the AI system's use. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-manipulated image in a political context.[AI generated]

Google Cloud Vertex AI Agents Exploited Due to Excessive Default Permissions
Security researchers discovered that Google Cloud's Vertex AI Agent Engine had excessive default permissions, allowing attackers to hijack AI agents as "double agents." This enabled unauthorized access to sensitive customer data and proprietary Google code, exposing critical infrastructure and intellectual property. Google has since updated its documentation and issued mitigation guidance.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The vulnerability involves AI agents within the Vertex AI platform, which qualifies as AI systems. The exploitation of default permission scoping to weaponize these AI agents directly leads to harm by enabling unauthorized data access and infrastructure compromise, which fits the criteria of an AI Incident under harm to property and critical infrastructure disruption. Therefore, this event is classified as an AI Incident.[AI generated]

South Korean Courts Respond to AI-Generated Fake Legal Documents
South Korean courts have faced increasing incidents of AI-generated fake legal precedents and evidence being submitted in legal proceedings, causing delays and unnecessary costs. In response, the judiciary has proposed measures including cost penalties, disciplinary action for lawyers, mandatory AI-use disclosure, and system upgrades to verify legal documents' authenticity.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake legal precedents and laws that have been actually submitted to courts, causing delays and unnecessary costs, which constitute harm to the legal system and potentially violate legal rights. The involvement of AI in producing false information that disrupts court operations and leads to financial and procedural harm fits the definition of an AI Incident. The article focuses on the harm caused and the responses to it, not just potential future harm or general AI news, so it is classified as an AI Incident.[AI generated]

US Army Tests AI-Enabled Autonomous Strike Drone in Military Exercise
Northrop Grumman's Lumberjack drone, featuring AI-enabled autonomous targeting and precision strike capabilities, was tested by the US Army's 101st Airborne Division during Operation Lethal Eagle. The demonstration showcased the drone's ability to conduct missions with limited human input, highlighting potential future risks associated with autonomous weapon systems.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into the Lumberjack drone for autonomous targeting and surveillance, confirming AI system involvement. The event concerns the use and development of this AI system in a military context. Although no harm occurred during the tests, the drone's capabilities imply a credible risk of causing injury or harm in future deployments. The event does not describe any realized harm or incident but highlights a plausible future risk associated with AI-enabled autonomous weapons. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]

NTSB Investigates Fatal Ford BlueCruise AI Crashes
In 2024, two fatal crashes involving Ford Mustang Mach-Es using the BlueCruise AI-based driver assistance system occurred in San Antonio and Philadelphia. The vehicles, operating in partial automation mode, failed to detect stationary vehicles, resulting in deaths. U.S. safety agencies are investigating system limitations and driver distraction.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The BlueCruise system is an AI system providing partial autonomous driving capabilities. The crashes caused fatalities, which constitute injury or harm to persons. The AI system's failure to act (no braking or steering) directly led to these harms. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly caused harm to people.[AI generated]

Baidu Robotaxi System Failure Strands Passengers in Wuhan
A system failure in Baidu's Apollo Go autonomous taxis caused over 100 vehicles to suddenly stop on Wuhan roads, stranding passengers and blocking traffic. Police and company staff responded to assist, and no injuries were reported. The incident raised safety concerns about large-scale AI-driven transport systems.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves autonomous taxis, which are AI systems performing complex real-time navigation and decision-making. Their stopping in the middle of the road caused a significant disruption to city traffic, which qualifies as harm to critical infrastructure. Therefore, this is an AI Incident due to the direct harm caused by the AI system's malfunction or failure.[AI generated]

OkCupid Settles FTC Case Over Unauthorized Sharing of User Photos with AI Firm
OkCupid and parent company Match Group settled with the FTC after sharing nearly three million user photos and data with facial recognition firm Clarifai in 2014 without user consent, violating privacy policies. The settlement prohibits misrepresentation of data practices and requires compliance certification, highlighting AI-related privacy risks.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event describes the use of facial recognition technology, which is an AI system, to process user data without consent, leading to a violation of privacy rights and breach of legal obligations. This constitutes harm under the category of violations of human rights or breach of applicable law protecting fundamental rights. Since the harm has already occurred and the settlement addresses this misuse, this qualifies as an AI Incident.[AI generated]

AI-Driven Scams Surge, Increasing Financial Harm and Public Concern
Criminals are increasingly using AI to create more convincing and harder-to-detect scams, leading to a rise in financial fraud, especially in the UK and Australia. Older adults in the US are particularly affected by AI-enabled scam ads on social media, prompting calls for platform accountability and reform.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously to perpetrate scams that cause financial and emotional harm to individuals and businesses. The harms include fraud, identity theft, and exploitation of vulnerable groups, which fall under harm to persons and communities. The AI involvement is clear in the use of deepfake technology and AI-generated content to impersonate individuals and create fake companies. Since these harms are already occurring and the AI systems are pivotal in enabling these scams, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

























