Tesla’s FSD is an AI-driven driver-assist system whose malfunction (failure to register an SUV in a blind spot) directly contributed to a collision with minor injuries and a totaled car. The incident demonstrates realized harm from an AI system in use, making this an AI Incident.[AI generated]
AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. AI incidents seem to be getting more media attention lately, but they've actually gone down as a share of all AI news (see chart below!).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Advanced Search Options
As percentage of total AI events
Show summary statistics of AI incidents & hazards

Safety Concerns Over Tesla's Self-Driving Software in Ride-Hailing Services
A self-driving Tesla used for Uber collided with an SUV in Las Vegas, raising safety concerns about autonomous ride-hailing services. Additionally, Tesla's 'Full Self-Driving' software in a Cybertruck attempted to drive onto a median, highlighting potential malfunctions in AI systems that could lead to harm.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?

California Enacts Laws Against AI-Generated Child Sexual Abuse Imagery
California Governor Gavin Newsom signed bipartisan bills criminalizing the creation, possession, and distribution of AI-generated child sexual abuse images and deepfake nudes. The new laws close legal loopholes, making such content illegal even if not depicting real individuals, in response to increasing misuse of generative AI tools.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically AI tools used to generate harmful sexual imagery and deepfakes. The misuse of these AI systems has directly led to harms including violations of human rights and sexual exploitation of minors and adults, which are clear harms under the AI Incident definition. The laws are a response to realized harms caused by AI-generated content, not just potential future harms. Therefore, this event qualifies as an AI Incident because it concerns the direct use of AI systems to produce illegal and harmful content affecting individuals' rights and safety.[AI generated]

Arkansas Sues YouTube Over AI-Driven Addiction and Youth Mental Health Harm
Arkansas has sued YouTube, Google, and Alphabet, alleging that YouTube's AI-powered recommendation algorithms are deliberately designed to be addictive, causing mental health issues among youth. The lawsuit claims this has led to increased state spending on mental health services, while the companies deny the allegations.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
YouTube's recommendation system is an AI system that influences user engagement by steering users towards certain content. The lawsuit claims this system amplifies harmful material and drives addictive behavior among youth, leading to mental health harms and exposure to harmful content. These harms fall under injury or harm to health and harm to communities. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a governance response but describes ongoing harm caused by the AI system's operation.[AI generated]

AI-Enabled KARGU Drone Successfully Destroys Armored Target with Armor-Piercing Warhead
The Turkish-developed AI-powered KARGU drone, equipped with an armor-piercing warhead, autonomously identified and destroyed an armored vehicle in its first test. Its operational deployment in military and counter-terrorism contexts demonstrates direct harm to property and potential harm to persons, highlighting the risks of lethal autonomous weapon systems.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The KARGU drone is an AI system as it uses AI and image processing capabilities for autonomous navigation and target engagement. Its use in combat with a new armor-piercing warhead that successfully destroyed a target demonstrates direct harm caused by the AI system's operation. The article explicitly states the system's operational use in military and counter-terrorism contexts, implying realized harm to property and potentially persons. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]

AI-Driven HDMI Cable Data Theft Technique Unveiled
Researchers from Uruguay's University of the Republic have demonstrated a method to intercept electromagnetic signals from HDMI cables using AI, allowing the recreation of screen images and potential data theft. This AI-driven technique poses a new cybersecurity threat by enabling real-time espionage and access to sensitive information.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
While the study demonstrates that AI can be used to recreate screen content (including sensitive credentials) from HDMI emissions, there is no specific, confirmed incident of harm using this exact method. Rather, it outlines a credible future threat—i.e. a novel AI-enabled espionage attack—making it a potential hazard rather than a realized incident or a mere background update.[AI generated]
Australian Politician Warns of AI-Enabled Security Risks in Chinese Electric Vehicles
Nationals MP Barnaby Joyce urged Australia to consider banning Chinese-made electric vehicles, citing fears that AI-enabled features like remote software updates and tracking could be weaponized for malicious purposes. No actual incident has occurred, but concerns center on potential national security and privacy risks.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through the technology in electric vehicles and solar inverters that include software capable of remote updates, tracking, and control, which can be reasonably inferred to involve AI or advanced algorithmic systems. The concerns raised relate to the potential misuse or malicious use of these AI-enabled systems to cause harm such as disruption or privacy violations. However, no actual incident or harm has occurred; the fears are about plausible future misuse. Therefore, this qualifies as an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident involving national security or privacy harms, but no direct or indirect harm has yet materialized.[AI generated]

Waymo Autonomous Vehicle Disrupts Kamala Harris Motorcade
A Waymo autonomous vehicle stalled during a U-turn, blocking Vice President Kamala Harris' motorcade in San Francisco. Police intervened to move the vehicle, highlighting ongoing issues with Waymo cars causing traffic disruptions. This incident raises concerns about the reliability of AI systems in managing urban traffic effectively.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves a deployed AI system (a Waymo driverless car) whose use directly led to harm (vandalism, emotional trauma, risk to passengers). Although the damage was inflicted by humans, it targeted the AI-enabled vehicle and threatened passenger safety, qualifying as an AI Incident under harm to persons and property.[AI generated]

Students Use AI to Create Obscene Image of Teacher
Two class IX students in Moradabad, UP, have been booked for using AI tools to create and post a morphed obscene image of their female teacher on social media. The incident led to an FIR under the IT Act, and police are investigating while working to remove the image from the internet.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article describes a realized harm caused by the misuse of AI tools: two minors used AI to create and post a fake obscene image of their teacher, infringing her rights and causing reputational and psychological harm. This constitutes a direct AI Incident under the framework.[AI generated]

Scania and Fortescue Partner to Develop Autonomous Mining Road Trains
Scania and Fortescue are collaborating to develop, test, and validate AI-powered autonomous road trains for mining operations in Queensland, Australia. The project aims to improve sustainability and efficiency, but introduces potential future safety risks associated with deploying autonomous heavy vehicles in demanding mining environments.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and planned validation of an autonomous AI system for mining trucks, which qualifies as an AI system. However, there is no mention of any injury, disruption, rights violation, or other harm caused by the system so far. The event concerns the development and prospective use of the AI system, which could plausibly lead to future harms (e.g., safety risks in autonomous mining vehicles), but no incident or harm has yet occurred. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm in the future but does not describe an actual incident or realized harm.[AI generated]

AI-Generated Fake Soldier Images Used in Russian Disinformation Campaigns Against Ukraine
AI-generated images of supposed Ukrainian soldiers and civilians are being spread on social media by bots to manipulate emotions, boost engagement, and facilitate Russian disinformation and fraud. Ukrainian authorities warn these posts are part of an information warfare campaign, exploiting public trust and enabling harmful narratives and scams.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated photos being used to spread false narratives and manipulate social media users, which is a direct use of AI systems (generative AI) leading to harm in the form of misinformation and potential fraud. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and potential violations of rights through manipulation and fraud. The involvement of bots spreading these posts further supports the AI system's role in causing harm.[AI generated]

AI-Generated Fake Messages Used in Extortion Scheme
In Eskişehir, businessman Hikmet Öztürk was accused by S.M. of sending threatening WhatsApp messages. Öztürk claims the messages were AI-generated fakes and that S.M. demanded 100,000 lira to drop the case. This incident highlights the use of AI in creating false evidence for extortion.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article describes the actual use of an AI system to fabricate WhatsApp conversations, falsely accuse and threaten a victim, and extort funds. This misuse directly caused psychological harm, harassment, and fraud, meeting the definition of an AI Incident.[AI generated]

AI-Generated Avatar Used in YouTube Cryptocurrency Scam
Ranveer Allahbadia's YouTube channels were hacked and rebranded as 'Tesla' in a cyberattack. Hackers used an AI-generated Elon Musk avatar to promote a cryptocurrency scam, urging viewers to invest in Bitcoin or Ethereum. All videos were deleted, and the channels were removed from YouTube.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The incident involved active misuse of AI (an AI-generated deepfake avatar) to perpetrate a cryptocurrency scam, directly causing potential financial harm to viewers. Since an AI system’s use directly led to real-world harm, this qualifies as an AI Incident.[AI generated]
US Senator Targeted by Deepfake Impersonating Ukrainian Official
Senator Ben Cardin was targeted by a deepfake scammer posing as former Ukrainian Foreign Minister Dmytro Kuleba during a Zoom call. The deepfake technology convincingly mimicked Kuleba's appearance and voice, raising security concerns. Cardin became suspicious when the impersonator asked uncharacteristic questions, prompting Senate security to issue warnings.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event describes an actual malicious use of AI-driven deepfake technology to deceive a senator’s office, posing national security and political risks. This is a direct harm scenario where AI-generated content was used to influence and extract sensitive information, fitting the definition of an AI Incident.[AI generated]

ECB President Warns of AI-Induced Financial System Risks
European Central Bank President Christine Lagarde warned that AI and new technologies pose potential risks and vulnerabilities to the financial system, urging macroprudential policies to adapt. While no specific incident has occurred, Lagarde highlighted the need for effective risk mitigation as AI adoption accelerates in finance.[AI generated]
AI principles:
Industries:
Harm types:
Severity:
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI technologies and their potential to create new vulnerabilities in the financial system, which could plausibly lead to harm such as financial instability or systemic risk. No actual harm or incident is reported; the focus is on warnings and the need for policy adaptation. This fits the definition of an AI Hazard, as it concerns plausible future harm from AI use in finance. It is not Complementary Information because it is not updating or responding to a past incident, nor is it unrelated as it clearly involves AI systems and their impact.[AI generated]

AI-Driven Demand Predicted to Cause New Semiconductor Shortage
Consulting firm Bain & Company warns that surging demand for GPUs and chips by tech giants for AI data centers may soon trigger a major semiconductor shortage. Experts highlight that the complex supply chain and rapid AI growth could disrupt chip availability, risking economic and operational impacts across industries.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly through the demand for AI hardware but does not describe any realized harm or incident caused by AI system development, use, or malfunction. The report warns of a plausible future supply shortage due to AI-driven demand, which could lead to economic or operational disruptions if it occurs. However, since no actual harm or incident has yet occurred, and the article primarily provides an analysis and warning about potential future risks, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
Political Consultant Fined for AI-Generated Biden Robocalls
The FCC fined political consultant Steven Kramer $6 million for using AI to create fake robocalls mimicking President Biden's voice, urging New Hampshire voters not to vote in the Democratic primary. The calls, intended to highlight AI's potential dangers, led to charges of voter suppression and impersonation.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The misuse of AI-generated voice cloning for robocalls to mislead and suppress voters constitutes a clear case where an AI system’s use directly led to harm (voter suppression, violation of election integrity). This is not merely a warning or update, but an actual incident of AI-caused harm.[AI generated]

Google Gemini for Workspace Exposed to Indirect Prompt Injection Attacks
Researchers found Google's Gemini for Workspace AI assistant vulnerable to indirect prompt injection, allowing attackers to embed malicious instructions in emails or documents. This can manipulate AI outputs, potentially leading to phishing attacks or misleading messages. Google was notified but dismissed the issue as intended behavior, leaving users at risk.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini for Workspace) that is susceptible to indirect prompt injection attacks, which manipulate the AI's outputs in ways that could mislead users or facilitate phishing attacks. This constitutes a direct or indirect cause of harm to users' security and privacy, fitting the definition of an AI Incident due to violations of user rights and potential harm to individuals. The fact that the vulnerability has been exploited in proofs-of-concept and reported to Google, with no remediation, confirms the realized risk rather than a mere potential hazard.[AI generated]

AI-Driven Social Media Algorithms Promote Alcohol to French Youth, Raising Health Concerns
French associations report that AI-powered algorithms on platforms like Instagram and TikTok have exposed youth to over 11,000 alcohol-promoting posts from 2021 to 2024. This targeted promotion, largely unregulated, increases young people's desire to consume alcohol, raising significant public health concerns.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions targeted algorithms used to promote alcohol content on social media, which can be reasonably inferred to involve AI systems for content recommendation and targeting. The exposure of young people to such advertising is linked to increased desire to consume alcohol, which is a health harm and social harm. The AI system's role in enabling this targeted exposure is pivotal in the chain of harm. Thus, this qualifies as an AI Incident due to indirect harm caused by AI-enabled targeted advertising promoting alcohol to minors.[AI generated]

























