
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have in fact declined as a share of all AI news coverage (see the chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 4,461 incidents & hazards

Safety Concerns Over Tesla's Self-Driving Software in Ride-Hailing Services

2024-09-30

A self-driving Tesla used for Uber collided with an SUV in Las Vegas, raising safety concerns about autonomous ride-hailing services. Additionally, Tesla's 'Full Self-Driving' software in a Cybertruck attempted to drive onto a median, highlighting potential malfunctions in AI systems that could lead to harm.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers
Harm types:
Physical (injury); Economic/Property; Reputational
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Forecasting/prediction
Why is our monitor labelling this an incident or hazard?

Tesla’s FSD is an AI-driven driver-assist system whose malfunction (failure to register an SUV in a blind spot) directly contributed to a collision with minor injuries and a totaled car. The incident demonstrates realized harm from an AI system in use, making this an AI Incident.[AI generated]


California Enacts Laws Against AI-Generated Child Sexual Abuse Imagery

2024-09-30

California Governor Gavin Newsom signed bipartisan bills criminalizing the creation, possession, and distribution of AI-generated child sexual abuse images and deepfake nudes. The new laws close legal loopholes, making such content illegal even if not depicting real individuals, in response to increasing misuse of generative AI tools.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Media, social platforms, and marketing; Government, security, and defence
Affected stakeholders:
Children
Harm types:
Human or fundamental rights; Psychological; Reputational
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The event involves AI systems explicitly, specifically AI tools used to generate harmful sexual imagery and deepfakes. The misuse of these AI systems has directly led to harms including violations of human rights and sexual exploitation of minors and adults, which are clear harms under the AI Incident definition. The laws are a response to realized harms caused by AI-generated content, not just potential future harms. Therefore, this event qualifies as an AI Incident because it concerns the direct use of AI systems to produce illegal and harmful content affecting individuals' rights and safety.[AI generated]


Arkansas Sues YouTube Over AI-Driven Addiction and Youth Mental Health Harm

2024-09-30

Arkansas has sued YouTube, Google, and Alphabet, alleging that YouTube's AI-powered recommendation algorithms are deliberately designed to be addictive, causing mental health issues among youth. The lawsuit claims this has led to increased state spending on mental health services, while the companies deny the allegations.[AI generated]

AI principles:
Human wellbeing; Safety
Industries:
Media, social platforms, and marketing; Government, security, and defence
Affected stakeholders:
Children; Government
Harm types:
Psychological; Economic/Property; Public interest
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why is our monitor labelling this an incident or hazard?

YouTube's recommendation system is an AI system that influences user engagement by steering users towards certain content. The lawsuit claims this system amplifies harmful material and drives addictive behavior among youth, leading to mental health harms and exposure to harmful content. These harms fall under injury or harm to health and harm to communities. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a governance response but describes ongoing harm caused by the AI system's operation.[AI generated]


AI-Enabled KARGU Drone Successfully Destroys Armored Target with Armor-Piercing Warhead

2024-09-30

The Turkish-developed AI-powered KARGU drone, equipped with an armor-piercing warhead, autonomously identified and destroyed an armored vehicle in its first test. Its operational deployment in military and counter-terrorism contexts demonstrates direct harm to property and potential harm to persons, highlighting the risks of lethal autonomous weapon systems.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
Government; General public
Harm types:
Physical (death); Physical (injury); Economic/Property
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The KARGU drone is an AI system as it uses AI and image processing capabilities for autonomous navigation and target engagement. Its use in combat with a new armor-piercing warhead that successfully destroyed a target demonstrates direct harm caused by the AI system's operation. The article explicitly states the system's operational use in military and counter-terrorism contexts, implying realized harm to property and potentially persons. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]


AI-Driven HDMI Cable Data Theft Technique Unveiled

2024-09-29

Researchers from Uruguay's University of the Republic have demonstrated a method to intercept electromagnetic signals from HDMI cables using AI, allowing the recreation of screen images and potential data theft. This AI-driven technique poses a new cybersecurity threat by enabling real-time espionage and access to sensitive information.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Digital security; IT infrastructure and hosting
Affected stakeholders:
Business; Government
Harm types:
Human or fundamental rights; Economic/Property; Reputational
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Recognition/object detection; Content generation
Why is our monitor labelling this an incident or hazard?

While the study demonstrates that AI can be used to recreate screen content (including sensitive credentials) from HDMI emissions, there is no specific, confirmed incident of harm using this exact method. Rather, it outlines a credible future threat—i.e. a novel AI-enabled espionage attack—making it a potential hazard rather than a realized incident or a mere background update.[AI generated]


Australian Politician Warns of AI-Enabled Security Risks in Chinese Electric Vehicles

2024-09-29

Nationals MP Barnaby Joyce urged Australia to consider banning Chinese-made electric vehicles, citing fears that AI-enabled features like remote software updates and tracking could be weaponized for malicious purposes. No actual incident has occurred, but concerns center on potential national security and privacy risks.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Mobility and autonomous vehicles; Digital security
Affected stakeholders:
Consumers; General public
Harm types:
Public interest; Human or fundamental rights
Severity:
AI hazard
Business function:
Maintenance
AI system task:
Recognition/object detection; Other
Why is our monitor labelling this an incident or hazard?

The article involves AI systems implicitly through the technology in electric vehicles and solar inverters that include software capable of remote updates, tracking, and control, which can be reasonably inferred to involve AI or advanced algorithmic systems. The concerns raised relate to the potential misuse or malicious use of these AI-enabled systems to cause harm such as disruption or privacy violations. However, no actual incident or harm has occurred; the fears are about plausible future misuse. Therefore, this qualifies as an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident involving national security or privacy harms, but no direct or indirect harm has yet materialized.[AI generated]


Waymo Autonomous Vehicle Disrupts Kamala Harris Motorcade

A Waymo autonomous vehicle stalled during a U-turn, blocking Vice President Kamala Harris' motorcade in San Francisco. Police intervened to move the vehicle, highlighting ongoing issues with Waymo cars causing traffic disruptions. This incident raises concerns about the reliability of AI systems in managing urban traffic effectively.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles; Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Public interest; Reputational; Economic/Property
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The event involves a deployed AI system (a Waymo driverless car) whose malfunction directly led to harm: the stalled vehicle obstructed a vice-presidential motorcade and disrupted traffic until police intervened. This disruption to public order and to government operations qualifies the event as an AI Incident.[AI generated]


Students Use AI to Create Obscene Image of Teacher

2024-09-28

Two class IX students in Moradabad, UP, have been booked for using AI tools to create and post a morphed obscene image of their female teacher on social media. The incident led to an FIR under the IT Act, and police are investigating while working to remove the image from the internet.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Education and training; Media, social platforms, and marketing
Affected stakeholders:
Women; Workers
Harm types:
Reputational; Psychological; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The article describes a realized harm caused by the misuse of AI tools: two minors used AI to create and post a fake obscene image of their teacher, infringing her rights and causing reputational and psychological harm. This constitutes a direct AI Incident under the framework.[AI generated]


Scania and Fortescue Partner to Develop Autonomous Mining Road Trains

2024-09-27

Scania and Fortescue are collaborating to develop, test, and validate AI-powered autonomous road trains for mining operations in Queensland, Australia. The project aims to improve sustainability and efficiency, but introduces potential future safety risks associated with deploying autonomous heavy vehicles in demanding mining environments.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Energy, raw materials, and utilities; Robots, sensors, and IT hardware
Affected stakeholders:
Workers
Harm types:
Physical (injury); Physical (death); Economic/Property
Severity:
AI hazard
Business function:
Logistics
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Reasoning with knowledge structures/planning
Why is our monitor labelling this an incident or hazard?

The article describes the development and planned validation of an autonomous AI system for mining road trains, which qualifies as an AI system. However, no injury, disruption, rights violation, or other harm caused by the system is reported. Because the system's development and prospective use could plausibly lead to future harm (e.g., safety risks from autonomous heavy vehicles in demanding mining environments) but no harm has yet occurred, the event fits the definition of an AI Hazard rather than an AI Incident.[AI generated]


AI-Generated Fake Soldier Images Used in Russian Disinformation Campaigns Against Ukraine

2024-09-27

AI-generated images of supposed Ukrainian soldiers and civilians are being spread on social media by bots to manipulate emotions, boost engagement, and facilitate Russian disinformation and fraud. Ukrainian authorities warn these posts are part of an information warfare campaign, exploiting public trust and enabling harmful narratives and scams.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
General public
Harm types:
Public interest; Reputational; Psychological
Severity:
AI incident
AI system task:
Content generation; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated photos being used to spread false narratives and manipulate social media users, which is a direct use of AI systems (generative AI) leading to harm in the form of misinformation and potential fraud. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and potential violations of rights through manipulation and fraud. The involvement of bots spreading these posts further supports the AI system's role in causing harm.[AI generated]


AI-Generated Fake Messages Used in Extortion Scheme

2024-09-27

In Eskişehir, businessman Hikmet Öztürk was accused by S.M. of sending threatening WhatsApp messages. Öztürk claims the messages were AI-generated fakes and that S.M. demanded 100,000 lira to drop the case. This incident highlights the use of AI in creating false evidence for extortion.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
Other
Harm types:
Economic/Property; Reputational
Severity:
AI incident
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The article describes the actual use of an AI system to fabricate WhatsApp conversations, falsely accuse and threaten a victim, and extort funds. This misuse directly caused psychological harm, harassment, and fraud, meeting the definition of an AI Incident.[AI generated]


AI-Generated Avatar Used in YouTube Cryptocurrency Scam

2024-09-26

Ranveer Allahbadia's YouTube channels were hacked and rebranded as 'Tesla' in a cyberattack. Hackers used an AI-generated Elon Musk avatar to promote a cryptocurrency scam, urging viewers to invest in Bitcoin or Ethereum. All videos were deleted, and the channels were removed from YouTube.[AI generated]

AI principles:
Accountability; Privacy & data governance
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Reputational
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The incident involved active misuse of AI (an AI-generated deepfake avatar) to perpetrate a cryptocurrency scam, directly causing potential financial harm to viewers. Since an AI system’s use directly led to real-world harm, this qualifies as an AI Incident.[AI generated]


US Senator Targeted by Deepfake Impersonating Ukrainian Official

Senator Ben Cardin was targeted by a deepfake scammer posing as former Ukrainian Foreign Minister Dmytro Kuleba during a Zoom call. The deepfake technology convincingly mimicked Kuleba's appearance and voice, raising security concerns. Cardin became suspicious when the impersonator asked uncharacteristic questions, prompting Senate security to issue warnings.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Government, security, and defence; Digital security
Affected stakeholders:
Government
Harm types:
Reputational; Public interest
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The event describes an actual malicious use of AI-driven deepfake technology to deceive a senator’s office, posing national security and political risks. This is a direct harm scenario where AI-generated content was used to influence and extract sensitive information, fitting the definition of an AI Incident.[AI generated]


ECB President Warns of AI-Induced Financial System Risks

2024-09-26

European Central Bank President Christine Lagarde warned that AI and new technologies pose potential risks and vulnerabilities to the financial system, urging macroprudential policies to adapt. While no specific incident has occurred, Lagarde highlighted the need for effective risk mitigation as AI adoption accelerates in finance.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Financial and insurance services; Government, security, and defence
Harm types:
Public interest; Economic/Property
Severity:
AI hazard
Why is our monitor labelling this an incident or hazard?

The article explicitly discusses AI technologies and their potential to create new vulnerabilities in the financial system, which could plausibly lead to harm such as financial instability or systemic risk. No actual harm or incident is reported; the focus is on warnings and the need for policy adaptation. This fits the definition of an AI Hazard, as it concerns plausible future harm from AI use in finance. It is not Complementary Information because it is not updating or responding to a past incident, nor is it unrelated as it clearly involves AI systems and their impact.[AI generated]


AI-Driven Demand Predicted to Cause New Semiconductor Shortage

2024-09-26

Consulting firm Bain & Company warns that surging demand for GPUs and chips by tech giants for AI data centers may soon trigger a major semiconductor shortage. Experts highlight that the complex supply chain and rapid AI growth could disrupt chip availability, risking economic and operational impacts across industries.[AI generated]

AI principles:
Accountability; Fairness
Industries:
IT infrastructure and hosting; Logistics, wholesale, and retail
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI hazard
Why is our monitor labelling this an incident or hazard?

The event involves AI systems indirectly through the demand for AI hardware but does not describe any realized harm or incident caused by AI system development, use, or malfunction. The report warns of a plausible future supply shortage due to AI-driven demand, which could lead to economic or operational disruptions if it occurs. However, since no actual harm or incident has yet occurred, and the article primarily provides an analysis and warning about potential future risks, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Political Consultant Fined for AI-Generated Biden Robocalls

2024-09-26

The FCC fined political consultant Steven Kramer $6 million for using AI to create fake robocalls mimicking President Biden's voice, urging New Hampshire voters not to vote in the Democratic primary. The calls, intended to highlight AI's potential dangers, led to charges of voter suppression and impersonation.[AI generated]

AI principles:
Transparency & explainability; Respect of human rights
Industries:
Government, security, and defence; Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Public interest; Human or fundamental rights; Reputational
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The misuse of AI-generated voice cloning for robocalls to mislead and suppress voters constitutes a clear case where an AI system’s use directly led to harm (voter suppression, violation of election integrity). This is not merely a warning or update, but an actual incident of AI-caused harm.[AI generated]


Google Gemini for Workspace Exposed to Indirect Prompt Injection Attacks

2024-09-26

Researchers found Google's Gemini for Workspace AI assistant vulnerable to indirect prompt injection, allowing attackers to embed malicious instructions in emails or documents. This can manipulate AI outputs, potentially leading to phishing attacks or misleading messages. Google was notified but dismissed the issue as intended behavior, leaving users at risk.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Digital security; IT infrastructure and hosting
Affected stakeholders:
Workers; Business
Harm types:
Economic/Property; Reputational; Psychological
Severity:
AI incident
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbots; Content generation
Why is our monitor labelling this an incident or hazard?

The event involves an AI system (Google Gemini for Workspace) that is susceptible to indirect prompt injection attacks, which manipulate the AI's outputs in ways that could mislead users or facilitate phishing attacks. This constitutes a direct or indirect cause of harm to users' security and privacy, fitting the definition of an AI Incident due to violations of user rights and potential harm to individuals. The fact that the vulnerability has been exploited in proofs-of-concept and reported to Google, with no remediation, confirms the realized risk rather than a mere potential hazard.[AI generated]


AI-Driven Social Media Algorithms Promote Alcohol to French Youth, Raising Health Concerns

2024-09-26

French associations report that AI-powered algorithms on platforms like Instagram and TikTok have exposed youth to over 11,000 alcohol-promoting posts from 2021 to 2024. This targeted promotion, largely unregulated, increases young people's desire to consume alcohol, raising significant public health concerns.[AI generated]

AI principles:
Safety; Human wellbeing
Industries:
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology
Affected stakeholders:
Children
Harm types:
Psychological; Public interest
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why is our monitor labelling this an incident or hazard?

The article explicitly mentions targeted algorithms used to promote alcohol content on social media, which can be reasonably inferred to involve AI systems for content recommendation and targeting. The exposure of young people to such advertising is linked to increased desire to consume alcohol, which is a health harm and social harm. The AI system's role in enabling this targeted exposure is pivotal in the chain of harm. Thus, this qualifies as an AI Incident due to indirect harm caused by AI-enabled targeted advertising promoting alcohol to minors.[AI generated]