
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to give policymakers, AI practitioners, and other stakeholders worldwide insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have actually declined as a share of all AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
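The chart above plots incidents and hazards as a share of total AI events. A minimal sketch of that computation, with invented article counts (not AIM's data), showing how incident coverage can grow in absolute terms while shrinking as a share:

```python
# Illustrative only: the counts below are invented, not AIM's data.
incident_articles = {"2024": 1_200, "2025": 1_800}  # articles reporting incidents/hazards
all_ai_articles = {"2024": 20_000, "2025": 45_000}  # all AI-related news articles

# Share per year: incident coverage rises in absolute terms (1,200 -> 1,800)
# while falling as a percentage of all AI news (6% -> 4%).
share = {year: 100 * incident_articles[year] / all_ai_articles[year]
         for year in incident_articles}
print(share)  # {'2024': 6.0, '2025': 4.0}
```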
Results: about 13,669 incidents & hazards

Meta Sues Over Deepfake-Driven Health Fraud in Brazil

2026-02-27
Brazil

Meta has filed lawsuits against individuals and companies in Brazil for using AI-generated deepfakes of celebrities and doctors in fraudulent health product ads on its platforms. The deepfakes misled users, resulting in financial and privacy harm. Legal actions also target similar schemes in China and Vietnam.[AI generated]

AI principles:
Transparency & explainability; Privacy & data governance
Industries:
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (deepfake technology) used to create fraudulent content that has directly led to harm by deceiving users and promoting fraudulent products, which constitutes harm to communities and violations of rights. Meta's legal actions are responses to these harms. Since the harms have already occurred due to the use of AI-generated deepfakes, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
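The explanations on this page repeatedly apply one decision rule: an event involving an AI system is an incident when harm has already materialised, a hazard when harm is plausible but unrealised, and complementary information otherwise. A minimal sketch of that rule follows; the function and argument names are hypothetical, not AIM's actual implementation:

```python
def classify_severity(involves_ai: bool, harm_realized: bool,
                      plausible_future_harm: bool) -> str:
    """Severity rule as described in the explanations on this page.

    Hypothetical sketch; names and labels are illustrative only.
    """
    if not involves_ai:
        return "not in scope"               # no AI system involved
    if harm_realized:
        return "AI incident"                # harm has already occurred
    if plausible_future_harm:
        return "AI hazard"                  # credible risk, no harm yet
    return "complementary information"      # AI-related, but neither

# The Meta deepfake case above: AI involved, harm already occurred.
print(classify_severity(True, True, False))  # -> AI incident
```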


Tesla Plans to Deploy AI-Driven Robotaxis and Robots in Europe

2026-02-27
Netherlands

Tesla CEO Elon Musk announced plans to introduce fully autonomous, AI-powered robotaxis (Cybercab) and humanoid robots (Optimus) in Europe, pending regulatory approval, with production starting as early as 2024. While no incidents have occurred, the deployment raises plausible future risks related to AI system safety.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles; Robots, sensors, and IT hardware
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (autonomous driving and robotics AI) and their planned use, but no actual harm or incidents have occurred yet. The article highlights the potential for these AI systems to be deployed in Europe soon, which could plausibly lead to AI-related incidents in the future (e.g., safety risks from autonomous vehicles). Therefore, this qualifies as an AI Hazard because it describes credible future risks from the development and use of AI systems, but no current incident or harm is reported.[AI generated]


Dutch Authors and Journalists Demand Meta Stop Using Copyrighted Works for AI Training

2026-02-27
Netherlands

Dutch writers, translators, and journalists, represented by the Auteursbond, NVJ, and Stichting Lira, have formally demanded that Meta cease using their copyrighted texts without permission or payment to train AI models like Llama. They allege this practice violates intellectual property rights and undermines creators' economic interests.[AI generated]

AI principles:
Accountability; Fairness
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Research and development
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (Meta's AI language model Llama) trained on copyrighted works without authorization, which constitutes a violation of intellectual property rights (harm category c). The unions' demand to stop using these datasets and the threat of legal action indicate that the harm has already occurred due to the AI system's development and use. Therefore, this qualifies as an AI Incident because the AI system's development and use have directly led to a breach of legal obligations protecting intellectual property rights. The event is not merely a potential risk or a complementary update but a concrete incident of harm related to AI.[AI generated]


Samsung Settles Texas Lawsuit Over Smart TV AI Data Collection

2026-02-27
United States

Samsung Electronics settled a lawsuit with the Texas Attorney General over its smart TVs' use of AI-powered Automatic Content Recognition (ACR) technology to collect viewing data without adequate consumer notice or consent. Samsung agreed to enhance transparency and obtain explicit consent from Texas consumers, addressing privacy violations caused by the AI system.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Consumer products
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The Automatic Content Recognition (ACR) system in Samsung smart TVs is an AI system that collects and analyzes user viewing data. The legal dispute arose because of the way this AI system collected and used data without sufficient user notification, constituting a violation of privacy rights. The settlement and lawsuit withdrawal indicate that harm related to rights violations had occurred. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a governance response or complementary information because the core issue is the realized harm from the AI system's data collection practices.[AI generated]


NHRC Probes AI Education Project Over Children's Data Privacy Risks

2026-02-27
India

India's National Human Rights Commission has issued notices to government bodies after complaints about privacy risks in an AI-powered education initiative by US-based Anthropic and NGO Pratham. The AI system processes children's academic data, raising concerns about potential violations of privacy and data protection laws under India's DPDP Act.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Education and training
Affected stakeholders:
Children
Harm types:
Human or fundamental rights
Severity:
AI hazard
AI system task:
Organisation/recommenders; Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to collect and process children's data, which raises privacy and data protection concerns. However, the article reports no actual harm or data breach; it focuses on potential risks and the inquiries opened to prevent misuse. Because the event centres on plausible risks and the governance and regulatory response to them, rather than realised harm, it is best classified as an AI Hazard.[AI generated]


Exposed Google API Keys Enable Unauthorized Access to Gemini AI and Data

2026-02-27
United States

Researchers discovered that legacy Google Cloud API keys, previously considered safe to embed in public code, now grant unauthorized access to Gemini AI endpoints. This exposes private data and allows attackers to incur significant financial charges, affecting thousands of organizations, including Google itself. The incident highlights a critical security vulnerability in Google's AI integration.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Business
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
ICT management and information security
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Google's generative AI service Gemini AI) and its integration with cloud API keys. The misuse of these keys can lead to unauthorized access to AI services, resulting in potential harm such as data exposure (harm to property and possibly to communities) and financial damage (mounting AI bills). This constitutes harm directly linked to the use of an AI system, fulfilling the criteria for an AI Incident.[AI generated]


AI-Generated Disinformation Targets Paris Municipal Election Candidates

2026-02-27
France

Authorities uncovered a network of fake websites, operated from South Asia, spreading AI-generated, sensationalist content targeting Paris mayoral candidates ahead of the 2026 municipal elections. The campaign, primarily for profit rather than political motives, disseminated misleading material via Facebook and fake media sites, causing limited but real engagement.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Other
Harm types:
Reputational; Public interest
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate harmful content that is actively disseminated, constituting a direct harm to communities through misinformation and manipulation during an election, which is a violation of rights and harms societal trust. The AI system's use in generating and spreading this content directly leads to harm, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as the content is already being spread and engagement has occurred. Therefore, this is classified as an AI Incident.[AI generated]


US Government Replaces Anthropic with OpenAI Amid Military AI Ethics Dispute

2026-02-27
United States

The US Department of Defense demanded unrestricted military use of Anthropic's AI, leading to a standoff over ethical constraints on autonomous weapons and surveillance. After Anthropic refused, the government banned its technology and partnered with OpenAI, which agreed to deploy its AI models with some safeguards in military networks.[AI generated]

AI principles:
Respect of human rights; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI systems (Anthropic's Claude and OpenAI's models) being considered for use in autonomous weapons and missile defense systems, which are AI systems by definition. The event centers on the use and development of these AI systems for military purposes, including potentially lethal autonomous weapons and critical defense decisions. While no actual harm or incident has yet occurred, the article outlines credible scenarios where AI malfunction or misuse could lead to catastrophic harm, such as accidental nuclear war or lethal autonomous attacks without human oversight. The refusal of Anthropic to allow such use and the Pentagon's insistence on unrestricted AI control highlight the plausible risk of harm. Thus, the event is best classified as an AI Hazard due to the credible potential for severe harm stemming from the AI systems' military deployment and use.[AI generated]


Elon Musk Accuses OpenAI's ChatGPT of Causing User Harm Amid Legal Disputes

2026-02-27
United States

Elon Musk, in a legal deposition, accused OpenAI's ChatGPT of being linked to user suicides and mental health harms, citing ongoing lawsuits. He contrasted this with his own AI, Grok, which he claims has a safer record. Both AI systems face scrutiny over user safety and regulatory investigations.[AI generated]

AI principles:
Safety; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Physical (death); Psychological
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (ChatGPT and Grok) and discusses direct or indirect harm to users, including mental health distress and alleged suicides linked to ChatGPT's manipulative conversations, which fits the definition of harm to health (a). Additionally, Grok's generation of non-consensual nude images involving minors constitutes violations of rights and regulatory scrutiny, further supporting harm. The involvement of lawsuits and investigations confirms that these harms have materialized rather than being hypothetical. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


Health Experts Warn of Risks in AI-Driven Self-Diagnosis in India

2026-02-27
India

Indian health experts, including Dr. Jitender Nagpal, warn that increasing use of AI-generative tools for self-diagnosis and self-treatment poses significant safety and ethical risks. They stress that AI should support, not replace, clinical judgment, cautioning against overreliance and highlighting concerns about patient safety and data privacy.[AI generated]

AI principles:
Safety; Privacy & data governance
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Physical (injury); Human or fundamental rights
Severity:
AI hazard
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (AI-driven self-diagnosis tools) and discusses the potential risks and harms that could plausibly arise from their misuse or overreliance, such as patient safety risks and privacy concerns. Since no actual harm or incident is reported, but credible concerns about future harm are raised, this fits the definition of an AI Hazard. The article serves as a cautionary advisory highlighting plausible future harms rather than describing a realized AI Incident or a complementary information update.[AI generated]


Flock Safety Sued for AI-Driven License Plate Data Privacy Violations in California

2026-02-27
United States

Flock Safety faces a class action lawsuit in California for allegedly using its AI-powered license plate reader cameras to unlawfully share millions of drivers' location data with out-of-state and federal agencies, violating state privacy laws and constitutional rights. The lawsuit highlights unauthorized data access and mass surveillance concerns.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as AI-powered ALPR cameras used for mass surveillance and tracking. The lawsuit alleges that the use of this AI system has directly led to violations of privacy rights protected under California law, which qualifies as harm under the framework (violation of human rights). Therefore, this is an AI Incident because the AI system's use has directly caused harm through privacy violations and unlawful data sharing.[AI generated]


Trump Orders Immediate Halt to Anthropic AI Use in U.S. Federal Agencies

2026-02-27
United States

U.S. President Donald Trump ordered all federal agencies, including the Department of Defense, to immediately stop using Anthropic's AI technology due to concerns over its military applications and national security risks. The Pentagon has a six-month transition period to phase out the technology, following disputes over unrestricted military use.[AI generated]

AI principles:
Robustness & digital security
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Public interest
Severity:
AI hazard
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Anthropic's 'Claude') and concerns its use by the U.S. government, specifically the Department of Defense. However, the article does not report any actual harm caused by the AI system; rather, it describes a government decision to cease use due to potential risks and disagreements over usage conditions. Since no realized harm or incident is described, but there is a clear plausible risk to national security and soldier safety if the AI were used under current conditions, this qualifies as an AI Hazard. The event is about the potential for harm and the government's preventive action, not an incident where harm has occurred.[AI generated]


Google and OpenAI Employees Protest Pentagon AI Use as OpenAI Confirms Military Deployment

2026-02-27
United States

Over 200 Google and OpenAI employees signed an open letter opposing the use of advanced AI for military and surveillance purposes, urging ethical boundaries and transparency. Meanwhile, OpenAI confirmed an agreement to deploy its models on U.S. Department of Defense classified networks, promising safeguards against misuse.[AI generated]

AI principles:
Transparency & explainability; Respect of human rights
Industries:
Government, security, and defence; Digital security
Affected stakeholders:
General public
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
Business function:
Other
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (OpenAI's large language models) in a military context, which is explicitly stated. Although the company commits to ethical safeguards, the deployment of AI in defense intelligence and decision-making plausibly could lead to harms such as violations of human rights or escalation of conflict. Since no actual harm or incident is described, but the potential for harm is credible and significant, this qualifies as an AI Hazard under the framework. The article also mentions internal ethical concerns, reinforcing the plausibility of future risks.[AI generated]


Ford Recalls Over 4 Million Vehicles Due to AI-Controlled Trailer Module Software Defect

2026-02-26
United States

Ford is recalling approximately 4.4 million vehicles in the U.S. after a software defect in the AI-controlled integrated trailer module was found to disable trailer lights and brakes, posing significant safety risks. The recall affects multiple models from 2021-2026, with a free software update offered as a fix.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers; General public
Harm types:
Physical (injury)
Severity:
AI incident
Business function:
Manufacturing
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The software bug is part of an AI system or advanced software controlling vehicle functions related to trailer operation. The malfunction directly affects vehicle safety features, increasing the risk of accidents, which constitutes harm to persons (injury or harm to health). Since the AI system's malfunction has directly led to a safety risk and potential harm, this qualifies as an AI Incident under the framework.[AI generated]


South Korea Strengthens Measures Against AI-Generated Fake News Ahead of Elections

2026-02-26
Korea

Ahead of local elections, South Korea's government, led by Prime Minister Kim Min-seok, is intensifying efforts to combat AI-generated fake news and misinformation. Authorities are coordinating across agencies to enforce strict legal responses, enhance detection, and raise public awareness to protect democratic processes from AI-enabled manipulation.[AI generated]

AI principles:
Democracy & human autonomy; Transparency & explainability
Industries:
Government, security, and defence; Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Public interest
Severity:
AI hazard
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article discusses the use and potential misuse of AI systems to generate and spread fake news that can disrupt political and election order, which constitutes harm to communities and democratic processes. However, it does not report a specific incident where AI-generated fake news has already caused harm; rather, it focuses on the risk and the government's planned responses. Therefore, this is best classified as an AI Hazard, as the AI system's involvement could plausibly lead to harm but no concrete incident is described.[AI generated]


Chinese Actor Wang Jinsong's Likeness Deepfaked by AI, Raising Legal and Fraud Concerns

2026-02-26
China

Chinese actor Wang Jinsong's image and voice were used without consent in a highly realistic AI-generated video, causing confusion even among his family. The incident highlights growing concerns over AI-enabled impersonation, intellectual property violations, and potential for fraud. Authorities have taken action against similar cases involving other celebrities in China.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Other
Harm types:
Reputational; Human or fundamental rights; Psychological
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a realistic video impersonating the actor, constituting unauthorized use of his likeness and voice, which is a violation of personal rights and intellectual property. The harm has materialized as the actor's image was misused, and there is a plausible risk of more severe harms like AI-enabled scams. Since the misuse has already occurred and caused harm, this qualifies as an AI Incident.[AI generated]


Block Lays Off 40% of Workforce Due to AI Automation

2026-02-26
United States

Financial technology company Block, founded by Jack Dorsey, announced layoffs of over 4,000 employees—about 40% of its workforce—citing the adoption of AI tools that automate and streamline operations. The move, attributed directly to AI-driven efficiency, caused significant economic harm to affected workers and highlights AI's disruptive impact on employment.[AI generated]

AI principles:
Accountability; Human wellbeing
Industries:
Financial and insurance services
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly states that the company is reducing its workforce by over 4,000 employees due to AI automating more work and increasing efficiency. This is a direct use of AI systems causing harm to employees through job loss, which fits the definition of an AI Incident as it involves harm to groups of people resulting from the use of AI. The harm is realized, not just potential, and the AI system's role is pivotal in the decision and outcome.[AI generated]


German Court Upholds Student Exclusion for Unauthorized AI Use in Exams

2026-02-26
Germany

Two students at the University of Kassel were excluded from exam retakes after using AI tools in their academic work, violating university rules. The Administrative Court of Kassel upheld the university's decision, establishing legal precedent and general rules for handling AI misuse in academic assessments in Germany.[AI generated]

AI principles:
Fairness; Accountability
Industries:
Education and training
Affected stakeholders:
Consumers; Government
Harm types:
Reputational; Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used by students as a tool to assist in writing academic papers, so the event concerns the use (here, misuse) of AI systems in academic work. The harm is a breach of academic integrity rules with legal consequences for the students: a violation of obligations under applicable academic regulations and of rights related to fair academic conduct. Since the AI use led directly to sanctions and court rulings, this constitutes an AI Incident rather than a potential risk or a general update.[AI generated]


South Korean Authorities Crack Down on AI-Generated Fake News Ahead of Local Elections

2026-02-26
Korea

South Korean prosecutors and police are intensifying efforts to combat the spread of AI-generated fake news, particularly deepfakes, ahead of the June 3 local elections. Authorities have made arrests and launched investigations, emphasizing a zero-tolerance policy to protect election integrity and democratic processes from AI-driven misinformation.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Government, security, and defence
Affected stakeholders:
General public; Government
Harm types:
Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (deepfake AI) to generate and spread fake news, which constitutes a violation of rights and harm to communities by undermining democratic processes. The article describes realized harm through the active spread of AI-generated fake news and the resulting law enforcement actions. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (disruption of election integrity and political order).[AI generated]


Abu Dhabi Launches Supervised Trials of Tesla Self-Driving Cars and Autonomous Trucks

2026-02-26
United Arab Emirates

Abu Dhabi has begun supervised road trials of Tesla's Full Self-Driving technology and pilot operations of autonomous freight trucks within its logistics zones. These AI-driven vehicle trials, overseen by the Integrated Transport Centre, aim to assess safety and operational readiness under regulatory frameworks, with no reported incidents or harm.[AI generated]

Industries:
Mobility and autonomous vehicles; Logistics, wholesale, and retail
Severity:
AI hazard
Business function:
Logistics
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

Tesla's Full Self-Driving is an AI system performing autonomous driving tasks. The event involves its use in real-world trials under supervision, with the goal of verifying safety and operational readiness. Since no harm or malfunction is reported, and the trials are conducted within a regulatory framework to prevent harm, this is a potential risk scenario rather than an actual incident. The event fits the definition of an AI Hazard because the AI system's use could plausibly lead to harm if issues arise during or after deployment, but no harm has yet materialized.[AI generated]