
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents have drawn growing media attention, they have actually declined as a share of all AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: about 13,993 incidents & hazards

South Korea Plans AI-Based National Emergency Response System

2026-03-18
Korea

South Korea's National Fire Agency and KT consortium have begun designing an AI and cloud-based next-generation 119 emergency response system. The project aims to unify regional systems, automate emergency call analysis, and enhance disaster response nationwide, but no AI-related harm or malfunction has occurred yet.[AI generated]

Industries:
Government, security, and defence; IT infrastructure and hosting
Severity:
AI hazard
Business function:
Citizen/customer service
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Event/anomaly detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the development and intended use of an AI system for emergency response, which could plausibly lead to significant impacts on public safety. However, since the system is still in the design and planning phase and no harm or malfunction has occurred, it represents a potential future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the AI system's use could plausibly lead to harm if issues arise during deployment or operation, but no harm has yet materialized.[AI generated]
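The rationale above applies a decision rule that recurs throughout these entries: an event with AI involvement counts as an incident once harm has materialized, and as a hazard while harm is only plausible. A minimal sketch of that rule, as a hypothetical helper (not the monitor's actual implementation):

```python
def classify_event(ai_involved: bool, harm_occurred: bool, harm_plausible: bool) -> str:
    """Hypothetical sketch of the incident-vs-hazard distinction in AIM rationales.

    An event qualifies as an AI incident when the AI system's use has already
    led to harm, and as an AI hazard when harm is plausible but not yet realized.
    """
    if not ai_involved:
        return "not AI-related"
    if harm_occurred:
        return "AI incident"
    if harm_plausible:
        return "AI hazard"
    return "complementary information"

# The emergency-response system above: AI is involved, no harm has
# occurred yet, but deployment could plausibly cause harm.
print(classify_event(ai_involved=True, harm_occurred=False, harm_plausible=True))
# -> AI hazard
```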


UK Regulator Bans AI App Ad for Promoting Non-Consensual Nudification

2026-03-18
United Kingdom

The UK Advertising Standards Authority banned a YouTube ad for PixVideo AI Video Maker, which implied users could digitally remove women's clothing. The ad was deemed offensive, irresponsible, and harmful, promoting sexualisation and objectification of women through AI-powered image manipulation. Eight complaints prompted regulatory action.[AI generated]

AI principles:
Respect of human rights; Fairness
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Psychological; Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI system (PixVideo) is involved in the use of AI to alter images, specifically with the potential to remove clothing digitally. The ad's implication that users could do this without consent directly relates to violations of rights (privacy, dignity) and harm to communities (gender-based harm and stereotypes). The complaints and regulatory response indicate that harm has occurred or is ongoing, fulfilling the criteria for an AI Incident. The involvement of AI in the app's functionality and the resulting harm from its use and promotion justify classification as an AI Incident rather than a hazard or complementary information.[AI generated]


AI-Generated Fraudulent Messages Target Citizens Ahead of Holiday

2026-03-18
Türkiye

Turkey's Dezenformasyonla Mücadele Merkezi (Centre for Combating Disinformation, DMM) warned citizens about a rise in AI-generated fraudulent messages on social media and messaging apps ahead of the holiday. Scammers use AI to impersonate trusted contacts or institutions, directing users to fake links that steal personal and financial information.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Digital security; Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI-generated content in fraudulent messages that lead to harm by attempting to steal personal and financial data from individuals. This constitutes harm to people through deception and potential financial injury, directly linked to the use of AI systems generating such content. Therefore, this qualifies as an AI Incident because the AI system's use in generating deceptive content has directly led to harm or attempted harm.[AI generated]


Military Use of AI Sparks International Concerns and Ethical Disputes

2026-03-18
United States

Anthropic and OpenAI have faced disputes with the US military over the use of their AI models, including Claude, in autonomous weapons and surveillance. China warned of ethical risks as AI is used in military operations, raising concerns about loss of human control and potential harm.[AI generated]

AI principles:
Respect of human rights; Democracy & human autonomy
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury); Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (Anthropic's Claude and OpenAI's ChatGPT) used by the U.S. military in weapons and operations, with disputes over their use in autonomous weapons and surveillance. It also references actual military actions by Israel using advanced AI applications causing physical and human harm. These facts demonstrate direct involvement of AI systems in causing harm to people and communities, fulfilling the criteria for an AI Incident. The political and ethical disputes and contract changes are contextual but do not negate the realized harm from AI use in military conflict. Hence, the classification as AI Incident is appropriate.[AI generated]


BMG Sues Anthropic Over AI Training With Copyrighted Song Lyrics

2026-03-18
United States

BMG Rights Management has sued AI company Anthropic in California, alleging its Claude chatbot was trained on and reproduces copyrighted song lyrics from artists like Bruno Mars and the Rolling Stones without authorization. The lawsuit claims direct copyright infringement by Anthropic's AI system, impacting music industry rights holders.[AI generated]

AI principles:
Accountability; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots; Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (large language models) in its development phase (training) where copyrighted material was allegedly used without permission, constituting a violation of intellectual property rights. This is a direct legal claim of harm (copyright infringement) caused by the AI system's development and use. Therefore, it qualifies as an AI Incident under the category of violations of intellectual property rights.[AI generated]


Sony Removes 135,000 AI-Generated Deepfake Songs Impersonating Artists

2026-03-18
United States

Sony Music requested the removal of over 135,000 AI-generated deepfake songs impersonating its artists, including Beyoncé, Queen, and Harry Styles, from streaming platforms. These deepfakes caused direct commercial harm, violated intellectual property rights, and risked damaging artists' reputations, especially during new album releases.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business; Workers
Harm types:
Economic/Property; Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating fake music content that is used to fraudulently boost streaming counts and royalties, directly harming legitimate artists economically and violating their intellectual property rights. This meets the definition of an AI Incident because the AI system's use has directly led to harm (economic and rights violations). The discussion of detection tools and transparency relates to responses but does not overshadow the primary incident of harm caused by AI-generated fraudulent content.[AI generated]


Student Faces Trial for AI-Generated Sexual Images of Schoolmates in Córdoba

2026-03-18
Argentina

A student in Córdoba, Argentina, used AI to create and publish manipulated sexual images of female classmates, including minors, on adult websites, identifying them by name and linking to their social media. The victims suffered psychological harm and privacy violations, prompting a criminal trial for gender-based violence.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women; Children
Harm types:
Psychological; Reputational; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI tools to create false sexual images by placing victims' faces on nude bodies, which were then published online with identifying information. This use of AI directly caused harm to the victims, including psychological injury and violation of their rights, fitting the definition of an AI Incident under violations of human rights and harm to persons. Therefore, this event qualifies as an AI Incident.[AI generated]


AI Targeting Error Leads to Civilian Deaths in Iran

2026-03-18
Iran

On February 28, 2026, a US military AI system, reportedly Claude, caused a fatal targeting error during a missile strike in Minab, Iran, hitting a girls' school and killing 165–180 civilians. The incident highlights the risks of AI use in warfare and the consequences of outdated data and maps.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
Children; General public
Harm types:
Physical (death)
Severity:
AI hazard
AI system task:
Reasoning with knowledge structures/planning; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (Anthropic's AI models) and concerns their use in sensitive military and security domains. The conflict arises from the company's refusal to permit unrestricted military use, which could plausibly lead to harms related to autonomous weapons or mass surveillance if such AI systems were used without ethical constraints. Although no direct harm or incident has occurred, the dispute and exclusion reflect a credible risk scenario about AI deployment in critical infrastructure and defense, fitting the definition of an AI Hazard. The article does not report any realized injury, rights violation, or disruption caused by the AI systems, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the core focus is on the potential risks and governance challenges of AI use in military contexts.[AI generated]


Chicken Soup for the Soul Publisher Sues Tech Giants Over AI Copyright Infringement

2026-03-18
United States

Chicken Soup for the Soul publisher filed a lawsuit in California federal court against Apple, Google, Meta, OpenAI, Anthropic, Nvidia, Perplexity AI, and xAI, alleging their AI systems were trained on pirated copies of its books without permission, constituting mass copyright infringement and unauthorized use of proprietary content.[AI generated]

AI principles:
Accountability; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (large language models/chatbots) trained using unauthorized copyrighted content, which is a direct violation of intellectual property rights, a recognized harm under the AI Incident definition. The lawsuit indicates that the harm has already occurred, not just a potential risk. Hence, it meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information. The presence of multiple major AI companies and the explicit mention of AI training using pirated content further supports this classification.[AI generated]


AI-Generated Fake Wedding Photos of Zendaya and Tom Holland Cause Public Confusion

2026-03-17
United States

AI-generated fake wedding photos of Zendaya and Tom Holland circulated online, misleading the public and even close acquaintances. Zendaya addressed the incident on Jimmy Kimmel Live!, revealing that many people believed the images were real, causing confusion and emotional distress among her social circle.[AI generated]

AI principles:
Transparency & explainability; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Other
Harm types:
Psychological
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was involved in generating highly realistic fake images that caused misinformation and confusion among the public, leading to emotional reactions such as anger from people who believed the wedding had occurred. This constitutes harm to communities by spreading false information and misleading people, which fits the definition of an AI Incident. The AI system's use directly led to this harm through the creation and dissemination of deceptive content.[AI generated]


AI Coding Assistants Drive Surge in Secret Leaks on GitHub

2026-03-17
United States

In 2025, AI-assisted coding tools, notably Claude Code, doubled the rate of secret leaks in public GitHub commits compared to human developers. GitGuardian reported a 34% year-over-year increase, with nearly 29 million secrets exposed, escalating security risks for organizations and digital infrastructure worldwide.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Digital security; IT infrastructure and hosting
Affected stakeholders:
Business
Harm types:
Economic/Property; Reputational; Public interest
Severity:
AI incident
Business function:
Research and development
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The report explicitly links AI-assisted code commits to a doubling of secret leak rates, with concrete numbers showing 29 million secrets leaked in 2025 and a 34% year-on-year increase. The AI systems (e.g., Claude Code) are directly involved in generating code that contains exposed credentials, which is a clear violation of security and can lead to harm such as breaches and unauthorized access. The harm is realized and ongoing, not just potential. The involvement of AI in causing or contributing to this harm meets the criteria for an AI Incident, as the AI system's use has directly led to significant harm related to cybersecurity breaches and exposure of sensitive information.[AI generated]


AI-Generated Deepfake Nudes of 18 Minors Spark Investigation in Almería

2026-03-17
Spain

Spanish authorities are investigating the use of the AI application ClothOff to generate fake nude and sexual images of at least 18 underage female students from an Almería institute. The incident, revealed by the provincial cybercrime prosecutor, highlights severe privacy violations and criminal offenses enabled by AI deepfake technology.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Digital security; Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Human or fundamental rights; Psychological
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI application to generate realistic nude images of minors, which is a direct violation of their rights and constitutes child pornography under the law. The AI system's use has directly led to harm (violation of rights and dignity of minors), fulfilling the criteria for an AI Incident. The investigation and legal context confirm the harm has occurred, not just a potential risk. The involvement of AI in generating these images is clear and central to the event, and the harms are significant and clearly articulated.[AI generated]


AI Facial Recognition in São Paulo Leads to Mistaken Arrests

2026-03-17
Brazil

São Paulo's Smart Sampa AI facial recognition system, used by police to identify fugitives via 40,000 cameras, has led to thousands of arrests. However, over 8% of those detained were released due to identification errors, resulting in wrongful arrests and violations of individual rights.[AI generated]

AI principles:
Fairness; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI facial-recognition system for law enforcement, which is an AI system by definition. The system's use has directly caused harm through mistaken arrests and wrongful detentions, which are violations of human rights and legal protections. The harms are realized and documented, not merely potential. Hence, the event meets the criteria for an AI Incident due to the direct link between the AI system's use and harm to individuals' rights and freedoms.[AI generated]


AI-Powered Cyberwarfare Attacks Impact Australia and UK

2026-03-17
Australia

Nation-state actors are increasingly using AI to conduct sophisticated cyberwarfare attacks, causing significant harm to organizations in Australia and the UK. These AI-driven attacks have led to cybersecurity breaches, financial losses, and disruptions to critical infrastructure, highlighting the urgent need for enhanced cybersecurity measures.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
Business; Government
Harm types:
Economic/Property; Public interest
Severity:
AI incident
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions that 48% of businesses suffered AI-powered attacks, indicating direct harm caused by AI systems used by threat actors. The involvement of AI in the development and use of cyber-attacks by nation-state actors leading to harm to businesses and potential risks to critical infrastructure meets the criteria for an AI Incident. The article does not merely discuss potential risks or general AI developments but reports on actual attacks and their impacts, thus qualifying as an AI Incident rather than a hazard or complementary information.[AI generated]


Lawyer Sanctioned for Submitting AI-Generated Fake Legal Precedents in Siracusa Court

2026-03-17
Italy

A lawyer in Siracusa, Italy, was sanctioned after submitting four fabricated legal precedents generated by an AI system in a civil case. The court found the cited rulings did not exist, highlighting the risks of unverified AI-generated content in legal proceedings and resulting in a breach of professional conduct.[AI generated]

AI principles:
Safety; Transparency & explainability
Industries:
Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Reputational; Public interest
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI by a lawyer to produce legal citations that were fabricated or incorrect, leading to a sanction. The AI system's use directly contributed to a violation of legal obligations and professional conduct, which falls under violations of applicable law protecting intellectual property and legal rights. Therefore, this constitutes an AI Incident due to the realized harm of legal misconduct and breach of legal standards caused by AI-generated false information.[AI generated]


AI-Powered Editing Tools Drive Surge in Insurance Fraud in the US

2026-03-17
United States

A Verisk study reveals that AI-powered image editing tools are fueling a rise in digital insurance fraud, with 36% of US consumers willing to alter claim images or documents. Insurers report increasingly sophisticated manipulated media, challenging detection and eroding trust in the insurance process.[AI generated]

AI principles:
Robustness & digital security; Transparency & explainability
Industries:
Financial and insurance services
Affected stakeholders:
Business
Harm types:
Economic/Property; Reputational
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-powered image editing tools being used to manipulate insurance claim evidence, which directly leads to insurance fraud—a form of harm to property and financial interests. The harm is realized, as insurers report a sharp rise in manipulated media submissions and fraud cases. The AI system's use in this context is central to the incident, fulfilling the criteria for an AI Incident under the OECD framework.[AI generated]


EU Report Reveals China's Use of AI for Disinformation and Harassment

2026-03-17
China

The EU's annual report exposes China's extensive use of AI to generate deepfake videos, comics, and automated content for disinformation campaigns. These AI-driven efforts target dissenters, spread state propaganda, and manipulate information, causing harm through harassment, false narratives, and undermining trust across multiple countries.[AI generated]

AI principles:
Respect of human rights; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Civil society
Harm types:
Psychological; Public interest; Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The report explicitly states that AI systems are used to generate fake videos, deepfakes, and other content to manipulate information and discredit opposition voices. This manipulation has already occurred and caused harm by spreading false narratives and harassment, which fits the definition of an AI Incident due to harm to communities and violations of rights. The AI involvement is direct in the use of AI-generated content for disinformation campaigns, and the harms are realized, not just potential. Therefore, this event qualifies as an AI Incident.[AI generated]


AI-Driven Autonomous Trucks Tested on U.S. Highways Raise Safety Concerns

2026-03-17
United States

Aurora Innovation and other companies are testing AI-powered driverless semi trucks on Texas highways, with plans for wider deployment by 2027. Incidents like phantom braking and industry concerns have led to pauses and the reintroduction of human operators, highlighting potential risks but no reported harm yet.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Severity:
AI hazard
Business function:
Logistics
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The presence of AI systems is explicit, as the trucks use autonomous AI driving systems. The article mentions safety concerns and incidents like phantom braking and lawsuits, indicating malfunction or problematic use. However, no specific harm event (injury, property damage, or rights violation) is reported as having occurred. The article highlights potential future risks and ongoing testing, which fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the current testing and associated risks, not on responses or governance. It is not unrelated because AI systems are central to the event.[AI generated]


NATO Orders AI-Enabled Parrot Micro-Drones for Military Use

2026-03-17
France

French company Parrot has received its first NATO orders for AI-powered ANAFI UKR micro-drones, with deliveries starting in early 2026. The drones, intended for military surveillance and defense, are being supplied to Finland and another undisclosed defense client, raising concerns about potential future risks from autonomous military AI systems.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Human or fundamental rights
Severity:
AI hazard
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The microdrones mentioned are AI systems used for surveillance and defense, implying autonomous or AI-assisted capabilities. The announcement concerns the production and delivery of these drones, with no current harm reported. However, given their military use and AI capabilities, there is a credible risk that their deployment could lead to harms such as injury or rights violations. Thus, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information.[AI generated]


Zelensky Warns Europe of AI-Enabled Drone Threats

2026-03-17
Ukraine

Ukrainian President Volodymyr Zelensky, speaking in London, warned European nations about the rising threat of AI-powered drones. He highlighted that such drones, used by Russia and Iran against Ukraine's critical infrastructure, are now affordable for non-state actors, increasing the risk of mass attacks across Europe.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Mobility and autonomous vehicles; Government, security, and defence
Affected stakeholders:
General public; Government
Harm types:
Physical (death); Physical (injury); Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article focuses on the potential dangers of AI-enabled drones and the evolving military threat they represent, which could plausibly lead to AI incidents involving harm to people and communities. Since no specific harm or incident has yet occurred as described, but the risk is credible and recognized by leaders, this qualifies as an AI Hazard. The mention of AI in drones and the defense partnership to counter them supports the presence of AI systems and the plausible future harm they could cause.[AI generated]