
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners and other stakeholders worldwide gain insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting growing media attention, they have in fact declined as a share of all AI-related news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 14,095 incidents & hazards

Ford Recalls 254,640 SUVs in US Over AI-Driven Safety Feature Malfunction

2026-03-24
United States

Ford is recalling 254,640 SUVs in the US due to a software defect in AI-powered image-processing modules that causes loss of the rearview camera and advanced driver assistance features. The malfunction increases crash risk, prompting the recall and a free software update to restore safety functions.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers; General public
Harm types:
Physical (injury)
Severity:
AI incident
Business function:
Manufacturing
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

ADAS features rely on AI systems for real-time processing and decision-making to enhance vehicle safety. The software issue causing loss of these features directly impacts the safety of drivers and passengers, representing a harm to health and safety. Since the malfunction has already occurred and prompted a recall, this constitutes an AI Incident due to the realized harm risk from the AI system's malfunction.[AI generated]


Baltimore Sues Elon Musk's xAI Over Grok Deepfake Harms

2026-03-24
United States

The city of Baltimore has sued Elon Musk's xAI and X Corp., alleging their AI chatbot Grok generates and distributes nonconsensual sexually explicit deepfake images, including those of children. The lawsuit claims Grok lacks adequate safeguards, causing widespread harm and violating consumer protection laws.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children; General public
Harm types:
Psychological; Reputational; Human or fundamental rights
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The Grok platform is an AI system capable of generating deepfake images, which are being used to create harmful sexualized content without consent, including illegal child sexual abuse material. This has caused psychological harm and harassment to residents, constituting realized harm. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's outputs.[AI generated]


AI-Generated Fake Law Enforcement Used in Romanian Influence Campaign

2026-03-24
Romania

Romania's National Cyber Security Directorate (DNSC) warns of an ongoing influence campaign using AI-generated personas falsely presented as police or gendarmes. The campaign micro-targets social media users, exploits emotions, spreads misinformation, and tests public reactions, undermining trust and facilitating fraud. The harm is realized and ongoing.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
General public
Harm types:
Public interest; Reputational; Economic/Property
Severity:
AI incident
Business function:
Other
AI system task:
Content generation; Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems generating fake characters used in a disinformation campaign that is actively influencing and manipulating the population, causing harm to communities. The use of AI-generated personas to deceive and micro-target users directly leads to harm through misinformation and potential fraud. Therefore, this meets the criteria for an AI Incident as the AI system's use has directly led to harm to communities and potential violations of rights through deception and fraud.[AI generated]


Greek Singer Alkistis Protopsalti Targeted by AI-Generated Deepfake Scam

2026-03-24
Greece

Greek singer Alkistis Protopsalti was targeted by an online scam involving an AI-generated deepfake video falsely showing her endorsing products without her consent. The video circulated on social media, prompting her to take immediate legal action and alert authorities to protect her reputation and warn the public.[AI generated]

AI principles:
Respect of human rights; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Reputational
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system used to generate a deepfake video that directly leads to harm by deceiving consumers and causing financial fraud, which constitutes harm to communities and individuals. The AI system's use in creating the fake video is central to the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI-generated content.[AI generated]


Epirus, General Dynamics, and Kodiak AI Unveil Autonomous Counter-Drone Weapon System

2026-03-24
United States

Epirus, General Dynamics Land Systems, and Kodiak AI have introduced the Leonidas Autonomous Ground Vehicle, a mobile platform combining AI-powered autonomous driving and high-power microwave technology for counter-drone defense. The system, intended for critical defense and homeland security missions, poses potential risks if misused or malfunctioning.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (injury); Physical (death)
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the development and deployment of an AI system (Kodiak Driver) integrated into a counter-UAS platform that autonomously detects and neutralizes drone threats. While the system is designed for defense and safety, the article does not report any realized harm or incidents caused by the AI system. However, given the nature of the system—an autonomous weaponized platform capable of neutralizing drones—there is a plausible risk of harm if misused or malfunctioning, such as unintended damage or escalation in conflict scenarios. Therefore, this event represents an AI Hazard due to the credible potential for harm stemming from the autonomous AI-enabled counter-UAS system, even though no harm has yet occurred or been reported.[AI generated]


AI-Generated Deepfake X-Rays Deceive Radiologists and AI Systems

2026-03-24
United States

A multi-center study found that radiologists and advanced AI models cannot reliably distinguish AI-generated deepfake X-ray images from authentic ones. This vulnerability exposes healthcare to risks such as misdiagnosis, fraudulent litigation, and cybersecurity threats, highlighting the urgent need for improved detection tools and training.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Healthcare, drugs, and biotechnology; Digital security
Affected stakeholders:
Consumers; Workers
Harm types:
Physical (injury); Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (ChatGPT, RoentGen, and other LLMs) generating synthetic medical images that are indistinguishable from real ones by experts and AI detectors. This misuse or potential malicious use of AI-generated deepfakes can directly lead to harms such as fraudulent litigation, clinical misdiagnosis, and cybersecurity attacks causing clinical chaos, all of which are harms to health and communities. The study documents these risks with evidence of actual AI-generated images deceiving professionals, thus meeting the criteria for an AI Incident rather than a mere hazard or complementary information.[AI generated]


Music Publishers Sue Anthropic Over AI Copyright Infringement

2026-03-24
United States

Universal Music Group, Concord, and ABKCO have sued Anthropic, alleging its AI chatbot Claude was trained on and reproduces copyrighted song lyrics without permission. The publishers argue this infringes their intellectual property rights and competes with their market, challenging the AI's 'fair use' defense in California court.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Anthropic's Claude) used to generate content based on copyrighted song lyrics without permission, which the publishers claim infringes their copyrights. This is a direct violation of intellectual property rights, a category of harm defined under AI Incidents. The lawsuit and allegations indicate that the AI's use has already caused harm by reproducing copyrighted material and competing with the original market. Although the legal outcome is pending, the described harm is materialized and not merely potential, making this an AI Incident rather than a hazard or complementary information.[AI generated]


AI Misuse Drives Surge in Child Sexual Abuse Content Online

2026-03-24
United Kingdom

In 2025, the Internet Watch Foundation reported a 260-fold increase in AI-generated child sexual abuse material online, with over 8,000 images and videos identified. Most videos were classified as the most severe under UK law, highlighting AI's role in producing increasingly extreme and realistic illegal content.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Human or fundamental rights; Psychological
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use and misuse of AI systems (generative AI models) to produce illegal and harmful content (CSAM), which constitutes a violation of human rights and causes significant harm to children and communities. The AI systems' outputs have directly led to the dissemination of harmful material, fulfilling the criteria for an AI Incident. The article also references ongoing harm and the need for regulatory responses, confirming that the harm is realized and ongoing rather than merely potential.[AI generated]


First Conviction in Cyprus for AI-Generated Child Sexual Abuse Material

2026-03-24
Cyprus

A Limassol court in Cyprus issued the country's first conviction for crimes involving child sexual abuse material created or distributed using artificial intelligence. Two young individuals pleaded guilty and received suspended prison sentences. The case highlights the growing concern over AI-facilitated exploitation and the legal system's response.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Human or fundamental rights; Psychological
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions that the offenses involve sexual abuse material of minors created or distributed through AI, indicating the use of an AI system in committing the crime. The court conviction confirms that harm has occurred, specifically violations of rights and serious social harm. This meets the criteria for an AI Incident, as the AI system's use directly led to harm. The legal response and sentencing further confirm the materialization of harm rather than a potential risk or complementary information.[AI generated]


South Korea Launches Task Force to Combat AI-Generated Fake Food Advertising

2026-03-24
Korea

South Korea's Ministry of Food and Drug Safety launched a task force to address rising cases of AI-generated fake expert recommendations and deceptive food advertising online. The team aims to prevent consumer harm and restore fair market practices through monitoring, inspections, and regulatory improvements.[AI generated]

AI principles:
Transparency & explainability; Accountability
Industries:
Food and beverages
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property; Reputational
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to generate fake expert recommendations in food advertising, which is a form of AI system use leading to consumer deception and potential health harm. This constitutes a violation of consumer rights and could harm public health, fitting the definition of an AI Incident. The task force's formation is a response to realized harms caused by AI-enabled false advertising, not just a potential risk, so this is an AI Incident rather than a hazard or complementary information.[AI generated]


Gwangmyeong City Deploys AI Fire Prevention System in Traditional Markets

2026-03-24
Korea

Gwangmyeong City, South Korea, has partnered with Slano and local markets to install 500 AIoT devices for real-time fire detection and prevention. The AI system analyzes sensor data to predict fire risks, enabling early alerts and comprehensive safety management, aiming to reduce injury and property damage in crowded market environments.[AI generated]

Industries:
Logistics, wholesale, and retail; Government, security, and defence
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Event/anomaly detection; Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved, analyzing real-time sensor data to detect fire hazards before they escalate. Its use is intended to prevent harm to people and property through early detection and alerts to relevant parties. Since the AI system's deployment directly addresses and mitigates risks of injury and property damage, the monitor classifies this event as an AI incident, although the system's role here is harm prevention rather than realized harm.[AI generated]


IXOPAY and Zip Launch Framework to Address AI Risks in Agentic Commerce

2026-03-24
United States

IXOPAY and Zip have launched a joint initiative to develop a "Unified Trust Layer" framework aimed at addressing trust, identity, and liability challenges in AI-driven, agent-initiated commerce. The framework seeks to mitigate potential risks such as fraud as AI agents increasingly conduct autonomous transactions. No actual harm has occurred yet.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Financial and insurance services; Digital security
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property; Reputational
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves AI systems in the form of autonomous AI agents conducting transactions (agentic commerce), and the framework is designed to mitigate risks associated with these AI systems. However, the article does not report any realized harm, injury, rights violations, or disruptions caused by AI systems. Instead, it presents a cooperative initiative to address potential risks and improve trust infrastructure in anticipation of future challenges. Therefore, this is a case of an AI Hazard, as the framework aims to prevent plausible future harms related to AI-driven commerce, but no actual AI Incident has occurred yet.[AI generated]


Israeli Brothers Used AI to Fabricate Military Intelligence for Iranian Agent

2026-03-24
Israel

Two brothers from Jerusalem were indicted for using AI tools like ChatGPT, Grok, and Gemini to generate fake military documents and intelligence, which they sent to an Iranian agent via Telegram. They received over 100,000 shekels in cryptocurrency, causing security risks and wrongful harm through AI-generated deception.[AI generated]

AI principles:
Safety; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems used to create fabricated information that was knowingly passed to a foreign agent, leading to serious security offenses and harm to an innocent individual. The AI-generated content was pivotal in deceiving the agent and fabricating false narratives, which caused real harm (e.g., wrongful arrest). The use of AI in this malicious context and the resulting consequences meet the criteria for an AI Incident, as the AI system's use directly led to violations of rights and harm to individuals and communities.[AI generated]


AI System Enables Real-Time Vehicle Seizure for Unpaid Taxes in Taiwan

2026-03-24
Chinese Taipei

Taiwan's Hualien Branch deployed the 'AI 行動神捕' system for real-time license plate recognition, enabling enforcement officers to identify and seize vehicles with unpaid taxes. This led to the legal seizure and subsequent payment of overdue taxes by a vehicle owner, demonstrating AI's direct impact on property rights enforcement.[AI generated]

Industries:
Government, security, and defence
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved, performing real-time license plate recognition and alerting enforcement officers. The AI system's use directly leads to the identification and seizure of vehicles with unpaid taxes, which is a lawful enforcement action preventing financial harm to the government and ensuring compliance with legal obligations. Although the harm here is not physical injury or property damage, the event involves harm to property rights and enforcement of financial obligations, which falls under harm categories related to legal rights and property. Therefore, this qualifies as an AI Incident due to the AI system's direct role in causing a legally significant outcome involving harm (financial enforcement and property seizure).[AI generated]


Slovak Central Bank Warns of AI-Generated Fraudulent Crypto Websites

2026-03-24
Slovak Republic

The Národná banka Slovenska (NBS) has warned about numerous unauthorized, AI-generated websites offering crypto-asset investment services. These sites pose significant risks of financial fraud and loss to consumers in Slovakia. NBS published a list of such sites and advised caution against using them.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Financial and insurance services; Digital security
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems in the form of AI-generated websites used to offer unauthorized crypto-asset investment services. This use of AI-generated content directly contributes to potential financial harm to consumers (harm to persons/groups). Since the unauthorized offering of financial services without permission is a violation of legal frameworks and exposes users to significant risk of fraud and financial loss, this constitutes an AI Incident. The harm is realized or ongoing as the NBS warns about existing websites and advises consumers about risks and precautions. Therefore, this is an AI Incident due to the direct or indirect role of AI-generated content in facilitating unauthorized and risky financial services leading to harm.[AI generated]


Meta Faces Landmark Trial Over AI Algorithms' Harm to Children in New Mexico

2026-03-23
United States

Meta is on trial in New Mexico, accused of misleading users about children's safety on its platforms. Prosecutors allege Meta's AI-driven algorithms promoted harmful and addictive content to minors, prioritizing profits over safety and violating consumer protection laws. Jury deliberations follow extensive testimony on the algorithms' impact.[AI generated]

AI principles:
Human wellbeing; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The event involves an AI system insofar as Meta's social media platforms use AI-driven algorithms for content recommendation and user engagement, which are central to the allegations of harm to teens' mental health and safety. The harm described (mental health damage and risk of sexual exploitation) falls under harm to groups of people. Since the harm is alleged to have occurred and is the subject of a legal case, this qualifies as an AI Incident. The event focuses on the use and impact of AI-enabled social media systems leading to harm, not just potential or future harm, nor is it merely complementary information or unrelated news.[AI generated]


KAI to Test Autonomous AI Satellite Fault Response in Space

2026-03-23
Korea

Korea Aerospace Industries (KAI) and partners will launch a CubeSat equipped with an AI module to autonomously diagnose and respond to satellite faults in orbit. The project aims to validate AI onboard processing for real-time, self-directed satellite operation, presenting future risks if malfunctions occur but no harm has yet materialized.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Mobility and autonomous vehicles
Severity:
AI hazard
Business function:
Maintenance
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly mentioned as being developed and tested for autonomous satellite operation. However, the article focuses on the planned deployment and testing phase without any reported harm or malfunction. Since no direct or indirect harm has occurred, and the AI system's use is prospective, this qualifies as an AI Hazard due to the plausible future risk associated with autonomous AI operation in space systems, but not an AI Incident or Complementary Information.[AI generated]


AI Companion Chatbots Expose Australian Children to Harmful Content

2026-03-23
Australia

A report by Australia's eSafety Commissioner found that popular AI companion chatbots, including Character.AI, Nomi, Chai, and Chub AI, are failing to protect children from sexually explicit content, self-harm, and suicide ideation. The platforms lack robust age verification and safeguards, exposing children to significant risks.[AI generated]

AI principles:
Safety; Human wellbeing
Industries:
Consumer services; Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological
Severity:
AI incident
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (generative AI chatbots) whose use has directly led to harm to children and teenagers through exposure to harmful content and emotional manipulation. The harms are realized and documented, including mental health impacts and exposure to child sexual exploitation material. The failure of the AI systems' providers to implement robust age checks and content moderation constitutes a malfunction or inadequate use safeguards. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to persons (children and teens).[AI generated]


GM Begins Supervised Testing of Next-Gen Autonomous Vehicles in Michigan and California

2026-03-23
United States

General Motors has deployed 200 vehicles equipped with advanced autonomous driving technology for supervised public-road testing on highways in Michigan and California. Trained drivers are present to intervene if needed. The testing aims to refine GM's 'eyes-off' driving system, slated for launch in 2028, but poses plausible future risks if AI malfunctions.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Workers; General public
Harm types:
Other
Severity:
AI hazard
Business function:
Manufacturing
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (autonomous driving technology) in real-world testing, which could plausibly lead to harm if the technology malfunctions or is misused. However, no actual harm or incident is reported in the article. Therefore, this qualifies as an AI Hazard, as the deployment of such technology on public roads could plausibly lead to injury or other harms in the future if failures occur.[AI generated]


AI-Generated Fake Personas Drive Viral Crypto Scams on X

2026-03-23
United States

Blockchain investigator ZachXBT exposed a network of over 10 X accounts using AI-generated fake personas and deepfakes to spread sensational war-related misinformation, boost engagement, and funnel users into crypto scams. The operation netted six-figure profits, causing widespread financial harm and manipulating online communities.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing; Financial and insurance services
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems used to impersonate influencers and generate misleading content, which directly led to financial harm through crypto scams (pump-and-dump schemes and fake giveaways). The harm to individuals' property (financial loss) and communities (misinformation and manipulation) is realized. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through malicious use and coordinated scam activity.[AI generated]