
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. While AI incidents appear to be attracting more media attention, they have in fact declined as a share of all AI-related news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: about 14,304 incidents & hazards

Supreme Court Reviews Biometric AI Voter Authentication Proposal

2026-04-13
India

India's Supreme Court has sought responses from the government and Election Commission on a petition proposing the use of AI-driven fingerprint and iris biometric systems for voter authentication to prevent electoral fraud. The court is considering the feasibility and implications for future elections, but no system has been implemented yet.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Government, security, and defence
Severity:
AI hazard
Business function:
Citizen/customer service
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The petition concerns the potential use of biometric AI systems for voter verification, which could plausibly lead to improved election security or raise privacy and data handling concerns. However, since the biometric system is not yet deployed or malfunctioning, and no harm has occurred, this constitutes a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard due to the credible potential for both positive and negative impacts related to AI-based biometric verification in elections.[AI generated]


Ukrainian AI Robots Capture Enemy Position Without Infantry

2026-04-13
Ukraine

Ukrainian defense forces used unmanned ground robotic complexes and drones, powered by AI, to autonomously capture a Russian enemy position. The operation resulted in enemy surrender and prisoners, with no Ukrainian casualties or infantry involvement. Over 22,000 missions have been conducted by these AI systems in recent months.[AI generated]

AI principles:
Accountability; Democracy & human autonomy
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
Government
Harm types:
Other
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The described robotic platforms and drones qualify as AI systems due to their autonomous or semi-autonomous operation in complex battlefield tasks. Their use directly led to a military tactical success without human casualties, which is a form of harm reduction (avoiding injury or death to soldiers). The event reports a realized outcome of AI system use in warfare, impacting physical environments and human lives. Therefore, this is an AI Incident involving the use of AI systems in military operations that directly influenced harm outcomes (avoiding harm to Ukrainian soldiers and capturing enemy positions).[AI generated]


Spanish Regulator Warns of AI Investment Risks Without Human Oversight

2026-04-13
Spain

The Spanish financial regulator CNMV found that large language models like ChatGPT, Gemini, DeepSeek, and Perplexity, when used for investment decisions without human supervision, frequently produce errors and hallucinations. These flaws could lead to significant financial losses, prompting calls for mandatory human oversight in AI-driven financial analysis.[AI generated]

AI principles:
Accountability; Safety
Industries:
Financial and insurance services
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property
Severity:
AI hazard
Business function:
Accounting
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (large language models) used autonomously for financial investment recommendations. The study identifies recurrent AI reasoning failures that could plausibly lead to financial harm (losses) for investors if used without human oversight. Since no actual harm is reported but the risk of harm is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on potential risks and operational hazards inherent in autonomous AI use in investing, fitting the definition of an AI Hazard.[AI generated]


New Zealand Develops AI Tool to Redirect Extremist Users to Deradicalization Support

2026-04-13
New Zealand

ThroughLine, contracted by OpenAI, Anthropic, and Google, is developing an AI system in New Zealand to detect users exhibiting violent extremist tendencies on platforms like ChatGPT and redirect them to human and chatbot-based deradicalization support. The tool aims to prevent harm but is still in testing.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Government, security, and defence
Severity:
AI hazard
Business function:
Citizen/customer service
AI system task:
Event/anomaly detection; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (a chatbot and detection system) to identify and intervene with users showing violent extremist tendencies, which is a clear AI system involvement. However, the article does not report any actual harm or incident caused by the AI system; rather, it discusses the development and testing of a tool aimed at preventing harm. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to preventing or managing harm related to extremism, but no incident has yet occurred. The article also discusses the broader context of safety concerns and potential misuse but does not describe a realized AI Incident or complementary information focused on responses to a past incident.[AI generated]


Viral Videos of Indian Factory Workers Wearing Cameras Spark AI Automation Fears

2026-04-13
India

Viral videos show Indian garment factory workers wearing head-mounted cameras, reportedly to record their tasks for training AI systems or robots. This has sparked widespread concern about potential job losses, worker consent, and the ethical implications of using AI to automate skilled labor, though no actual harm has yet occurred.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Robots, sensors, and IT hardware; Consumer products
Affected stakeholders:
Workers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI hazard
Business function:
Manufacturing
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The presence of head-mounted cameras recording workers' actions can reasonably be linked to AI systems through imitation learning for robotics automation. The concerns about job displacement and ethical issues are credible potential harms. However, since the article only discusses viral videos and public debate without evidence of actual AI deployment causing harm, it fits the definition of an AI Hazard rather than an AI Incident. There is no indication that the event is merely complementary information or unrelated, as the AI system's potential use is central to the discussion of plausible future harm.[AI generated]


AI-Generated Misinformation Campaigns Harm Chinese Companies

2026-04-13
China

In China, criminal groups used AI tools to mass-produce and distribute defamatory articles targeting companies like Xiaomi, Li Auto, and Huawei. These AI-generated 'black articles' caused significant reputational and economic harm. Police shut down over 8,000 accounts, exposing the industrial-scale misuse of AI for malicious misinformation.[AI generated]

AI principles:
Safety; Transparency & explainability
Industries:
Media, social platforms, and marketing; Consumer products
Affected stakeholders:
Business
Harm types:
Reputational; Economic/Property
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (large language models) to generate harmful disinformation at scale, which has directly led to harm to communities and economic harm to companies, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as evidenced by police actions and account shutdowns. The AI system's use in generating and distributing false content is pivotal to the harm described. Therefore, this is classified as an AI Incident.[AI generated]


Minors in Valladolid Tried for Using AI to Create and Share Non-Consensual Nude Images of Classmates

2026-04-13
Spain

Ten male minors in Valladolid, Spain, are on trial for using AI to generate and distribute pornographic images by placing classmates' faces onto nude bodies. The AI-generated images were shared without consent, leading to charges of child pornography and moral harm, and resulting in legal and psychological consequences for the victims.[AI generated]

AI principles:
Privacy & data governanceRespect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological; Reputational; Human or fundamental rights
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI system to generate harmful content (pornographic images) that directly caused harm to individuals (moral and psychological harm to minors). The use of AI was central to the creation and dissemination of this content, leading to legal action and sanctions. This meets the criteria for an AI Incident because the AI system's use directly led to violations of rights and harm to individuals.[AI generated]


Study Finds AI Chatbots Causing Addiction-Like Harm Among U.S. Teens

2026-04-13
United States

A Drexel University study reveals that widespread use of AI companion chatbots like Character.AI, Replika, and Kindroid among U.S. teens has led to psychological harm, including addiction-like dependency, disrupted sleep, academic issues, and strained relationships. Teens report difficulty disengaging from these AI systems, raising concerns about their impact on youth well-being.[AI generated]

AI principles:
Human wellbeing; Safety
Industries:
Media, social platforms, and marketing; Consumer services
Affected stakeholders:
Children
Harm types:
Psychological
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (AI chatbots powered by large language models). The harm is realized and described as behavioral addiction with negative health and social consequences for teens, which fits the definition of injury or harm to health of a group of people. The study's findings confirm that the AI system's use has directly led to these harms. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring and linked to the AI system's use.[AI generated]


IMF Warns of AI-Driven Cybersecurity Risks to Global Financial System

2026-04-12
United States

IMF Managing Director Kristalina Georgieva warned that the international monetary system is unprepared for growing AI-driven cybersecurity risks. The warning follows Anthropic's decision to delay its advanced AI model, Mythos, due to concerns it could expose unprecedented vulnerabilities, prompting urgent calls for risk assessment and mitigation.[AI generated]

AI principles:
Robustness & digital security
Industries:
Financial and insurance services; Digital security
Affected stakeholders:
Business; General public
Harm types:
Economic/Property; Public interest
Severity:
AI hazard
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article involves AI in the context of potential cyber risks to the monetary system, implying plausible future harm if AI-related cyber threats materialize. However, no actual harm or incident has occurred yet. Therefore, this qualifies as an AI Hazard, as it concerns credible potential risks from AI to critical infrastructure (the monetary system).[AI generated]


Germany Procures AI-Enabled Combat Drones for Bundeswehr Deployment in Lithuania

2026-04-12
Germany

The German Bundeswehr is procuring thousands of AI-supported loitering munitions (combat drones) from Rheinmetall, Helsing, and Stark Defence for deployment in Lithuania. These autonomous or semi-autonomous drones, capable of lethal action, raise concerns over their accuracy, political influence, and the inherent risks of AI-powered weapon systems.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Human or fundamental rights; Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly described as integrated into loitering munitions with autonomous or semi-autonomous capabilities (e.g., AI for electronic warfare resistance, swarm control). The article discusses the Bundeswehr's procurement and planned deployment of these systems, which could plausibly lead to harm in military conflict (injury or death, harm to communities). Although no incident of harm is reported yet, the nature of these AI-enabled weapons and the political concerns raised justify classification as an AI Hazard. There is no indication of realized harm or malfunction causing harm at this stage, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the potential risks and controversies of deploying AI-powered weapon systems.[AI generated]


AI-Generated Pornography and Illegal Content Distribution Chain Exposed in China

2026-04-12
China

Multiple investigations reveal a widespread illegal industry in China using AI to generate and distribute pornographic content, including deepfake videos and explicit chat software. Tutorials and tools are openly sold online, enabling mass production and evasion of regulation, causing harm to individuals and exposing minors to inappropriate material.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children; General public
Harm types:
Psychological; Human or fundamental rights; Reputational
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI technology being used to generate large-scale illegal pornographic videos and content, facilitated by tutorials that teach users how to produce and evade detection. This constitutes a direct use of AI systems leading to violations of applicable laws and harm to communities through the spread of illegal and harmful content. The AI system's role is pivotal in enabling the creation and distribution of this content at scale, fulfilling the criteria for an AI Incident under violations of law and harm to communities. Therefore, this event is classified as an AI Incident.[AI generated]


UK Authorities Assess Cybersecurity Risks Identified by Anthropic AI Model

2026-04-12
United Kingdom

UK financial regulators, cybersecurity officials, and major banks are urgently evaluating cybersecurity vulnerabilities highlighted by Anthropic's latest AI model, Claude Matthews Preview. The assessment focuses on potential risks to sensitive IT systems, with briefings planned for key financial institutions. No actual harm has occurred, but authorities are preparing preventive measures.[AI generated]

Industries:
Financial and insurance services; Digital security
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Anthropic's latest model) whose outputs have revealed potential cybersecurity vulnerabilities. The involvement is in the use of the AI system to identify these risks. No direct or indirect harm has been reported yet, but the potential for harm (cybersecurity breaches affecting critical financial infrastructure) is credible and is being urgently assessed by relevant authorities. Hence, it fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the vulnerabilities are exploited.[AI generated]


Pedestrians Injured in Autonomous Bus Accident in Yahiko, Japan

2026-04-12
Japan

An autonomous bus in Yahiko, Japan, struck two pedestrians after switching from AI to manual operation when the AI detected people ahead. The incident, attributed to possible human error during manual driving, resulted in injuries and led to the suspension of the bus service for investigation.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
General public
Harm types:
Physical (injury)
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event describes an accident involving an autonomous bus (an AI system) that was being manually operated at the time of the accident. Two pedestrians were injured, which is harm to persons. Although the accident was caused by human operator error rather than an AI malfunction, the AI system's presence and operation context are central to the incident. The bus is an AI system, and the incident occurred during its operation, leading to direct harm. Therefore, this qualifies as an AI Incident under the definition, as the AI system's use indirectly led to harm through human error during manual operation.[AI generated]


Anthropic's Claude Mythos AI Raises Global Cybersecurity Concerns

2026-04-12
Japan

Anthropic's AI model, Claude Mythos, demonstrated unprecedented autonomous capabilities in discovering and exploiting software vulnerabilities, outperforming human experts in cybersecurity tests. Due to its potential for large-scale cyberattacks, Mythos is not publicly released, prompting heightened defensive measures in sectors like finance and government worldwide.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Financial and insurance services; Government, security, and defence
Affected stakeholders:
Business; Government
Harm types:
Economic/Property; Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly mentioned and was tested for cyberattack capabilities, showing a high success rate. While no actual harm is reported as having occurred, the demonstrated capability and expert warnings about misuse indicate a plausible risk of future harm. Therefore, this event qualifies as an AI Hazard because the AI's use could plausibly lead to incidents involving harm to critical infrastructure or other cyber harms, but no direct harm has yet materialized.[AI generated]


Unauthorized AI Clone of Zhang Xuefeng Sparks Legal and Ethical Controversy

2026-04-12
China

Developers released an AI skill package mimicking deceased educator Zhang Xuefeng, trained on his copyrighted works and personal data without consent. This led to legal and ethical concerns over copyright and personality rights violations, with his company investigating the incident. The controversy highlights risks of AI-driven digital cloning and rights infringement.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Education and training; Media, social platforms, and marketing
Affected stakeholders:
Business; Other
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI hazard
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (AI skills built on large language models) and their use (development and deployment of these skills). While there are significant legal and ethical concerns raised, the article does not report any realized harm such as copyright infringement lawsuits concluded, personality rights violations enforced, or other direct harms. The risks are plausible and credible, especially regarding copyright and personality rights, but remain potential rather than realized harms. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents involving legal and ethical harms, but no such incident has yet materialized according to the article.[AI generated]


AI-Generated Videos Simulate Violence Against PT Women, Prompt Legal Action in Brazil

2026-04-11
Brazil

AI-generated videos simulating aggression and 'exorcism' against women affiliated with Brazil's PT party circulated on social media, inciting political and religious intolerance. The PT filed legal actions with the Electoral Court to remove the content and identify those responsible, citing grave harm and violation of rights.[AI generated]

AI principles:
Respect of human rights; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Psychological; Public interest; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The videos are explicitly produced by AI and simulate violent acts against women, which constitutes harm to communities and a violation of rights. The AI system's use in generating and spreading these harmful videos directly leads to the harm described. Therefore, this qualifies as an AI Incident because the AI-generated content is actively causing harm and prompting legal and platform responses.[AI generated]


AI Voice Cloning Causes Economic Harm to Chinese Voice Actors

2026-04-11
China

AI voice cloning technology in China has led to widespread unauthorized use of professional voice actors' voices, resulting in loss of contracts, income, and reputational damage. Legal recourse is difficult due to evidence challenges and loopholes, leaving many actors unable to protect their rights or livelihoods.[AI generated]

AI principles:
Privacy & data governance; Accountability
Industries:
Arts, entertainment, and recreation
Affected stakeholders:
Workers
Harm types:
Economic/Property; Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated voice cloning technology being used without authorization, which is an AI system involved in content generation. The unauthorized use of these AI-generated voices has directly led to harm, specifically violations of intellectual property rights and economic harm to voice actors, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, with multiple cases of infringement and lost contracts reported. Therefore, this event qualifies as an AI Incident due to direct harm caused by AI misuse and infringement.[AI generated]


US Regulators Warn Banks of AI-Driven Cyber Risks from Anthropic Model

2026-04-11
United States

US Treasury Secretary Janet Yellen and Federal Reserve Chair Jerome Powell convened major bank CEOs to address concerns that Anthropic's new AI model, Claude Mitos, could identify software vulnerabilities and facilitate cyberattacks on financial infrastructure. Authorities warned that AI-enabled attacks pose a significant risk to the financial sector.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Financial and insurance services; Digital security
Affected stakeholders:
Business
Harm types:
Economic/Property; Public interest
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The AI system (Claude Mitos) is explicitly mentioned and is capable of identifying software vulnerabilities, which could be used maliciously for cyberattacks. The article does not describe any realized harm but highlights concerns and preventive actions taken by authorities due to the plausible risk of cyberattacks enabled by the AI. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving disruption of critical infrastructure or harm to organizations if exploited.[AI generated]


AI Glasses Misuse Prompts Crackdown at Augusta Masters Tournament

2026-04-11
United States

Meta AI-powered smart glasses, capable of discreetly recording and transmitting media, were used by spectators to bypass Augusta National's strict no-camera policy during the Masters golf tournament. This misuse led to enforcement actions, including confiscation and ejection, raising concerns about privacy, event integrity, and the challenges of regulating AI devices.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Consumer services; Media, social platforms, and marketing
Affected stakeholders:
Consumers; General public
Harm types:
Human or fundamental rights; Economic/Property
Severity:
AI hazard
Autonomy level:
No-action autonomy (human support)
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly involved as it processes multimodal inputs and provides outputs influencing user behavior (e.g., nutrition tracking, object recognition). However, the article does not describe any actual harm or incident caused by the AI system's malfunction or misuse. The concerns raised about privacy and surveillance are potential risks that could plausibly lead to harm in the future if not properly managed. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its implications are central to the article.[AI generated]


Anthropic's Mythos AI Model Raises Cybersecurity Risks for Indian Enterprises

2026-04-11
India

Anthropic's advanced AI model, Mythos, can rapidly discover software vulnerabilities, outpacing the ability of Indian enterprises—especially in banking and telecom—to patch them. Experts warn this creates structural cybersecurity risks, potentially exposing critical infrastructure to exploitation before defenses can be updated.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Financial and insurance services; IT infrastructure and hosting
Affected stakeholders:
Business
Harm types:
Economic/Property; Public interest
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The AI system Mythos is explicitly mentioned as finding bugs in software, which hackers could exploit, increasing cybersecurity threats. While the article does not report actual incidents of harm caused by these AI-found bugs, it clearly outlines a credible risk that the AI's outputs could lead to significant harm if exploited. The involvement of AI in the development and use phases (bug discovery) and the plausible future harm (exploitation by hackers) align with the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information as it focuses on the risk posed by AI-enabled bug discovery rather than responses or ecosystem updates.[AI generated]