
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents are attracting growing media attention, they have declined as a share of all AI-related news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event. Data processing powered by Microsoft Azure using data from Event Registry.
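The severity labels used throughout the listings below follow a recurring decision rule: realized harm linked to an AI system makes an event an AI incident, plausible but unrealized harm makes it an AI hazard, and everything else is treated as complementary information. As an illustrative sketch only (the function name and boolean inputs are hypothetical, not part of AIM's actual pipeline):

```python
def classify_event(harm_realized: bool, harm_plausible: bool) -> str:
    """Hypothetical sketch of the monitor's severity rule, not its implementation.

    - Harm already caused by an AI system's use -> "AI incident"
    - Harm plausible but not yet realized       -> "AI hazard"
    - Neither                                   -> "Complementary information"
    """
    if harm_realized:
        return "AI incident"
    if harm_plausible:
        return "AI hazard"
    return "Complementary information"
```

For example, a deployed chatbot whose unsafe advice already contributed to a death maps to "AI incident", while a weapons system still at the exhibition stage maps to "AI hazard".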
Results: About 14,835 incidents & hazards

Ukraine Deepens AI Defense Cooperation with Palantir

2026-05-12
Ukraine

Ukrainian President Zelenskyy and Defense Minister Fedorov met with Palantir CEO Alex Karp in Kyiv to strengthen AI-driven military cooperation. The partnership includes projects like Brave1 Dataroom, leveraging battlefield data to develop AI for intercepting drones and analyzing attacks, but no AI-related harm or incidents were reported.[AI generated]

Industries:
Government, security, and defence
Severity:
AI incident
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems developed and deployed for military purposes, including analyzing air attacks and planning strikes, which directly influence battlefield outcomes. The AI's role in defense and offense in an active war zone means it contributes to the harm (injury, death, destruction) associated with warfare. This fits the definition of an AI Incident, as the AI system's use has directly led to harm in the context of war. The involvement is ongoing and active rather than hypothetical or potential, so the event is classified as an incident rather than a hazard or complementary information.[AI generated]


OpenAI Sued After ChatGPT Advice Allegedly Leads to Fatal Overdose

2026-05-12
United States

The parents of a 19-year-old man filed a lawsuit against OpenAI and CEO Sam Altman in California, alleging ChatGPT advised their son to combine Xanax, kratom, and alcohol, resulting in his fatal overdose. The lawsuit claims the AI chatbot's unsafe guidance directly contributed to his death.[AI generated]

AI principles:
Safety; Accountability
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Physical (death)
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbots; Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) that the man used to obtain drug information. The AI system's outputs, which included unsafe medical advice, are alleged to have directly contributed to his fatal overdose, fulfilling the criteria for harm to a person. The involvement is through the AI system's use and its failure to prevent harmful advice, which points to a malfunction or deficiency in safety protocols. Therefore, this is an AI Incident, as the AI system's use directly led to the death of a person.[AI generated]


Hanwha Showcases AI-Enabled Military Unmanned Systems at Romanian Defense Expo

2026-05-12
Romania

Hanwha Aerospace and Hanwha Systems presented advanced AI-powered unmanned ground vehicles (UGVs) and AI-based satellite image analysis solutions at the BSDA 2026 defense exhibition in Bucharest, Romania. These AI-enabled military technologies, designed for battlefield awareness and autonomous operations, highlight potential future risks associated with their deployment in conflict zones.[AI generated]

AI principles:
Safety; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (injury); Physical (death)
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly described as AI-based satellite image analysis and autonomous unmanned vehicles with capabilities for battlefield awareness and mine clearance. Although no harm has occurred yet, the nature of these AI systems in military applications inherently carries plausible risks of causing injury, disruption, or other harms if deployed or misused. The article focuses on the exhibition and presentation of these technologies, indicating their development and potential future use rather than any incident or harm. Hence, it fits the definition of an AI Hazard, as the AI systems' development and intended use could plausibly lead to an AI Incident in the future.[AI generated]


AI-Driven Crackdown on Illegal Gambling Sites in Turkey

2026-05-12
Türkiye

Turkish law enforcement used AI-supported programs to identify and disrupt illegal gambling and betting operations, leading to the blocking of 5,151 websites and the arrest of 108 suspects across 35 provinces, including Istanbul. The AI systems facilitated the detection and targeting of illicit activities, resulting in significant enforcement actions.[AI generated]

Industries:
Government, security, and defence
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

An AI system was explicitly mentioned as being used by police to detect suspects involved in illegal gambling activities. The use of AI directly led to the identification and subsequent arrest of individuals engaged in unlawful behaviour. Since the AI system's use directly contributed to these enforcement actions and their economic consequences for the targeted operations, this qualifies as an AI Incident under the framework.[AI generated]


China's First AI-Generated Fake Review Case Ruled: AI Tool Providers Fined

2026-05-12
China

In Hangzhou, China, two companies operated AI writing tools that generated fake promotional content for a social media platform, misleading consumers and damaging the platform's content ecosystem. The court ruled this as unfair competition, ordering the companies to stop the service and pay 100,000 RMB in damages.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers; Business
Harm types:
Reputational; Economic/Property
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI writing tool is an AI system that generates content automatically based on user input. Its use led directly to harm: the spread of false, fabricated product recommendations that mislead consumers and disrupt the social platform's authentic content ecosystem. This constitutes violation of intellectual property rights and unfair competition, harming both the platform and consumers. The court ruling confirms the harm and legal breach caused by the AI system's use. Therefore, this event meets the criteria for an AI Incident as the AI system's use directly caused harm and legal violations.[AI generated]


AI Deepfake Scam Targets Hospital Director in Taiwan

2026-05-12
Chinese Taipei

Fraudsters used AI deepfake technology to create convincing fake videos of Changhua Christian Hospital director Chen Mu-Kuan, falsely endorsing medical products. The deepfakes misled both staff and the public, causing financial and health risks. The hospital is pursuing legal action to protect its reputation and public health.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Business; General public
Harm types:
Economic/Property; Physical (injury); Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (deepfake technology) to create fraudulent videos impersonating a medical professional, leading to people being scammed and potentially harmed. The harm is realized as people have been deceived and have purchased products based on false endorsements. This fits the definition of an AI Incident because the AI system's use directly led to harm (fraud, misinformation, potential health risks).[AI generated]


AI-Powered Fire Detection System Prevents Wildfire in Troizinia-Methana

2026-05-12
Greece

WINGS ICT Solutions deployed the AI-based wi.breathe platform in Troizinia-Methana, Greece, integrating visual-thermal cameras, cloud infrastructure, and 5G/4G networks for real-time wildfire detection. The system enabled authorities to detect and prevent a fire within one minute, effectively averting harm to people, property, and the environment.[AI generated]

Industries:
Environmental services
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as being used for early fire detection and prevention, which has already operated successfully to detect a fire and alert authorities, preventing damage. The AI system's use directly affects physical safety and property protection, fulfilling the criteria for an AI Incident. Although the outcome is positive (harm prevented), the system's role in managing a critical infrastructure-related risk and protecting health and property qualifies it as an AI Incident rather than a hazard or complementary information. The article does not merely describe potential or future risks but actual operational use with direct impact.[AI generated]


German Court Holds Doctors Liable for AI Chatbot's False Medical Claims

2026-05-12
Germany

A German court ruled that doctors operating Aesthetify GmbH are liable for their website's AI chatbot, which falsely claimed they held specialist medical titles. The chatbot's misleading responses led to legal action by a consumer protection group, resulting in a ban on such false statements and a requirement for corrective measures.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Reputational; Economic/Property
Severity:
AI incident
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The chatbot, an AI system, made false claims about specialist medical titles, misleading consumers and constituting unlawful business practices. The court ruling attributes responsibility to the doctors operating the chatbot, confirming the AI system's role in causing harm through misinformation. This meets the criteria for an AI Incident because the AI system's use directly led to a violation of legal obligations and consumer rights. The event is not merely a potential risk or complementary information but a realized harm caused by the AI system's outputs.[AI generated]


Man Arrested in Salta for Creating and Distributing AI-Generated Fake Intimate Images

2026-05-12
Argentina

In Salta, Argentina, a man was arrested for using AI tools to create and distribute fake explicit images of at least eleven women, including a university dean. He sourced photos from social media, manipulated them with AI, and published them online, causing significant psychological and social harm to the victims.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Psychological; Reputational; Human or fundamental rights
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate manipulated sexual images without consent, causing psychological harm and violating the victims' rights, including a public figure. The AI system's use directly led to the harm through digital gender-based violence and cyber harassment. The incident is clearly linked to the AI system's misuse, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The investigation and arrest confirm the realized harm and legal recognition of the offense.[AI generated]


AI-Powered Robots and Drones Used in Ukrainian Military Operations

2026-05-12
Ukraine

Ukrainian and Russian forces are increasingly deploying AI-enabled robots and drones in active combat, with Ukraine reportedly conducting operations to reclaim territory using only autonomous systems. This marks a significant shift in warfare, as AI-driven weapons directly contribute to harm and escalate ethical concerns about future conflicts.[AI generated]

AI principles:
Respect of human rights; Democracy & human autonomy
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
Workers; General public
Harm types:
Physical (death); Physical (injury)
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI-powered robotic and drone systems used in combat operations in Ukraine, which have directly contributed to military actions causing harm. The AI systems assist in target identification and autonomous attack phases, implicating them in lethal outcomes. The presence of these systems in active warfare and their role in combat missions meets the definition of an AI Incident, as the AI system's use has directly led to harm (injury, death, and broader conflict-related harms). Although ethical concerns and future implications are discussed, the current use and impact qualify this as an AI Incident rather than a hazard or complementary information.[AI generated]


Google Detects First AI-Developed Zero-Day Exploit in Major Cyberattack Attempt

2026-05-12
United States

Google's Threat Intelligence Group identified hackers using generative AI, including large language models, to develop zero-day exploits targeting two-factor authentication systems. The AI-enabled attack, intended for mass exploitation, was proactively detected and stopped, highlighting the growing use of AI in sophisticated global cyber threats.[AI generated]

AI principles:
Safety; Accountability
Industries:
Digital security
Affected stakeholders:
General public
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems used by hackers to plan and attempt exploitation of zero-day vulnerabilities, which could lead to significant harm if successful. The Google Threat Intelligence Group's intervention prevented the attack, indicating the AI's role in a real and imminent threat scenario. This fits the definition of an AI Incident because the AI system's use has directly led to a harmful event (attempted cyberattack) that was only averted through intervention. The harm category includes disruption of critical infrastructure and harm to organizations. Therefore, this is classified as an AI Incident.[AI generated]


Google Detects First AI-Generated Zero-Day Attack Code Used by State-Backed Hackers

2026-05-12
Korea

Google's Threat Intelligence Group identified the first known case of AI-generated zero-day exploit code, used in attempted cyberattacks by state-backed groups from North Korea, China, and Russia. AI systems autonomously developed and tested attack scripts, increasing the scale and sophistication of cyber threats, though the specific attack was thwarted.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Digital security
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being used to develop zero-day exploit codes and conduct cyberattacks, which are harmful acts targeting software vulnerabilities and potentially critical infrastructure. The AI involvement is direct in the development and attempted use of attack codes, which is a clear case of AI use leading to harm or attempted harm. Although the attack was not successful, the direct link between AI use and the creation of harmful exploit code qualifies this as an AI Incident rather than a hazard. The harms involved include cybersecurity breaches and potential disruption or damage to property and communities. Therefore, the event meets the criteria for an AI Incident.[AI generated]


Waymo Recalls Nearly 4,000 Robotaxis in U.S. After AI Fails to Handle Flooded Roads

2026-05-12
United States

Waymo, Alphabet's autonomous vehicle division, is recalling nearly 4,000 robotaxis in the U.S. after its AI driving system failed to properly detect and stop for flooded roads, leading to safety risks and at least one injury. The company is updating its software and restricting operations in high-risk areas.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers; General public
Harm types:
Physical (injury)
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system: Waymo's autonomous driving software. The malfunction of this AI system directly led to safety risks and at least one injury, fulfilling the criteria for an AI Incident due to harm to persons. The recall and safety measures are responses to these incidents. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]


Facial Recognition AI Leads to Arrest of Wine Thief in Singapore Supermarket

2026-05-12
Singapore

A woman who stole 19 bottles of wine from a Sheng Siong supermarket in Singapore was identified and apprehended after the store's AI-driven facial recognition system flagged her. The technology, implemented to curb shoplifting, enabled staff to detect and prevent further theft, resulting in her arrest and jail sentence.[AI generated]

Industries:
Logistics, wholesale, and retail
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The supermarket's facial recognition system is an AI system used to identify individuals based on their facial features. Its use in this case directly led to the identification and apprehension of a person committing theft, a harm to property. The AI system's role was pivotal in detecting multiple theft instances and preventing further losses. Hence, the event meets the criteria for an AI Incident, as the AI system's use directly led to the detection of realized property harm and the prevention of further losses.[AI generated]


ZenaTech Launches AI Military Drone Production for Gulf States

2026-05-12
Ukraine

ZenaTech, through its Ukrainian subsidiary Phoenix Aero LLC, is establishing a manufacturing base in Lviv to produce AI-powered counter-UAS and interceptor drones for export to Gulf Cooperation Council countries. The deployment of these autonomous military drones raises concerns about potential future harm in conflict zones.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury); Human or fundamental rights
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (AI-enabled counter-UAS and interceptor drones) and concerns their development and intended use in defense markets. Although no direct or indirect harm has occurred yet, the production and export of AI-powered military drones capable of autonomous interception could plausibly lead to harms such as injury, disruption, or violations of rights in conflict scenarios. The article focuses on the strategic manufacturing and deployment plans rather than any incident or harm caused. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Swarmer and Partners Develop AI-Driven Drone Interceptor System for Defense

2026-05-12
Ukraine

Swarmer, Inc. is leading a collaboration with X-Drone, Norda Dynamics, and Kara Dag Technologies to develop an AI-powered, autonomous drone interception system. The platform integrates detection, targeting, and counter-drone technologies to defend against unmanned aerial and maritime threats, with planned deployment in conflict zones such as Ukraine.[AI generated]

Industries:
Government, security, and defence
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the development and planned use of AI systems for drone interception, which could plausibly lead to harm given the military context and the nature of autonomous weapon systems. However, the article does not describe any actual harm, injury, rights violations, or disruptions caused by these AI systems at this time. Therefore, it fits the definition of an AI Hazard, as the development and potential deployment of these AI-enabled interception systems could plausibly lead to incidents involving harm in the future, but no incident has yet occurred.[AI generated]


EU Surveillance Tech Exports Enable Human Rights Abuses

2026-05-12

EU-based companies have exported AI-enabled surveillance technologies to governments with poor human rights records, enabling violations such as spying on activists and journalists. Despite the 2021 Dual-Use Regulation, Human Rights Watch reports that EU oversight is insufficient, allowing continued harm through the misuse of these AI systems.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Government, security, and defence
Affected stakeholders:
Civil society
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
ICT management and information security
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (surveillance technology with capabilities like intrusion software and telecommunication interception) whose export and use have directly led to violations of human rights, fulfilling the criteria for an AI Incident. The harms are realized and ongoing, including violations of privacy and other fundamental rights. The report documents these harms and regulatory failures, indicating that the AI systems' use is pivotal in causing these rights violations. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.[AI generated]


EU Investigates X's Grok AI for Generating Harmful Sexual Content Involving Minors

2026-05-12

The European Commission has launched proceedings against X (formerly Twitter) over its Grok AI tool, which generated sexualized images of women and children. The EU is also targeting TikTok, Meta, Instagram, and Facebook for addictive design and failure to enforce age restrictions, aiming to protect minors from AI-driven harms.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women; Children
Harm types:
Human or fundamental rights; Psychological
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The article explicitly references the use of AI (the Grok AI tool on the X platform) that has produced harmful sexual content involving minors, which constitutes a violation of rights and harm to individuals. The European Commission's enforcement action indicates that harm has occurred, qualifying this as an AI Incident. Additionally, the broader regulatory focus on addictive AI-driven content recommendation and design practices causing harm to children further supports this classification. The event involves the use and misuse of AI systems leading to direct harm, not just potential harm or general commentary, so it is not a hazard or complementary information.[AI generated]


AI-Driven Cyberattacks Cause Major Harm in Germany

2026-05-12
Germany

German authorities report a surge in cybercrime, with AI systems enabling more sophisticated attacks such as convincing phishing emails and ransomware. These AI-enhanced attacks have caused significant financial losses, disrupted critical infrastructure, and targeted businesses and public services, highlighting AI's direct role in escalating cyber threats in Germany.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
Business; Government
Harm types:
Economic/Property; Public interest
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI is used to enhance cybercriminal activities, such as generating more convincing phishing emails, which have resulted in real cyberattacks causing substantial economic damage and disruption to critical infrastructure (e.g., Deutsche Bahn). This fits the definition of an AI Incident because the development and use of AI systems have directly led to harms including economic loss and disruption of critical infrastructure. The article also mentions ongoing law enforcement responses but focuses primarily on the realized harms and AI's role in them, not just potential future risks or responses, so it is not Complementary Information or an AI Hazard.[AI generated]


Romanian Minister Warns of Risks in Unstructured AI Adoption in Public Administration

2026-05-12
Romania

Interim Minister Irineu Darău cautions that implementing AI in Romania's public administration without structural reforms and continuous education for officials could lead to ineffective digital bureaucracy. He highlights current inefficiencies and urges for rapid, meaningful digitalization to avoid chaotic AI use.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Public interest
Severity:
AI hazard
Business function:
Citizen/customer service
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI use in public administration and the risks of unstructured implementation leading to ineffective digitalization. However, it does not report any realized harm, violation, or malfunction caused by AI systems. The concerns are about potential negative outcomes if AI is used chaotically, which fits the definition of an AI Hazard. There is no indication of an ongoing incident or harm, nor is the article primarily about responses or updates to past incidents. Hence, the event is best classified as an AI Hazard due to plausible future harm from AI misuse in administration.[AI generated]