
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and other stakeholders worldwide gain insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents have recently attracted more media attention, they have declined as a share of total AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event. Data processing powered by Microsoft Azure using data from Event Registry.
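The chart's metric is a simple ratio: incident-related articles divided by all AI-related articles in a period. A minimal sketch of that computation, using hypothetical counts rather than AIM's actual data or pipeline:

```python
# Hypothetical monthly article counts (illustrative only, not real AIM data).
monthly_counts = {
    "2026-03": {"incident_articles": 420, "total_ai_articles": 9800},
    "2026-04": {"incident_articles": 390, "total_ai_articles": 11200},
}

def incident_share(counts):
    """Return incident articles as a percentage of all AI articles, per month."""
    return {
        month: round(100 * c["incident_articles"] / c["total_ai_articles"], 2)
        for month, c in counts.items()
    }

print(incident_share(monthly_counts))
```

On these made-up counts, the absolute number of incident articles can fall or rise independently of the share, which is why the monitor reports the percentage rather than raw counts.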
Results: about 14,754 incidents & hazards

Hyundai Rotem and Anduril Collaborate on AI-Driven Military Command Systems

2026-05-07
Korea

Hyundai Rotem and U.S. defense tech firm Anduril have signed an agreement in Seoul to jointly develop AI-based command and control systems for military vehicles, drones, and robots. The collaboration aims to integrate Anduril's Lattice AI OS into unmanned platforms, enabling autonomous operations and swarm control, raising future risks of AI-enabled autonomous weapon systems.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death)
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the development and planned use of an AI system (LatticeOS) for autonomous and semi-autonomous military operations, including swarm control and counter-drone activities. Although no harm has yet occurred, the deployment of AI in lethal or military command systems carries credible risks of injury, violation of rights, or disruption, making this a plausible future hazard. Therefore, this qualifies as an AI Hazard rather than an Incident or Complementary Information, as the article focuses on the system's development and intended operational use without reporting actual harm or incident.[AI generated]


Unauthorized Use of AI-Generated Celebrity Likeness in Livestream Sales Leads to Detention in China

2026-05-07
China

In Datong, China, a netizen named Xing illegally used AI tools to create a digital likeness of KMT chairperson Cheng Liwen for livestream sales without authorization. This misuse of AI for commercial gain infringed on personal rights, disrupted online order, and resulted in administrative detention by local police.[AI generated]

AI principles:
Respect of human rights; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Other
Harm types:
Human or fundamental rights; Reputational
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI tool to generate a digital human likeness (AI digital person) of a real individual without authorization, which was then used in live-streaming commerce. This unauthorized use infringes on the person's rights and caused social harm by disturbing network order and misleading the public. The legal action taken confirms the harm and violation of laws. The AI system's misuse directly led to these harms, fitting the definition of an AI Incident involving violations of rights and harm to communities.[AI generated]


AI-Powered DeepLoad Malware Targets Nigerian Institutions

2026-05-07
Nigeria

Nigeria's National Information Technology Development Agency (NITDA) has warned of an active AI-powered malware, DeepLoad, targeting government agencies, banks, businesses, and individuals. The malware uses social engineering to infiltrate systems, steal sensitive data, evade antivirus detection, and enable financial fraud and operational disruptions across Nigeria.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Government, security, and defence; Financial and insurance services
Affected stakeholders:
Government; Business
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The DeepLoad malware explicitly incorporates AI-generated code to evade antivirus detection and maintain persistence, qualifying it as an AI system. The malware's active infections have caused direct harms including credential theft, financial fraud, system compromise, and risks to national security, fulfilling the criteria for an AI Incident. The advisory details realized harms and ongoing attacks, not just potential risks, confirming this classification.[AI generated]


IT Contractor Creates Deepfake Videos from Stolen School Staff Photos in Busan

2026-05-07
Korea

A male IT contractor in Busan, South Korea, illegally accessed the PCs of 194 female school staff members, stealing over 220,000 personal files and using AI deepfake technology to create manipulated sexual videos. The incident, uncovered after a USB drive was found, highlights privacy violations and the misuse of AI for harmful content creation.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Education and training; Digital security
Affected stakeholders:
Women; Workers
Harm types:
Psychological; Reputational; Human or fundamental rights
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI technology (deepfake generation) to create harmful synthetic sexual videos without consent, which is a violation of human rights and privacy. The AI system's use directly led to harm through the creation and possession of illicit content. The incident is not merely a potential risk but a realized harm, as the deepfake videos were produced and stored. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


Trump Shares AI-Generated Image Targeting Biden and Family

2026-05-07
United States

Donald Trump posted an AI-generated image on Truth Social depicting Joe Biden asleep in the Oval Office and his son Hunter using drugs, alongside other political figures. The manipulated image, widely shared online, raises concerns about AI-driven misinformation and reputational harm in U.S. political discourse.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Government; General public
Harm types:
Reputational; Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a fabricated image involving public figures, which is a direct use of AI to create misleading content. This can cause harm to communities by spreading misinformation and potentially violating reputations, which falls under harm to communities. Since the image is actively used by a prominent figure to attack others, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to the direct role of AI in generating harmful content that impacts social and political communities.[AI generated]


AI-Generated Deepfakes Cause Harm and Challenge Law Enforcement in Germany

2026-05-07
Germany

AI-generated deepfake images and videos have led to reputational harm, digital violence, and violations of personal rights in Germany. High-profile cases, such as manipulated content of public figures, highlight the challenges faced by police and justice officials, who struggle with detection, legal gaps, and identifying perpetrators.[AI generated]

AI principles:
Respect of human rights; Transparency & explainability
Industries:
Media, social platforms, and marketing; Government, security, and defence
Affected stakeholders:
General public; Government
Harm types:
Reputational; Human or fundamental rights; Public interest
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating deepfake images and videos that have directly harmed individuals by discrediting them and spreading manipulated content, fulfilling the criteria for harm to communities and violations of personal rights. The article explicitly states that such harms are happening and that law enforcement is actively dealing with these AI-generated manipulations. Therefore, this is an AI Incident rather than a hazard or complementary information.[AI generated]


ASU Faculty Protest AI Platform's Unauthorized Use of Teaching Materials

2026-05-07
United States

Arizona State University's AI-powered platforms, Atom and ASU Atomic, repurposed faculty teaching materials without their consent to generate personalized online courses. Faculty expressed concerns over intellectual property violations, lack of consultation, and inaccuracies in AI-generated content, potentially harming educational quality and academic reputations.[AI generated]

AI principles:
Accountability; Privacy & data governance
Industries:
Education and training
Affected stakeholders:
Workers
Harm types:
Reputational
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI system (ASU Atomic) is explicitly described as using AI to generate educational content by combining and modifying faculty lectures and materials. The faculty's concerns about inaccuracies and misinformation indicate harm to the quality and integrity of education, which can be considered harm to communities and a violation of intellectual property rights. The lack of faculty consultation and compensation further supports the violation of rights. Since the harm is occurring and linked directly to the AI system's use, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


Samsung Galaxy Watch Uses AI to Predict Fainting and Prevent Injuries

2026-05-07
Korea

Samsung, in collaboration with Chung-Ang University Gwangmyeong Hospital in South Korea, has developed an AI-powered feature for the Galaxy Watch 6 that predicts vasovagal syncope (fainting) episodes. By analyzing biosignals, the AI system can warn users before fainting, potentially reducing injuries from sudden falls.[AI generated]

Industries:
Healthcare, drugs, and biotechnology
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

An AI system (the algorithm analyzing bio-signals from the smartwatch) is explicitly involved in predicting a medical condition that can lead to physical harm (injuries from falls). The AI's use directly contributes to harm prevention by providing early alerts, thus addressing potential injury risks. Since the AI system's use is linked to preventing injury and improving health outcomes, and the event reports successful prediction and clinical validation, this constitutes an AI Incident involving harm to health.[AI generated]


AI-Powered TUNGA-X Interceptor Drone Unveiled in Turkey

2026-05-06
Türkiye

STM introduced the TUNGA-X, an AI-enabled autonomous interceptor drone, at the SAHA 2026 defense expo in Istanbul. Designed to counter low-cost kamikaze drones, TUNGA-X uses AI for real-time target detection and interception. While no harm has occurred, its autonomous lethal capabilities present plausible future risks.[AI generated]

AI principles:
Safety; Accountability
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury)
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The TUNGA-X system is an AI system as it uses AI for autonomous flight, target detection, and engagement. The event concerns the development and deployment of an autonomous weapon system designed to neutralize threats, which inherently carries risks of harm (injury, property damage, or escalation in conflict). Although no harm has yet occurred or been reported, the system's autonomous lethal capabilities mean it could plausibly lead to AI Incidents in the future. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential impacts.[AI generated]


AI Accounting App Issues Offensive Comments, Causing User Distress

2026-05-06
China

The Feiya AI accounting app in China generated culturally insensitive and offensive remarks when a user logged a clothing purchase for their father, likening it to funeral attire. The incident caused emotional harm, leading to user complaints and membership cancellations. The company apologized, citing an AI model flaw, and implemented urgent fixes and stricter content moderation.[AI generated]

AI principles:
Fairness; Human wellbeing
Industries:
Financial and insurance services; Consumer services
Affected stakeholders:
Consumers
Harm types:
Psychological; Economic/Property; Reputational
Severity:
AI incident
Business function:
Accounting
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system (the AI chatbot in the accounting app) was involved and malfunctioned by generating inappropriate and offensive content, causing harm to the user's emotional well-being. The harm is indirect but real, as the user was upset and offended by the AI's replies. The platform acknowledged the issue, took responsibility, and implemented fixes. This fits the definition of an AI Incident because the AI's malfunction directly led to harm (emotional harm to the user).[AI generated]


AI-Generated Fake Rabbis Spread Antisemitism on TikTok

2026-05-06
United States

A coordinated network of at least 49 TikTok accounts used generative AI to create fake rabbis who spread antisemitic stereotypes and conspiracy theories. These AI-generated avatars amassed over 950,000 followers and 10 million likes, amplifying hate and misinformation by impersonating credible Jewish voices and deceiving audiences.[AI generated]

AI principles:
Respect of human rights; Fairness
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Other
Harm types:
Psychological; Human or fundamental rights; Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly describes AI-generated fake accounts used to spread antisemitic content, which is a clear violation of human rights and causes harm to communities. The AI system's role in generating and disseminating this content is pivotal to the harm occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.[AI generated]


AI-Powered Apple Watch App Trial Aims to Detect Infections in Pediatric Cancer Patients

2026-05-06
Australia

Researchers at Murdoch Children's Research Institute in Australia are trialing an AI-powered app that analyzes Apple Watch health data to detect early signs of infection in children undergoing cancer treatment. The system aims to enable earlier intervention for immunocompromised patients, though no harm or malfunction has been reported.[AI generated]

Industries:
Healthcare, drugs, and biotechnology
Severity:
AI hazard
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Forecasting/prediction; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article involves an AI system (the Apple Watch app using AI to analyze health data) being used in a medical context. However, it describes a trial and exploration phase without any realized harm or malfunction. The AI system's use could plausibly lead to improved health outcomes, but no incident or harm has been reported. Therefore, this is classified as an AI Hazard: the system operates in a high-stakes medical context where errors could plausibly lead to harm, but no harm or incident has yet occurred.[AI generated]


TikTok Algorithm Systematically Favored Republican Content During 2024 US Elections

2026-05-06
United States

A study published in Nature found that TikTok's AI-driven recommendation algorithm systematically prioritized pro-Republican content in New York, Texas, and Georgia ahead of the 2024 US presidential election. Researchers using dummy accounts observed significant partisan bias, raising concerns about the algorithm's impact on political information exposure and democratic fairness.[AI generated]

AI principles:
Fairness; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Consumers
Harm types:
Public interest
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly: TikTok's recommendation algorithm, which uses AI to curate content for users. The study demonstrates that the AI system's use has directly led to a significant harm—systematic political bias in content exposure—which can be considered harm to communities by skewing political information and potentially influencing election outcomes. This constitutes a violation of the right to access balanced information and can undermine democratic processes. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm in the form of biased political information dissemination during a critical election period.[AI generated]


Disney's Facial Recognition System Raises Privacy Concerns in California

2026-05-06
United States

Disney has implemented AI-powered facial recognition at its California resorts, converting visitors' biometric features into unique digital values for identity verification. While Disney claims data is deleted within 30 days, critics warn of privacy risks, surveillance normalization, and potential misuse of biometric data, sparking debate over human rights and data security.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Travel, leisure, and hospitality
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rights
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition technology) in a real-world setting (Disney parks) for biometric identification and tracking. Although the article does not report a concrete incident of harm, it outlines credible risks such as privacy erosion, potential misuse of biometric data, algorithmic bias, and security vulnerabilities that could plausibly lead to harms like violations of privacy rights and data breaches. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harms, but no direct harm has yet been documented.[AI generated]


French Cybersecurity Sector Warns of AI-Driven Vulnerability Surge

2026-05-06
France

The Campus Cyber, a major French cybersecurity organization, has issued warnings about Anthropic's new AI model, Mythos, which can rapidly discover critical software vulnerabilities. Experts fear this capability could overwhelm cybersecurity teams and increase systemic risks, urging urgent preparedness to prevent potential large-scale cyberattacks in France and Europe.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security
Affected stakeholders:
Government; General public
Harm types:
Public interest; Economic/Property
Severity:
AI hazard
Business function:
ICT management and information security
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (the Mythos AI model) and discusses their use in discovering vulnerabilities that could lead to cyberattacks. No direct harm or incident has yet occurred, but the potential for harm is clearly articulated and plausible, fitting the definition of an AI Hazard. The event is not a realized incident, nor is it merely complementary information since the main focus is on the credible risk posed by AI's capabilities in cybersecurity. Therefore, it is best classified as an AI Hazard.[AI generated]


Actress Sues Over AI-Generated Likeness in 'Avatar' Films

2026-05-06
United States

Actress Q'orianka Kilcher sued James Cameron, Disney, and Lightstorm Entertainment, alleging her facial features were used without consent via AI-driven digital modeling to create the character Neytiri in the 'Avatar' franchise. The lawsuit cites violation of California's deepfake pornography statute and unauthorized use of biometric data.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Arts, entertainment, and recreation
Affected stakeholders:
Women
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Other
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation; Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event describes a direct harm caused by the use of AI or digital technology to replicate a person's facial features without permission, leading to a violation of her rights. The AI system's involvement is in the creation of the digital character's face, which is central to the harm claimed. The harm is realized, not just potential, as the character has been used in blockbuster films generating significant profits. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights (right of publicity and identity), which is a breach of applicable law protecting fundamental rights.[AI generated]


AI-Generated Deepfake Video Fuels Misinformation After Tainan Policewoman's Death

2026-05-06
Chinese Taipei

Following a fatal accident involving a policewoman in Tainan, AI-generated deepfake videos misrepresented the actions of the suspect, a female student, portraying her as indifferent. These manipulated videos, allegedly originating from China, spread widely online, inciting public outrage and reputational harm, and raising concerns about AI-driven misinformation and social disruption.[AI generated]

AI principles:
Respect of human rights; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women; General public
Harm types:
Reputational; Public interest
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was explicitly involved in creating a fabricated video that misleads the public about a sensitive incident, causing reputational harm and social disruption. The harm is realized as the video attracted millions of views and led to public outrage and online harassment. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violations of rights through misinformation and emotional manipulation. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]


Yongin City Expands AI-Based Pothole Detection, Reducing Road Hazards and Complaints

2026-05-06
Korea

Yongin City, South Korea, expanded its AI-based pothole monitoring system to 300 vehicles, integrating real-time road hazard detection with public complaint management. This led to a 19% drop in complaints and a 25% reduction in compensation payouts, demonstrating significant harm prevention and improved road safety through AI deployment.[AI generated]

Industries:
Mobility and autonomous vehicles; Government, security, and defence
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly mentioned as being used to detect potholes and road hazards, which directly contributes to reducing risks on the roads. This use of AI has led to tangible benefits such as fewer complaints and lower compensation costs, implying a reduction in harm related to road safety. Since the AI system's use has directly led to harm reduction and improved safety, this qualifies as an AI Incident under the framework, as it involves the use of AI leading to a positive impact on preventing injury or harm to people and property.[AI generated]


AI-Generated Fake Image Damages Brand Reputation in Taiwan

2026-05-06
Chinese Taipei

A university student in Taiwan used AI to create a fake image showing a mouse in a clothing brand's package, falsely implying hygiene issues. The brand, ALT, suffered reputational harm and is seeking NT$10 million in damages, pursuing legal action against the student for malicious use of AI-generated content.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Consumer products
Affected stakeholders:
Business
Harm types:
Reputational
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system used to generate a fake image that falsely depicts a harmful scenario (a mouse in a product package) damaging the brand's reputation. The harm is realized as reputational and commercial damage, which fits under harm to property or business interests. The AI-generated image is central to the incident, and the brand is taking legal action due to this harm. Hence, this is an AI Incident as the AI system's use directly led to harm.[AI generated]


Warnings Issued Over Risks of Relying on AI for Financial Advice in the UK

2026-05-06
United Kingdom

Azets Wealth Management, a UK accountancy firm, warns that relying solely on AI for financial or investment advice could lead to costly mistakes, as AI tools may not reflect recent tax changes or individual circumstances. The firm urges users to seek professional advice alongside AI-generated information.[AI generated]

AI principles:
Robustness & digital security; Transparency & explainability
Industries:
Financial and insurance services
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI hazard
Business function:
Planning and budgeting
Autonomy level:
No-action autonomy (human support)
AI system task:
Organisation/recommenders; Content generation
Why's our monitor labelling this an incident or hazard?

While the article discusses the use of AI in financial advice and cautions against overreliance due to possible inaccuracies, it does not describe any realized harm, malfunction, or misuse of an AI system leading to injury, rights violations, or other harms. The warning is about plausible risks and the current unreliability of AI in this domain, which constitutes a potential risk rather than an actual incident. Therefore, this qualifies as an AI Hazard, reflecting a credible risk of harm if AI is misused for investment advice without proper oversight.[AI generated]