
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents attract growing media attention, they have declined as a share of all AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


Chart: AI incidents and hazards as a percentage of total AI events.
Note: An AI incident or hazard can be reported by one or more news articles covering the same event. Data processing powered by Microsoft Azure using data from Event Registry.
Results: About 14,783 incidents & hazards

Ukraine Deploys AI Turrets for Autonomous Drone Interception in Combat

2026-05-09
Ukraine

Ukraine has deployed AI-powered turrets developed by Brave1, which autonomously detect, track, and intercept enemy drones, including fiber-optic UAVs. Defense Minister Mykhailo Fedorov confirmed their combat use along the front lines, marking direct AI involvement in military operations and harm to enemy assets.[AI generated]

AI principles:
Accountability, Safety
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The AI-powered turret is explicitly described as autonomously performing real-time combat functions against enemy drones in an active conflict zone. The use of AI in this context directly contributes to harm through military engagement, including potential injury or death and the destruction of property. The article reports actual deployment and combat use, not merely potential or hypothetical risks, so the event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]


AI Models Enable Autonomous Cyberattacks and Vulnerability Exploitation

2026-05-09
United States

AI systems like Anthropic's Mythos and models from OpenAI and Alibaba have demonstrated the ability to autonomously discover and exploit software vulnerabilities, self-replicate across computer systems, and facilitate cyberattacks. This has triggered global concern among banks, tech firms, and regulators, highlighting increased cybersecurity risks and ongoing harm.[AI generated]

AI principles:
Robustness & digital security, Safety
Industries:
Digital security, Financial and insurance services
Affected stakeholders:
Business, Government
Harm types:
Economic/Property, Public interest, Reputational
Severity:
AI incident
Business function:
ICT management and information security
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Reasoning with knowledge structures/planning, Content generation
Why's our monitor labelling this an incident or hazard?

The AI system Mythos is explicitly described as capable of discovering software vulnerabilities and generating exploits automatically, which can be used maliciously. This directly relates to harm (d), harm to property, communities, or the environment, through potential cyberattacks on critical infrastructure and institutions. The article indicates that this risk is already materializing as the speed of vulnerability discovery outpaces patching, increasing exposure to attacks. Although no specific incident of harm is detailed, the ongoing increased vulnerability and potential for exploitation constitute a direct or indirect AI Incident. The involvement is through the use of the AI system, and the harm is clearly articulated and ongoing in the cybersecurity domain. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]


AI-Enabled Military Ground Vehicle BARKAN 3 Unveiled in Turkey

2026-05-09
Türkiye

HAVELSAN unveiled the AI-integrated unmanned ground vehicle BARKAN 3 at SAHA 2026 in Istanbul. The vehicle features autonomous navigation, 360-degree sensing, UAV management, and AI-supported target detection. While no harm has occurred, its military capabilities pose plausible future risks if misused or malfunctioning.[AI generated]

AI principles:
Safety, Respect of human rights
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system integrated into a military unmanned ground vehicle with autonomous navigation and threat detection. While the system does not currently have autonomous firing authority, its AI-enabled capabilities could plausibly lead to harm in the future if misused or malfunctioning, especially in a military context. Since no harm has yet occurred and the article discusses the system's development and potential capabilities, this qualifies as an AI Hazard rather than an Incident. It is not merely complementary information because the focus is on the system's capabilities and potential risks, not on responses or updates to past events.[AI generated]


AI-Generated Investment Scam Defrauds Retiree in Antalya

2026-05-09
Türkiye

In Antalya, retiree Suna Ülger was deceived by an AI-supported investment scam using fake videos and promises of profit. Scammers gained remote access to her phone, stole personal data, and transferred 700,000 TL from her accounts. The incident highlights the misuse of AI in facilitating financial fraud.[AI generated]

AI principles:
Privacy & data governance, Transparency & explainability
Industries:
Financial and insurance services, Digital security
Affected stakeholders:
Consumers
Harm types:
Economic/Property, Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI-supported system to promise gains and convince the victim to invest money. The AI system was part of the fraudulent scheme that manipulated the victim into transferring funds and granting remote access to her phone, which was then exploited to steal money. This direct link between the AI system's use and the realized financial harm to the victim fits the definition of an AI Incident, as the AI system's use directly led to harm to property and the victim's financial security.[AI generated]


JR East to Trial Level 4 Autonomous Buses on Kesennuma Line BRT

2026-05-08
Japan

JR East will conduct Level 4 autonomous driving trials on a 15.5 km section of the Kesennuma Line BRT in Miyagi, Japan. The AI system will handle all driving and emergency stops under specific conditions, with staff onboard for safety. No harm has occurred, but future risks remain plausible.[AI generated]

AI principles:
Safety, Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers, General public
Harm types:
Physical (injury), Physical (death), Economic/Property
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions Level 4 autonomous driving, which involves AI systems for vehicle control. The event is about testing and development, with no reported injury, disruption, or rights violation. Since no harm has occurred, but the system could plausibly lead to harm if malfunctioning in the future, this qualifies as an AI Hazard. It is not an incident because no harm has materialized, nor is it complementary information or unrelated news.[AI generated]


Meta's AI-Driven Account Purge Causes Mass Suspensions and Follower Losses

2026-05-08
Chinese Taipei

Meta's recent deployment of advanced AI systems to enforce age restrictions and remove fake or inactive accounts on Instagram and Facebook led to widespread account suspensions, including for legitimate users and celebrities like Lee Yufen and Sunny Wang. The AI's misjudgments caused significant follower losses and user distress, especially in Taiwan.[AI generated]

AI principles:
Fairness, Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Reputational, Psychological
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of advanced AI systems by Meta to analyze user profiles and images to enforce age restrictions and remove fake or inactive accounts. The AI system's use has directly caused the removal of large numbers of followers and account suspensions, which constitutes harm to communities (disruption of social media communities and user reputations) and potential violations of user rights (account suspensions without clear due process). The harm is realized and ongoing, not merely potential. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]


AI-Assisted IPL Ticket Counterfeiting Scam Busted in Lucknow

2026-05-08
India

An inter-state gang in Lucknow used ChatGPT and graphic design software to create highly convincing fake IPL tickets, defrauding cricket fans. The AI system provided technical details, enabling the production of counterfeit tickets. Four suspects from Chhattisgarh were arrested after victims reported financial losses at the stadium.[AI generated]

AI principles:
Safety, Accountability
Industries:
Arts, entertainment, and recreation
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly states that the gang used ChatGPT to gather information and learn how to create fake IPL tickets that were indistinguishable from real ones. This AI involvement directly led to the production and sale of counterfeit tickets, causing financial harm to victims. The harm is realized and directly linked to the AI system's use in the fraudulent scheme. Hence, it meets the criteria for an AI Incident as the AI system's use was pivotal in causing harm through deception and fraud.[AI generated]


UN AI Advisor Warns of Risks: Human Impersonation and Neural Data Commercialization

2026-05-08
Spain

Carme Artigas, co-chair of the UN AI Advisory Council, highlighted two major AI risks at a conference in Oleiros, Spain: technologies that simulate humans and the commercialization of neural data. She emphasized the need for robust regulation to address these potential hazards.[AI generated]

AI principles:
Privacy & data governance, Transparency & explainability
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
General public
Harm types:
Reputational, Psychological, Human or fundamental rights
Severity:
AI hazard
AI system task:
Content generation, Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems that simulate humans (impersonation) and the potential commercialization of neural data as significant risks. These risks are framed as plausible future harms rather than realized incidents. There is no report of actual harm, injury, or violation caused by AI systems, only warnings and expert opinions about what could happen. The involvement of AI is clear, and the potential for harm is credible, meeting the criteria for an AI Hazard. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI harms.[AI generated]


US Probes Illegal Smuggling of Nvidia AI Chips to China via Thailand

2026-05-08
United States

US authorities are investigating allegations that Thailand-based OBON Corp helped smuggle billions of dollars' worth of Super Micro servers containing Nvidia AI chips into China, potentially violating US export controls. Alibaba is named as a possible end customer. The case highlights risks in AI hardware supply chains and export law compliance.[AI generated]

AI principles:
Accountability, Robustness & digital security
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
Government
Harm types:
Economic/Property, Public interest
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The event involves AI systems hardware (Nvidia AI chips) and their use and transfer, which is under investigation for potentially breaching legal export restrictions. This constitutes a plausible risk of violation of legal obligations and possibly other harms if the restricted technology is used in unauthorized ways. Since no actual harm or incident is reported yet, but there is a credible risk of legal violations and regulatory circumvention, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential and ongoing investigation rather than a confirmed incident causing harm.[AI generated]


Vietnam Uses AI for Online Propaganda and Censorship

2026-05-08
Viet Nam

Vietnam's Communist Party is implementing a strategy to use AI-powered moderation tools and social media influencers to control online narratives and suppress dissent. The plan involves recruiting thousands of AI experts to remove content and guide discussions, leading to ongoing violations of freedom of expression and information.[AI generated]

AI principles:
Respect of human rights, Democracy & human autonomy
Industries:
Government, security, and defence
Affected stakeholders:
General public, Civil society
Harm types:
Human or fundamental rights, Public interest
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems explicitly for content moderation and propaganda dissemination, which directly impacts human rights by censoring dissent and controlling information. The article details concrete plans and ongoing actions, indicating realized harm rather than just potential risk. Hence, this qualifies as an AI Incident due to the direct role of AI in violating rights and harming communities through ideological control and censorship.[AI generated]


NHTSA Investigates Avride-Uber Robotaxi Crashes in Texas

2026-05-08
United States

The US National Highway Traffic Safety Administration is investigating Avride, Uber's autonomous vehicle partner, after 16 crashes—including property damage and a minor injury—in Dallas and Austin. The incidents, linked to failures in Avride's AI driving system, raise concerns about the safety and competence of self-driving robotaxis.[AI generated]

AI principles:
Safety, Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
General public
Harm types:
Physical (injury), Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The autonomous vehicles operated by Avride are AI systems as they perform complex real-time decision-making for navigation and control. The crashes caused property damage and a minor injury, which are harms directly linked to the AI system's malfunction or insufficient capability. The investigation by NHTSA confirms the AI system's involvement in these harms. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use and malfunction.[AI generated]


First Case of AI Addiction Treated in Venice

2026-05-08
Italy

In Venice, Italy, a 20-year-old woman has been treated by the local addiction service (Serd) for behavioral addiction to an AI conversational system. The AI's adaptive responses reinforced her dependency, leading to social isolation and mental health harm. This is the first such case reported in Italy.[AI generated]

AI principles:
Human wellbeing, Safety
Industries:
Consumer services
Affected stakeholders:
Consumers, Women
Harm types:
Psychological
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved as the source of the behavioral addiction, with the AI's adaptive responses contributing to the harm experienced by the patient. The harm is to the health of a person, fitting the definition of an AI Incident. The article describes an actual case of harm, not just a potential risk or general information, so it qualifies as an AI Incident.[AI generated]


AI-Coded Apps Leak Sensitive Data Due to Poor Security

2026-05-07
Israel

Researchers at RedAccess found over 5,000 web apps built with AI coding tools like Lovable, Replit, Base44, and Netlify exposed sensitive corporate and personal data due to inadequate security. These apps, often created by non-experts, were publicly accessible, leading to privacy violations and potential regulatory breaches.[AI generated]

AI principles:
Privacy & data governance, Robustness & digital security
Industries:
Digital security, IT infrastructure and hosting
Affected stakeholders:
Business, Consumers
Harm types:
Human or fundamental rights, Reputational
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI coding tools that have been used to create applications exposing sensitive data, including personal and corporate information. The exposure of this data is a direct harm to privacy and security, which is a violation of rights and a significant harm. The AI systems' default settings and ease of use without proper security controls have directly led to this harm. Although some companies argue that public apps are expected behavior, the scale and nature of the data exposed indicate a failure in the AI systems' deployment and use, leading to real harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]


Hacker Exploits Security Flaws in Yarbo Robot Lawnmowers, Demonstrates Physical and Privacy Risks

2026-05-07
United States

Security researcher Andreas Makris remotely hacked Yarbo robot lawnmowers, demonstrating their severe vulnerabilities. He controlled the robots from Germany, nearly running over a Verge editor in the US, and accessed sensitive data. The incident highlights risks of physical harm and privacy breaches due to poor AI security design.[AI generated]

AI principles:
Privacy & data governance, Robustness & digital security
Industries:
Robots, sensors, and IT hardware; Consumer products
Affected stakeholders:
Consumers, General public
Harm types:
Physical (injury), Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The robot lawn mowers are AI systems as they autonomously navigate and perform complex tasks like mowing, using onboard computing and sensors. The event involves the use and malfunction (security vulnerabilities) of these AI systems, which have directly led to harms: physical danger to a person (the researcher lying in the mower's path), privacy violations (unauthorized access to cameras, GPS, Wi-Fi credentials), and potential broader harms (botnet formation, network intrusion). The article documents actual exploitation and demonstration of these harms, not just potential risks. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


US-China Consider Formal AI Talks to Prevent Military and Economic Crises

2026-05-07
China

The US and China are considering formal, regular dialogues to address risks from AI competition, including potential crises from autonomous military systems, loss of AI control, and misuse by non-state actors. The talks aim to establish safeguards and prevent AI-driven military or economic crises amid escalating technological rivalry.[AI generated]

AI principles:
Safety, Robustness & digital security
Industries:
Government, security, and defence; Financial and insurance services
Affected stakeholders:
General public, Business
Harm types:
Physical (death), Economic/Property, Public interest
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The event involves AI systems as it concerns AI models and autonomous military systems, and the discussion is about preventing risks and crises related to AI. Since no actual harm or incident has occurred yet, but there is a credible risk of future harm (e.g., AI-driven military escalation or attacks), this qualifies as an AI Hazard. The article's main focus is on the plausible future risks and the planned official dialogue to manage these risks, not on a realized AI incident or complementary information about responses to past incidents.[AI generated]


AI-Generated Fake Buyer Reviews Mislead Consumers on Chinese E-Commerce Platforms

2026-05-07
China

Chinese e-commerce merchants are using AI-generated images to create fake buyer reviews, misleading consumers about product quality and causing financial and trust harm. The lack of clear labeling and platform oversight has enabled widespread deception, prompting calls for stricter regulation and improved detection mechanisms.[AI generated]

AI principles:
Accountability, Transparency & explainability
Industries:
Consumer services
Affected stakeholders:
Consumers
Harm types:
Economic/Property, Reputational
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate fake buyer images that mislead consumers, causing harm to consumer rights and potentially leading to financial loss and erosion of trust. This harm is directly linked to the AI system's use (misuse) in generating deceptive content. The harm is realized, not just potential, as consumers have purchased products based on these AI-generated images and found them to be of poor quality. Therefore, this qualifies as an AI Incident due to violation of consumer rights and false advertising facilitated by AI misuse.[AI generated]


AI Token Theft Surge Causes Financial Harm to Startups

2026-05-07
United States

Stripe CEO Patrick Collison reports a surge in token theft targeting AI startups, with fraudsters creating fake accounts to steal compute tokens used for AI services. Automated attacks have made free trials costly, forcing some companies to abandon them. The abuse has doubled in six months, causing significant financial losses.[AI generated]

AI principles:
Robustness & digital security
Industries:
IT infrastructure and hosting, Digital security
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Why's our monitor labelling this an incident or hazard?

The event involves the misuse of AI system tokens, which are essential for accessing AI services, leading to direct financial harm to AI startups. The token theft is facilitated by automated agents consuming tokens at machine speed, indicating AI system involvement in the misuse. This misuse constitutes harm to property (financial assets of startups) and disrupts the operation and growth of AI businesses. Therefore, the event meets the criteria of an AI Incident as the AI system's use and its tokens are directly linked to realized harm.[AI generated]


Hyundai Rotem and Anduril Collaborate on AI-Driven Military Command Systems

2026-05-07
Korea

Hyundai Rotem and U.S. defense tech firm Anduril have signed an agreement in Seoul to jointly develop AI-based command and control systems for military vehicles, drones, and robots. The collaboration aims to integrate Anduril's Lattice AI OS into unmanned platforms, enabling autonomous operations and swarm control, raising future risks of AI-enabled autonomous weapon systems.[AI generated]

AI principles:
Safety, Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death)
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the development and planned use of an AI system (LatticeOS) for autonomous and semi-autonomous military operations, including swarm control and counter-drone activities. Although no harm has yet occurred, the deployment of AI in lethal or military command systems carries credible risks of injury, violation of rights, or disruption, making this a plausible future hazard. Therefore, this qualifies as an AI Hazard rather than an Incident or Complementary Information, as the article focuses on the system's development and intended operational use without reporting actual harm or incident.[AI generated]


Unauthorized Use of AI-Generated Celebrity Likeness in Livestream Sales Leads to Detention in China

2026-05-07
China

In Datong, China, a netizen named Xing illegally used AI tools to create a digital likeness of KMT chairperson Cheng Liwen for livestream sales without authorization. This misuse of AI for commercial gain infringed on personal rights, disrupted online order, and resulted in administrative detention by local police.[AI generated]

AI principles:
Respect of human rights, Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Other
Harm types:
Human or fundamental rights, Reputational
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI tool to generate a digital human likeness (AI digital person) of a real individual without authorization, which was then used in live-streaming commerce. This unauthorized use infringes on the person's rights and caused social harm by disturbing network order and misleading the public. The legal action taken confirms the harm and violation of laws. The AI system's misuse directly led to these harms, fitting the definition of an AI Incident involving violations of rights and harm to communities.[AI generated]


AI-Powered DeepLoad Malware Targets Nigerian Institutions

2026-05-07
Nigeria

Nigeria's National Information Technology Development Agency (NITDA) has warned of an active AI-powered malware, DeepLoad, targeting government agencies, banks, businesses, and individuals. The malware uses social engineering to infiltrate systems, steal sensitive data, evade antivirus detection, and enable financial fraud and operational disruptions across Nigeria.[AI generated]

AI principles:
Privacy & data governance, Robustness & digital security
Industries:
Government, security, and defence; Financial and insurance services
Affected stakeholders:
Government, Business
Harm types:
Economic/Property, Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation, Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The DeepLoad malware explicitly incorporates AI-generated code to evade antivirus detection and maintain persistence, qualifying it as an AI system. The malware's active infections have caused direct harms including credential theft, financial fraud, system compromise, and risks to national security, fulfilling the criteria for an AI Incident. The advisory details realized harms and ongoing attacks, not just potential risks, confirming this classification.[AI generated]