
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking requires evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and other stakeholders worldwide gain insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents have attracted growing media attention, they have declined as a share of all AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: about 14,030 incidents & hazards

US Charges Super Micro Executives for Smuggling AI Technology to China

2026-03-20
United States

Three individuals, including a co-founder of Super Micro Computer Inc., were charged by US authorities for conspiring to illegally export billions of dollars worth of AI servers with Nvidia chips to China, violating US export control laws and posing a national security risk. Super Micro cooperated with investigators.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Government, security, and defence; IT infrastructure and hosting
Affected stakeholders:
Government; General public
Harm types:
Public interest
Severity:
AI incident
Business function:
ICT management and information security
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

The event describes a criminal conspiracy involving the diversion of AI-related hardware technology in violation of export laws, which directly implicates the use and misuse of AI systems (chips for AI models). The harm is indirect but material, involving breach of legal obligations and risks to national security, which qualifies as harm under the framework. The AI system's role is pivotal as the chips are essential for AI development, and their illegal diversion is the core of the incident. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.[AI generated]


Man Arrested in Albacete for Using AI to Create Fake Nude Image of Minor and Threatening Her

2026-03-20
Spain

A man in Albacete, Spain, was arrested after using AI to manipulate a minor's photo, creating a fake nude image. He sent the image to the victim and threatened her and her family to withdraw her police complaint, causing psychological harm and violating her rights.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Human or fundamental rights; Psychological
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI was used to manipulate a photograph of a minor to create a nude image, which was then used to threaten the victim. This manipulation and subsequent threats constitute violations of human rights and personal safety, fulfilling the criteria for an AI Incident. The AI system's use directly caused harm through image manipulation and intimidation, thus qualifying as an AI Incident rather than a hazard or complementary information.[AI generated]


US Army Receives First Autonomous-Ready Black Hawk Helicopter

2026-03-20
United States

The US Army has received its first H-60Mx Black Hawk helicopter equipped with an AI-driven autonomy suite, enabling fully autonomous or piloted flight. Developed with DARPA and Sikorsky, the aircraft will undergo rigorous testing, marking a significant step toward scaling autonomous military aviation. No harm has occurred yet.[AI generated]

Industries:
Mobility and autonomous vehicles; Government, security, and defence
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI system integrated into a military helicopter enabling autonomous flight, which qualifies as an AI system. The event concerns the delivery and testing phase, with no mention of any harm or malfunction. Given the nature of autonomous military aircraft, there is a credible risk that the AI system could lead to injury, operational disruption, or other harms in the future. Since no harm has yet occurred, the event is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI technology with potential safety implications.[AI generated]


AI Misuse and Fraud Prevention in China's Financial and Social Platforms

2026-03-20
China

In China, AI technologies have been misused for deepfake scams, including impersonating analysts and bypassing biometric authentication, causing financial losses. Conversely, platforms like MiLian Technology and Yiren Zhike deploy AI-driven risk control systems to prevent fraud, significantly reducing scam cases and protecting users' property and rights.[AI generated]

AI principles:
Robustness & digital security; Privacy & data governance
Industries:
Financial and insurance services; Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-based data modelling and an AI-powered pre-warning platform that analyses data to identify potential fraud victims and automatically blocks malicious network traffic. At the same time, AI tools were misused for deepfake scams, including impersonation and biometric bypass, causing realized financial losses to consumers. Because the misuse of AI systems has directly led to materialized harm (financial loss), this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports concrete outcomes of AI deployment, both harmful and protective.[AI generated]


Russia Proposes Sweeping Regulations to Restrict Foreign AI Tools

2026-03-20
Russia

Russia's Ministry for Digital Development has proposed regulations that could ban or restrict foreign AI tools like ChatGPT, Claude, and Gemini if they fail to comply with data localization and content control requirements. The rules aim to protect citizens and promote domestic AI, raising concerns about censorship and restricted access.[AI generated]

AI principles:
Respect of human rights; Democracy & human autonomy
Industries:
Government, security, and defence
Affected stakeholders:
Consumers; General public
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (foreign AI tools such as ChatGPT, Claude, Gemini) and concerns their use and regulation. However, the article does not describe any realized harm or incident caused by these AI systems. Instead, it discusses potential future restrictions and regulatory measures aimed at preventing possible harms such as manipulation or discriminatory algorithms. Therefore, this is a plausible future risk scenario related to AI system use and governance, but no direct or indirect harm has yet occurred. The main focus is on the regulatory initiative and its potential impact, making it an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


AI Agent 'OpenClaw' Causes Academic Fraud and Financial Loss Amid Security Concerns in China

2026-03-20
China

The AI agent OpenClaw, widely adopted in China, has enabled academic fraud by generating papers with fabricated references and caused unexpected financial losses due to continuous operation. Its high system permissions pose significant privacy and security risks, prompting government support and regulatory scrutiny.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Education and training
Affected stakeholders:
Business; General public
Harm types:
Economic/Property; Reputational; Human or fundamental rights
Severity:
AI incident
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI system 'OpenClaw' is explicitly mentioned and is used to perform complex tasks autonomously. The harm described is financial, with users incurring unexpectedly high bills due to the AI's continuous operation. This constitutes harm to individuals (economic harm), which fits within the scope of AI Incident as the AI system's use has directly led to harm (financial loss). Therefore, this event qualifies as an AI Incident.[AI generated]


AI-Generated Deepfake Video Causes Misinformation and Reputational Harm to Indonesian Actor

2026-03-20
Indonesia

An AI-generated deepfake video falsely depicted Indonesian actor Ari Wibowo marrying Clara Oktavia, leading to widespread misinformation and reputational harm. Ari Wibowo publicly clarified the hoax, expressing concern over the increasing misuse of AI for creating fake news and misleading the public.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Other; General public
Harm types:
Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system used to create a fabricated video (deepfake) that misrepresents a real person, leading to misinformation and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of false information dissemination and violation of personal rights. The harm is materialized, not just potential, as the actor publicly addresses and takes action against the hoax.[AI generated]


SoftBank Plans Massive AI Data Center in Ohio Powered by Natural Gas

2026-03-20
United States

SoftBank Group Corp. is planning a large AI data center in Ohio, to be powered by $33 billion in natural gas infrastructure. The facility, located at a former uranium enrichment site, is intended to support advanced AI operations but raises environmental concerns because of its significant energy demands.[AI generated]

AI principles:
Sustainability
Industries:
IT infrastructure and hosting; Energy, raw materials, and utilities
Affected stakeholders:
General public
Harm types:
Environmental
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The event involves the development and planned use of an AI system infrastructure (a large AI data center) and its energy sourcing, which could plausibly lead to environmental harm or community impact due to the scale of natural gas power consumption. No actual harm or incident is reported, only plans and projections. Hence, it fits the AI Hazard category as it plausibly could lead to harm in the future but has not yet caused harm.[AI generated]


Indian Cricketer Gautam Gambhir Files Lawsuit Over AI Deepfakes and Identity Misuse

2026-03-19
India

Indian cricketer and coach Gautam Gambhir filed a civil suit in Delhi High Court after AI-generated deepfakes and voice cloning led to widespread impersonation, misinformation, and unauthorized commercial use of his identity. The fabricated videos, viewed millions of times, prompted legal action seeking damages and urgent content removal.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Other; General public
Harm types:
Reputational; Economic/Property
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly mentions AI-generated fabricated content that weaponizes Gambhir's identity to spread misinformation and cause harm. This misuse of AI-generated content has already led to harm (reputational and legal rights violations). The filing of a civil suit and injunction indicates the harm is materialized, not just potential. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.[AI generated]


HSBC Plans Massive Job Cuts Driven by AI Automation

2026-03-19
United Kingdom

HSBC is considering cutting up to 20,000 jobs, about 10% of its workforce, over the next 3-5 years as it integrates AI to automate middle- and back-office roles. The proposed downsizing, still under review, highlights AI's potential impact on employment in the banking sector.[AI generated]

AI principles:
Human wellbeing
Industries:
Financial and insurance services
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI hazard
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The involvement of AI in the workforce reduction plan is explicit, as the job cuts are linked to an AI overhaul. The harm here is indirect, relating to potential job losses affecting employees, which constitutes harm to people (economic and employment harm). Although the plan is at an early stage and the cuts are not yet realized, the credible risk of significant job losses due to AI adoption qualifies this as an AI Hazard rather than an Incident, since the harm is plausible but not yet materialized.[AI generated]


AI-Generated Deepfake Abuse Leads to Legal Action and Media Consequences in Germany

2026-03-19
Germany

Actress Collien Fernandes accused her ex-husband Christian Ulmen of using AI-generated deepfake pornography and fake profiles to commit digital violence, identity theft, and emotional harm. Legal proceedings have begun in Spain and Germany, and broadcaster ProSieben removed Ulmen's show following the allegations. The incident highlights AI's role in personal rights violations.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Human or fundamental rights; Psychological
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI-generated deepfake content that has been distributed and caused harm to Collien Fernandes. The harm is realized and ongoing, as the fake images and videos have been circulating for years, and the victim has filed a legal complaint. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person, specifically violations of rights and reputational harm. The article does not focus on future risks or responses but on the actual harm caused by the AI-generated content.[AI generated]


Fraudulent AI-Generated Music Streaming Scheme Leads to $8 Million Forfeiture

2026-03-19
United States

Michael Smith, of North Carolina, pleaded guilty to conspiracy to commit wire fraud after using AI to generate hundreds of thousands of fake songs and deploying bots to stream them billions of times. This scheme diverted over $8 million in royalties from legitimate artists via major platforms like Spotify and Apple Music.[AI generated]

AI principles:
Accountability; Fairness
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly describes the use of AI-generated songs and automated bot streaming to commit wire fraud against streaming services, resulting in the theft of over $8 million in royalties. This constitutes a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident due to violation of intellectual property rights and economic harm to artists and rights holders.[AI generated]


Uber and Rivian Announce Major Investment in Autonomous Robotaxi Fleet

2026-03-19
United States

Uber will invest up to $1.25 billion in Rivian to deploy up to 50,000 AI-powered autonomous robotaxis by 2031, starting with 10,000 vehicles in San Francisco and Miami in 2028. The partnership aims to expand across 25 cities in the US, Canada, and Europe, raising future AI safety concerns.[AI generated]

AI principles:
Safety
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
General public
Harm types:
Physical (injury); Physical (death)
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly discusses the development and investment in autonomous vehicle technology, which involves AI systems for self-driving capabilities. Although no incident or harm has occurred yet, the nature of autonomous vehicles means there is a credible risk of future harm (e.g., accidents, safety issues) related to AI system malfunction or misuse. Since the article focuses on investment and development without reporting any realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Cloudflare CEO Predicts AI Bots Will Surpass Human Web Traffic by 2027

2026-03-19
United States

Cloudflare CEO Matthew Prince warns that AI bot traffic, driven by generative AI agents, could exceed human web traffic by 2027. This surge may disrupt internet infrastructure and business models, as bots visit far more sites than humans, potentially reshaping online search and monetization strategies.[AI generated]

AI principles:
Robustness & digital security
Industries:
IT infrastructure and hosting
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI hazard
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event describes a credible and significant future shift in internet traffic due to AI systems (AI-powered bots) that could plausibly lead to harms such as overwhelming web infrastructure and potential disruption of internet services. However, no actual harm or incident has yet occurred; the article is primarily about projections and preparations for this shift. Therefore, it fits the definition of an AI Hazard, as it plausibly could lead to incidents involving disruption of critical infrastructure or harm to communities if unmanaged.[AI generated]


Senior Journalist Suspended for Publishing AI-Generated Fake Quotes

2026-03-19
Ireland

Peter Vandermeersch, a senior journalist at Mediahuis, was suspended after admitting to publishing newsletters containing AI-generated fake quotes. He relied on language models like ChatGPT and Perplexity without proper verification, resulting in misinformation and violating journalistic standards. The incident affected outlets in Ireland and the Netherlands.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business; General public
Harm types:
Reputational; Public interest
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system (language models like ChatGPT, Perplexity, Google Notebook) was explicitly used in content creation. The AI's hallucinations caused the journalist to publish false quotes, which is misinformation harming the public's right to truthful information and trust in media. This constitutes a violation of rights and harm to communities. The harm has already occurred as articles with fabricated quotes were published and later removed. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm.[AI generated]


Tesla and Waymo Robotaxis Involved in Multiple Crashes and Disruptions in U.S. Cities

2026-03-19
United States

Tesla and Waymo autonomous vehicles have reported numerous crashes in Austin, causing property damage and at least one minor injury. Waymo's driverless cars also disrupted a construction site in Nashville. These incidents highlight ongoing safety and operational concerns with AI-driven robotaxis in U.S. urban environments.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
General public
Harm types:
Physical (injury); Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (Waymo Driver) involved in real-world use and incidents causing harm, including a collision with a child resulting in minor injuries and other operational failures that could affect safety and emergency response. These constitute direct or indirect harm linked to the AI system's use. Although the overall safety record is positive, the reported incidents meet the criteria for an AI Incident because harm has occurred. The discussion of data presentation and safety concerns adds context but does not negate the occurrence of harm. Therefore, the event is best classified as an AI Incident.[AI generated]


AI Deepfake Videos Victimize Students, Prompt Calls for State Action in Pennsylvania

2026-03-19
United States

AI-generated deepfake videos depicting Radnor High School students in inappropriate situations caused psychological harm and distress. Parents criticized the school's response and urged Governor Josh Shapiro and state officials to establish statewide standards and protections against AI misuse in schools. The incident occurred in Pennsylvania, USA.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Education and training
Affected stakeholders:
Children
Harm types:
Psychological; Human or fundamental rights
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

AI deepfakes are generated by AI systems and their use here has directly led to harm to students (victimization). This constitutes an AI Incident as the AI system's use has caused harm to individuals. The article focuses on the harm caused and the call for policy response, indicating a realized harm rather than just a potential risk.[AI generated]


AI-Enabled Financial Scams Cause €20 Million Losses in Croatia, Targeting Youth

2026-03-19
Croatia

Fraudsters in Croatia used AI tools and deepfake technology to conduct sophisticated financial scams, resulting in over €20 million in losses. Young people, especially those experiencing loneliness and social anxiety, were particularly vulnerable to emotional manipulation and deception enabled by these AI systems.[AI generated]

AI principles:
Safety; Transparency & explainability
Industries:
Financial and insurance services
Affected stakeholders:
General public
Harm types:
Economic/Property
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (AI tools and deepfake technology) by fraudsters to perpetrate financial scams that have directly led to significant financial losses and emotional manipulation, especially among young people. This fits the definition of an AI Incident because the AI system's use has directly caused harm (financial loss and emotional harm). Although the article also covers responses and prevention efforts, the core event is the realized harm from AI-enabled frauds, not just potential or complementary information.[AI generated]


AI Clinical Decision Support System Reduces Vascular Events in Stroke Patients

2026-03-19
China

A cluster-randomized clinical trial across 77 hospitals in China found that an AI-powered clinical decision support system (CDSS) for stroke care led to a 27% reduction in new vascular events and improved long-term outcomes. The AI tool integrates imaging analysis and treatment recommendations, demonstrating significant health benefits over conventional care.[AI generated]

Industries:
Healthcare, drugs, and biotechnology
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection; Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system used for clinical decision support in stroke care, directly affecting treatment decisions and patient outcomes. The AI system's involvement produced a measurable impact on health (a 27% reduction in new vascular events). Although the outcome here is beneficial rather than harmful, the framework covers events in which AI system use has a direct, realized effect on health, and measurable changes in health outcomes are relevant to incident classification. The event is not a hazard (potential future harm) or complementary information (an update or response to a prior event), but a primary report of AI system use leading to measurable health outcomes. It is therefore classified as an AI Incident.[AI generated]


South Korea Plans AI-Based National Emergency Response System

2026-03-18
Korea

South Korea's National Fire Agency and KT consortium have begun designing an AI and cloud-based next-generation 119 emergency response system. The project aims to unify regional systems, automate emergency call analysis, and enhance disaster response nationwide, but no AI-related harm or malfunction has occurred yet.[AI generated]

Industries:
Government, security, and defence; IT infrastructure and hosting
Severity:
AI hazard
Business function:
Citizen/customer service
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Event/anomaly detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the development and intended use of an AI system for emergency response, which could plausibly lead to significant impacts on public safety. However, since the system is still in the design and planning phase and no harm or malfunction has occurred, it represents a potential future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the AI system's use could plausibly lead to harm if issues arise during deployment or operation, but no harm has yet materialized.[AI generated]