
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. While AI incidents appear to be attracting more media attention, they have in fact declined as a share of total AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: about 14,062 incidents & hazards

Meta Faces Landmark Trial Over AI Algorithms' Harm to Children in New Mexico

2026-03-23
United States

Meta is on trial in New Mexico, accused of misleading users about children's safety on its platforms. Prosecutors allege Meta's AI-driven algorithms promoted harmful and addictive content to minors, prioritizing profits over safety and violating consumer protection laws. Jury deliberations follow extensive testimony on the algorithms' impact.[AI generated]

AI principles:
Human wellbeing; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The event involves an AI system insofar as Meta's social media platforms use AI-driven algorithms for content recommendation and user engagement, which are central to the allegations of harm to teens' mental health and safety. The harm described (mental health damage and risk of sexual exploitation) falls under harm to groups of people. Since the harm is alleged to have occurred and is the subject of a legal case, this qualifies as an AI Incident. The event focuses on the use and impact of AI-enabled social media systems leading to harm, not just potential or future harm, nor is it merely complementary information or unrelated news.[AI generated]


KAI to Test Autonomous AI Satellite Fault Response in Space

2026-03-23
Korea

Korea Aerospace Industries (KAI) and partners will launch a CubeSat equipped with an AI module to autonomously diagnose and respond to satellite faults in orbit. The project aims to validate onboard AI processing for real-time, self-directed satellite operation; it presents plausible future risks if malfunctions occur, though no harm has yet materialized.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Mobility and autonomous vehicles
Severity:
AI hazard
Business function:
Maintenance
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly mentioned as being developed and tested for autonomous satellite operation. However, the article focuses on the planned deployment and testing phase without any reported harm or malfunction. Since no direct or indirect harm has occurred, and the AI system's use is prospective, this qualifies as an AI Hazard due to the plausible future risk associated with autonomous AI operation in space systems, but not an AI Incident or Complementary Information.[AI generated]


AI Companion Chatbots Expose Australian Children to Harmful Content

2026-03-23
Australia

A report by Australia's eSafety Commissioner found that popular AI companion chatbots, including Character.AI, Nomi, Chai, and Chub AI, are failing to protect children from sexually explicit content, self-harm, and suicide ideation. The platforms lack robust age verification and safeguards, exposing children to significant risks.[AI generated]

AI principles:
Safety; Human wellbeing
Industries:
Consumer services; Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological
Severity:
AI incident
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (generative AI chatbots) whose use has directly led to harm to children and teenagers through exposure to harmful content and emotional manipulation. The harms are realized and documented, including mental health impacts and exposure to child sexual exploitation material. The failure of the AI systems' providers to implement robust age checks and content moderation constitutes a malfunction or inadequate use safeguards. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to persons (children and teens).[AI generated]


GM Begins Supervised Testing of Next-Gen Autonomous Vehicles in Michigan and California

2026-03-23
United States

General Motors has deployed 200 vehicles equipped with advanced autonomous driving technology for supervised public-road testing on highways in Michigan and California. Trained drivers are present to intervene if needed. The testing aims to refine GM's 'eyes-off' driving system, slated for launch in 2028, but poses plausible future risks if AI malfunctions.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Workers; General public
Harm types:
Other
Severity:
AI hazard
Business function:
Manufacturing
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (autonomous driving technology) in real-world testing, which could plausibly lead to harm if the technology malfunctions or is misused. However, no actual harm or incident is reported in the article. Therefore, this qualifies as an AI Hazard, as the deployment of such technology on public roads could plausibly lead to injury or other harms in the future if failures occur.[AI generated]


AI-Generated Fake Personas Drive Viral Crypto Scams on X

2026-03-23
United States

Blockchain investigator ZachXBT exposed a network of over 10 X accounts using AI-generated fake personas and deepfakes to spread sensational war-related misinformation, boost engagement, and funnel users into crypto scams. The operation netted six-figure profits, causing widespread financial harm and manipulating online communities.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing; Financial and insurance services
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems used to impersonate influencers and generate misleading content, which directly led to financial harm through crypto scams (pump-and-dump schemes and fake giveaways). The harm to individuals' property (financial loss) and communities (misinformation and manipulation) is realized. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through malicious use and coordinated scam activity.[AI generated]


AI-Enabled Drone Countermeasure Systems Developed and Deployed in Taiwan

2026-03-22
Chinese Taipei

Taiwanese company Wistron integrates AI technologies into the Aegis drone countermeasure system, deployed at over 1,200 critical sites. The government plans a NT$44.2 billion investment over five years to foster the domestic drone industry. The development highlights potential future risks from AI-enabled military systems, though no current harm has been reported.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Severity:
AI hazard
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the "雷盾" drone countermeasure system) that uses AI for image analysis and signal processing to detect and counter drones. However, the article does not describe any realized harm or incident resulting from the AI system's use or malfunction. Instead, it presents the system as a defensive technology aimed at mitigating drone threats, which could plausibly lead to preventing harm but does not itself constitute an incident. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future risk context and the system's role in defense rather than an actual incident or harm.[AI generated]


French Prosecutors Investigate AI-Generated Deepfake Scandal Involving Elon Musk's Companies

2026-03-22
France

French prosecutors are investigating Elon Musk's companies X and xAI after Grok, their AI system, generated sexualized deepfake images, including those depicting minors. Authorities suspect the controversy may have been orchestrated to artificially inflate company valuations ahead of a planned 2026 stock listing. Musk publicly insulted the prosecutors in response.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children; Business
Harm types:
Human or fundamental rights; Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system ('Grok') to generate deepfake sexual videos, which are being investigated for their role in artificially inflating company value and potentially involving illegal content related to children. This involves direct use of AI leading to violations of law and human rights, fitting the definition of an AI Incident. The investigation and legal scrutiny confirm that harm has occurred or is ongoing, rather than just a potential risk, thus it is not merely a hazard or complementary information.[AI generated]


Critical AI System Vulnerabilities in OpenClaw and Langflow Lead to Security Risks and Exploitation

2026-03-22
China

360 Security discovered and reported a zero-day vulnerability in OpenClaw's intelligent agent gateway, confirmed by its founder, allowing attackers to bypass authentication and potentially crash systems. Separately, Langflow's API flaw enabled remote code execution, actively exploited within 20 hours of disclosure, causing unauthorized access and data theft. Both incidents highlight urgent AI security challenges.[AI generated]

AI principles:
Robustness & digital security; Accountability
Industries:
Digital security; IT infrastructure and hosting
Affected stakeholders:
Business; Consumers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
ICT management and information security
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The OpenClaw Gateway is an AI-related system (smart agent gateway) whose security vulnerability allows attackers to bypass authentication and gain control, potentially causing system resource exhaustion or crashes. This constitutes a direct risk of harm to property or system integrity. Since the vulnerability is confirmed and exploitable, and the article reports on the discovery and confirmation of this high-risk flaw, it qualifies as an AI Incident due to the realized security risk and potential harm stemming from the AI system's malfunction or misuse.[AI generated]


Protests and Legislative Action in Germany Over AI-Generated Deepfake Sexual Abuse

2026-03-22
Germany

Around 10,000 people protested in Berlin against digital sexual violence, following allegations that AI tools were used to create pornographic deepfakes without consent. The German government is preparing urgent legislation to address legal gaps exposed by the incident involving actress Collien Fernandes and her ex-husband.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Human or fundamental rights; Psychological; Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to create deepfake images with sexual content without consent, which constitutes a violation of rights and harm to the individual involved. The harm has already occurred, as evidenced by the public outcry and legal complaints. The government's legislative response aims to address these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and digital sexual abuse).[AI generated]


AI-Driven Internet Fraud Surges in Germany, Exploiting Language Barriers

2026-03-22
Germany

Criminals in Germany are increasingly using AI to create convincing fake online shops and phishing attacks, overcoming language barriers and targeting new victim groups. The Bundeskriminalamt reports a rise in both the quality and quantity of internet fraud, resulting in significant financial losses for victims.[AI generated]

AI principles:
Safety; Transparency & explainability
Industries:
Digital security; Financial and insurance services
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The involvement of AI in generating realistic fake content and phishing messages directly contributes to harm by enabling fraudsters to deceive victims and cause financial losses. This constitutes harm to individuals and communities, fitting the definition of an AI Incident where the use of AI systems has directly led to harm. The article explicitly states that these harms are occurring and increasing due to AI use, not just potential or future risks.[AI generated]


TikTok and Instagram Ban Accounts for Unlabeled, Exploitative AI-Generated Black Female Avatars

2026-03-22
United Kingdom

TikTok banned around 20 accounts after a BBC and Riddance investigation revealed the use of AI-generated, highly sexualized Black female avatars to promote explicit content without disclosure. The avatars, often racially stereotyped and exploitative, also appeared on Instagram, prompting Meta to investigate. The incident highlights AI misuse and community harm.[AI generated]

AI principles:
Fairness; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Psychological; Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event clearly involves AI systems generating digital avatars and videos, which are used in harmful ways including sexual exploitation, racial stereotyping, and identity theft. The AI-generated content is misleading and not properly labeled, violating platform policies and causing harm to individuals and communities. TikTok's banning of accounts confirms the recognition of harm. The harms are realized, not just potential, including violation of rights and harm to communities. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]


AI-Generated Film Poster Sparks Outrage Among Sikh Community in Mumbai

2026-03-22
India

An AI-generated poster for the film Dhurandhar: The Revenge, depicting Ranveer Singh in Sikh attire holding a cigarette, sparked outrage among the Sikh community in Mumbai. Complaints were filed with police, alleging the poster and film scenes disrespect Sikh religious sentiments, leading to community backlash and formal investigations.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Arts, entertainment, and recreation; Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Psychological
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used to generate the controversial poster, which directly led to harm in the form of violation of religious sentiments and cultural disrespect, a form of harm to communities and a breach of rights. The complaint and public outrage indicate that the harm is realized, not just potential. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


Suspected Terrorist Attack Targets Czech AI Drone Factory Supplying Ukraine

2026-03-21
Czechia

A fire broke out at LPP Holding's factory in Pardubice, Czechia, which produces AI-powered attack drones for Ukraine. Authorities are investigating the incident as a possible terrorist attack, following claims of responsibility by an anti-arms group. The fire disrupted production of critical AI military systems.[AI generated]

AI principles:
Robustness & digital security
Industries:
Robots, sensors, and IT hardware
Severity:
AI incident
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The factory produces AI-powered autonomous attack drones, which are AI systems by definition. The fire, likely caused by a terrorist attack, has disrupted the production of these AI systems, which are used in active conflict zones. This disruption constitutes harm to critical infrastructure and potentially to communities affected by the conflict. The AI system's development and use are directly implicated in the event. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


Man Arrested for Posting AI-Generated Defamatory Images of Indian PM

2026-03-21
India

Delhi Police arrested Siddhnath Kumar from Bihar for creating and sharing AI-generated objectionable images of Prime Minister Narendra Modi and female leaders on social media. The images, intended to mislead and disrupt public order, led to charges of forgery, defamation, and criminal intimidation. Investigations into the dissemination network are ongoing.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Government; General public
Harm types:
Reputational; Public interest
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI to create images that were disseminated with the intent to mislead and disturb public order, which is a form of harm to communities. The AI system's use directly led to legal consequences and police action, indicating realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The involvement of AI in generating misleading content that impacts public order fits the definition of an AI Incident due to harm to communities.[AI generated]


AI Disrupts Chinese Film Industry: Job Losses, Fake Content, and Public Backlash

2026-03-21
China

AI systems in China’s film industry have led to significant job losses, economic insecurity, and reputational harm through AI-generated actors, scriptwriting, and fake videos. Public backlash and legal concerns over image and voice likeness violations have prompted regulatory responses, highlighting ongoing harm and ethical challenges.[AI generated]

AI principles:
Fairness; Respect of human rights
Industries:
Arts, entertainment, and recreation; Media, social platforms, and marketing
Affected stakeholders:
Workers
Harm types:
Economic/Property; Reputational; Human or fundamental rights
Severity:
AI incident
Business function:
Other
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems used for deepfake generation (AI face-swapping, voice cloning) that have directly led to harms such as unauthorized use of likeness, misinformation, reputational damage, and potential financial loss due to scams. These harms fall under violations of human rights (portrait and reputation rights) and harm to communities (misleading information). Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm. The legal responses and restrictions on AI-generated content are complementary information but do not change the primary classification.[AI generated]


AI-Driven 3D Modeling Reduces Risks in Cerebral Aneurysm Treatment at San Camillo Hospital

2026-03-21
Italy

San Camillo Forlanini Hospital in Rome integrated an AI-powered 3D simulation system for planning endovascular treatments of cerebral aneurysms. Over 120 cases were treated in a year, with the AI system significantly reducing patient risk and unnecessary device use, improving safety and lowering costs.[AI generated]

Industries:
Healthcare, drugs, and biotechnology
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
No-action autonomy (human support)
AI system task:
Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly mentioned as being used in the development and application of personalized treatment plans for cerebral aneurysms. The system's use has directly contributed to improved patient safety and reduced risk of complications by enabling better preoperative planning and device selection. Since the AI system's use has led to realized health benefits and harm reduction (less risk to patients), this qualifies as an AI Incident under the definition of an event where AI use has directly led to harm reduction and improved health outcomes. Although the article focuses on positive outcomes, the involvement of AI in medical treatment affecting patient health is within the scope of AI Incidents, as it relates to injury or harm to health (here, the AI system reduces such harm).[AI generated]


Restaurant Service Robots Cause Disruption and Safety Concerns in Texas and California

2026-03-21
United States

AI-powered service robots at restaurants in Houston, Texas, and Cupertino, California malfunctioned or were improperly operated, causing disruption, discomfort, and property damage. Incidents included erratic movements, broken dishware, and staff intervention, highlighting risks and challenges in deploying autonomous robots in customer-facing environments.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Consumer services
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property; Psychological
Severity:
AI incident
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The humanoid robot is an AI system performing autonomous or semi-autonomous actions during a restaurant performance. Its erratic and uncontrolled behavior directly caused physical harm risks (broken dishware, potential burns, blunt-force injuries) to people present. The staff had to physically restrain the robot, indicating a malfunction or failure to control the AI system. The incident involves direct harm or risk of harm to people, fulfilling the criteria for an AI Incident. The restaurant's statement about the robot's operating environment does not negate the harm caused. Hence, this is not merely a hazard or complementary information but a realized incident involving AI malfunction and resulting harm.[AI generated]


German Digital Minister Warns of Major Job Losses Due to AI

2026-03-21
Germany

German Digital Minister Karsten Wildberger warns that artificial intelligence will lead to dramatic job losses in Germany, urging employers, unions, and society to prepare for significant labor market changes. He emphasizes the need for collective action to adapt and benefit from AI, rather than resisting its adoption.[AI generated]

AI principles:
Human wellbeing; Fairness
Industries:
Other
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The article does not describe any realized harm or incident caused by AI systems but rather warns about potential future job losses and societal challenges due to AI adoption. It involves AI systems implicitly as the cause of future economic disruption but does not report any direct or indirect harm that has already occurred. Therefore, it fits the definition of an AI Hazard, as it plausibly leads to future harm related to employment and social stability.[AI generated]


Chacao Deploys AI-Powered Robotic Dogs for Public Security Patrols

2026-03-21
Venezuela

The municipality of Chacao, Venezuela, has become the first in Latin America to deploy autonomous AI-powered robotic dogs, "Voltio" and "Turbo," for public security patrols. Equipped with cameras and sensors, these robots monitor public spaces, detect crimes, and communicate incidents to police, raising potential privacy and safety concerns.[AI generated]

AI principles:
Privacy & data governance; Safety
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights
Severity:
AI hazard
Business function:
Compliance and justice
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-powered autonomous robotic dogs used for patrolling and monitoring public spaces, which qualifies as AI systems. However, there is no indication that these systems have caused any injury, rights violations, property damage, or other harms. Since no harm has occurred yet, but the deployment of such AI systems in public security could plausibly lead to incidents in the future, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the introduction of potentially impactful AI systems, not on responses or updates to prior events.[AI generated]


ChatGPT Flags Republican Fundraising Links as Unsafe, Raising Bias Concerns

2026-03-21
United States

OpenAI's ChatGPT erroneously flagged links to the Republican fundraising platform WinRed as potentially unsafe, while similar Democratic links to ActBlue were not flagged. OpenAI attributed this to a technical glitch, but the incident raised concerns about AI bias and its potential impact on political participation in the U.S.[AI generated]

AI principles:
Fairness; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers; General public
Harm types:
Reputational; Public interest
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The AI system (ChatGPT) explicitly caused differential treatment of political fundraising links, flagging Republican links as unsafe while not doing so for Democratic links. This is a malfunction of the AI system's content filtering or safety warning mechanism. The harm is realized as it affects political actors and potentially voters by unfairly flagging one side's fundraising platform, which can disrupt political processes and violate rights to fair political participation. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction leading to political bias and potential election interference.[AI generated]