
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have actually declined as a share of total AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 14,661 incidents & hazards

AI Device Improves Detection of Life-Threatening Heart Condition in Black Patients

2026-05-03
United States

Researchers demonstrated that an AI-powered device, worn on the finger, accurately detects moderate to severe aortic valve stenosis—a life-threatening heart condition—especially in Black patients who historically face lower diagnosis rates. The AI system analyzes blood flow signals, improving early detection and reducing health disparities in the United States.[AI generated]

AI principles:
Fairness; Human wellbeing
Industries:
Healthcare, drugs, and biotechnology
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved as it uses an algorithm analyzing physiological signals to detect a serious heart condition. The AI system's use has directly led to improved detection rates, which can prevent harm or death from untreated aortic valve stenosis, especially in a vulnerable population. This constitutes an AI Incident because the AI system's deployment has directly led to a positive health impact, reducing harm and addressing health inequities, which falls under injury or harm to health of groups of people as per the definitions.[AI generated]


Indian Banks Boost Cybersecurity Amid Threats from Anthropic's Mythos AI

2026-05-03
India

Indian public sector banks are increasing IT spending and cybersecurity measures in response to concerns over Anthropic's Claude Mythos AI, which has advanced capabilities to detect and exploit system vulnerabilities. Authorities and bank leaders warn of potential risks to financial data and infrastructure, prompting proactive defense strategies.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Financial and insurance services; Digital security
Affected stakeholders:
Government; Consumers
Harm types:
Human or fundamental rights; Economic/Property; Public interest
Severity:
AI hazard
AI system task:
Event/anomaly detection; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the AI system Claude Mythos and its advanced hacking capabilities, which have prompted banks and government bodies to increase cybersecurity spending and form panels to assess and mitigate risks. No realized harm or incident is described; rather, the article highlights the potential threat and the compressed timeline for weaponization of vulnerabilities due to AI. This fits the definition of an AI Hazard, where the AI system's development and use could plausibly lead to harm but no direct or indirect harm has yet occurred.[AI generated]


AI-Generated Disinformation Becomes Routine, Undermining Public Trust

2026-05-03
Spain

European media watchdogs report a sharp rise in AI-generated disinformation, including deepfakes and manipulated content, now integrated into daily news flows. These AI tools are increasingly used to spread false narratives and discredit authentic evidence, causing widespread confusion and harm to public perception and trust.[AI generated]

AI principles:
Democracy & human autonomy; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Public interest; Psychological; Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems generating manipulated audiovisual content used for disinformation, which is actively occurring and verified by fact-checking organizations. The harm is to communities through misinformation and manipulation of public perception, fulfilling the criteria for harm to communities under AI Incident definition. The AI system's use in creating and spreading false content directly leads to this harm. Hence, this is not a potential hazard or complementary information but a clear AI Incident.[AI generated]


US Military AI Use Causes Civilian Casualties and Raises Global Security Risks

2026-05-02
United States

The US Department of Defense has rapidly expanded AI deployment in military operations, including mine detection in the Strait of Hormuz and combat targeting. An AI-enabled target recognition error reportedly led to over 160 civilian deaths in Iran, highlighting the risks of AI misuse, lack of regulation, and potential violations of international law.[AI generated]

AI principles:
Respect of human rights; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI systems (machine learning software for underwater mine detection) in a defense application. While no actual harm or incident is reported, the deployment of AI for mine detection in a strategic and potentially hazardous environment implies a plausible risk of harm if the system malfunctions or is misused. However, since the article only reports the contract signing and intended use without any realized harm or malfunction, it constitutes an AI Hazard rather than an AI Incident. The event highlights a credible future risk related to AI use in military mine detection operations.[AI generated]


High School Students Use AI Deepfake Technology to Create and Distribute Sexual Images, Nearly 20 Victims in Taichung

2026-05-02
Chinese Taipei

Two male high school students in Taichung, Taiwan, used AI deepfake technology to create and distribute non-consensual sexual images of nearly 20 female classmates. The incident caused significant psychological harm and privacy violations. Authorities and schools have launched investigations, and university admission for one perpetrator is under review.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI technology (deepfake) to produce and spread harmful manipulated images, resulting in realized harm to individuals (psychological distress and violation of rights). This meets the criteria for an AI Incident because the AI system's use directly led to harm to persons and violations of rights. The involvement of AI is clear and central to the harm caused, and the incident is ongoing with active investigation and response.[AI generated]


AI-Generated Fake Sensitive Images Cause Harm Among Students in Đồng Nai

2026-05-02
Viet Nam

In Đồng Nai, Vietnam, a male student used AI software to create fake sensitive images of a female classmate, which were then spread on social media due to personal conflict. The incident caused psychological and reputational harm, prompting police intervention and highlighting risks of AI misuse among students.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women
Harm types:
Psychological; Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI technology to generate fake images, which were then shared and caused harm to the victim's psychological well-being and reputation. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and harm to the community through misinformation and reputational damage. The involvement of law enforcement and educational responses further confirms the recognition of harm caused by AI misuse.[AI generated]


China Removes 98,000 Accounts for Unlabeled AI-Generated Content

2026-05-02
China

Chinese authorities removed over 98,000 social media accounts for publishing AI-generated videos and other content without proper labeling, misleading the public and blurring the line between reality and fiction. The lack of clear AI-generated content tags contributed to misinformation and harmed public understanding, prompting regulatory intervention.[AI generated]

AI principles:
Transparency & explainability; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Public interest
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems in the form of AI-generated videos that were published without proper AI-generated content labels, misleading users about the nature of the content. This misuse of AI-generated content has led to harm by misleading the public and damaging the network ecology, which qualifies as harm to communities. The regulatory actions and platform requirements are responses to this harm. Since the harm has already occurred through misleading content dissemination, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]


AI-Generated Digital Exes Spark Privacy and Emotional Concerns in China

2026-05-02
China

A growing trend in China sees young people using AI to create digital replicas of ex-partners by uploading personal data such as chat logs and photos. While these virtual exes offer emotional comfort, the practice raises significant concerns about privacy, emotional dependency, and ethical boundaries.[AI generated]

AI principles:
Privacy & data governance; Human wellbeing
Industries:
Consumer services
Affected stakeholders:
Consumers; Other
Harm types:
Human or fundamental rights; Psychological
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The AI system is clearly involved as it generates digital replicas based on personal data. The use of AI here is for emotional coping, which is novel but does not currently report any realized harm such as privacy breaches, emotional injury recognized legally, or other harms. The concerns raised are about potential privacy and emotional dependency issues, which represent plausible future risks rather than confirmed harms. Therefore, this event fits best as an AI Hazard, since the AI use could plausibly lead to harms related to privacy and emotional well-being, but no direct or indirect harm has been reported yet.[AI generated]


AI Leaders Warn of Potential Mass Job Displacement Due to Automation

2026-05-02
United States

Anthropic CEO Dario Amodei and OpenAI CEO Sam Altman have publicly debated AI's impact on employment, with Amodei warning of possible mass job losses and Altman downplaying fears, despite some companies attributing layoffs to AI automation. No significant current unemployment effects are reported, but future risks are highlighted.[AI generated]

AI principles:
Human wellbeing; Respect of human rights
Industries:
General or personal use
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The article centers on the CEO's warnings about AI potentially replacing millions of jobs, which is a credible future risk but not a realized harm. It includes research findings showing no current significant unemployment effects and industry pushback against the doomsday narrative. The AI system (Claude) is involved as a tool that could displace labor, but no direct or indirect harm has yet occurred according to the article. Hence, this qualifies as an AI Hazard, reflecting a plausible future harm scenario rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI and its societal impact.[AI generated]


Pentagon Signs AI Agreements with Tech Giants for Secret Military Operations

2026-05-01
United States

The U.S. Department of Defense has signed agreements with seven major tech companies—including Google, Microsoft, Amazon, Nvidia, OpenAI, SpaceX, and Reflection—to use their AI technologies in secret military operations, such as mission planning and weapons targeting. The exclusion of Anthropic, due to ethical disputes, highlights ongoing concerns about AI's role in warfare and potential risks to civilians.[AI generated]

AI principles:
Respect of human rights; Transparency & explainability
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury); Human or fundamental rights
Severity:
AI hazard
AI system task:
Reasoning with knowledge structures/planning; Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems in military operations, including weapon targeting and mission planning, which clearly involves AI system use. While no direct or indirect harm has been reported yet, the deployment of AI in autonomous or semi-autonomous weapons and surveillance systems carries a plausible risk of causing harm to persons, communities, or violating human rights in the future. The article also mentions controversy over the use of AI tools for surveillance and autonomous killing, underscoring the potential for harm. Since no actual harm or incident is described, but the potential for harm is credible and significant, the classification is AI Hazard.[AI generated]


AI-Restored Film Screening in South Korea Banned After Severe Visual Distortions

2026-05-01
Korea

A South Korean distributor used AI to restore the classic film 'A City of Sadness' without proper authorization, resulting in severe visual distortions—such as changing finger counts and facial features—that degraded the film's quality. Public backlash led to the film's screening being banned and raised concerns over unauthorized AI use in cultural works.[AI generated]

AI principles:
Accountability; Robustness & digital security
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property; Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used to restore the film, but the restoration was unauthorized and caused harm by infringing on intellectual property rights and degrading the film's quality, which harms the community of viewers. The AI system's use in this unauthorized manner directly led to these harms. Therefore, this qualifies as an AI Incident due to violation of intellectual property rights and harm to the community through degraded content.[AI generated]


US Considers Faster Patch Deadlines Due to AI-Driven Cyber Threats

2026-05-01
United States

US cybersecurity officials are considering reducing the deadline for fixing critical government IT vulnerabilities from two weeks to three days. This policy shift is driven by concerns that advanced AI tools, such as Anthropic's Mythos and OpenAI's GPT-5.4-Cyber, enable hackers to exploit flaws much faster, increasing cybersecurity risks.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Government, security, and defence; Digital security
Affected stakeholders:
Government
Harm types:
Public interest
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (advanced AI models like Mythos and GPT-5.4-Cyber) being used by hackers to identify and exploit vulnerabilities faster than before. This represents a credible threat that could plausibly lead to harm, such as disruption of critical infrastructure or data breaches. However, the article does not report any actual harm or incident resulting from this AI use, only the potential and the policy response being considered. Therefore, this event fits the definition of an AI Hazard, as it concerns a plausible future harm stemming from AI-enabled hacking capabilities.[AI generated]


AI Systems Enable Early Wildfire Detection and Response in Western US

2026-05-01
United States

AI-powered cameras deployed by utilities and fire agencies in Arizona and Colorado detected smoke early, enabling rapid firefighting response and containment of wildfires, such as the Diamond Fire. This use of AI has directly prevented harm to people and property in wildfire-prone Western US states.[AI generated]

Industries:
Energy, raw materials, and utilities; Environmental services
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems deployed for wildfire smoke detection that have successfully identified fires earlier than traditional methods, leading to quicker firefighting responses and containment of fires before they grow large. This use of AI has directly prevented harm to people, property, and communities, fulfilling the criteria for an AI Incident. Although the article also discusses limitations and challenges, the primary focus is on realized benefits and harm prevention through AI use, not just potential risks or future hazards. Therefore, it is classified as an AI Incident due to the direct role of AI in preventing harm.[AI generated]


AI-Generated Deepfake Diet Ads Cause Health Harm to Kathy Hilton

2026-05-01
United States

Kathy Hilton was misled by AI-generated deepfake ads featuring fake celebrity endorsements for a Jell-O diet, leading her to try the diet and suffer negative health effects. The incident highlights the harm caused by deceptive AI-generated content impersonating public figures to promote unsafe products.[AI generated]

AI principles:
Transparency & explainability; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Physical (injury)
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems generating deepfake videos impersonating celebrities to promote a false diet. Kathy Hilton was directly harmed by following the diet based on the AI-generated ad, which caused physical health issues. The AI system's misuse directly led to harm to a person, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing the harm through deceptive content.[AI generated]


US Navy Deploys AI to Accelerate Mine Detection in Strait of Hormuz

2026-05-01
United States

The US Navy has contracted Domino Data Lab for nearly $100 million to develop AI systems that rapidly detect underwater mines in the Strait of Hormuz. The AI integrates multi-sensor data, enabling faster and more accurate mine identification, aiming to enhance maritime security and protect global trade routes.[AI generated]

Industries:
Government, security, and defence
Severity:
AI hazard
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system developed by Domino Data Lab to detect underwater mines, which are a direct threat to maritime safety and global trade. The AI system's role is to speed up and improve mine detection, which is critical to preventing harm to personnel, infrastructure, and economic activity. No actual harm or incident caused by the AI system is reported; rather, the AI is used to mitigate a known hazard. The event thus describes a plausible future harm scenario in which the AI system's use is central to managing a significant risk. This fits the definition of an AI Hazard: the AI system's deployment in a safety-critical mine detection role could plausibly lead to harm if it fails or errs, but no harm attributable to the AI system has yet occurred.[AI generated]


AI-Generated Criminal Memes Cause Secondary Harm to Victims in South Korea

2026-05-01
Korea

AI systems are being used to create realistic images and videos of notorious criminals, which are widely shared as entertainment online. This trivializes serious crimes and inflicts secondary trauma on victims and their families. The incident has sparked controversy and calls for regulation in South Korea.[AI generated]

AI principles:
Human wellbeing; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Other
Harm types:
Psychological
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating synthetic videos and images that directly cause harm by trivializing serious crimes and potentially causing secondary harm to victims and communities. The AI-generated content is actively spreading and consumed widely, indicating realized harm rather than just potential. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights (harm category c and d). The legal and societal challenges further underscore the significance of the harm caused.[AI generated]


AI-Generated Deepfake Videos Used for Celebrity Impersonation and Scams in Vietnam

2026-05-01
Viet Nam

Vietnamese director Lý Hải and his wife warned about AI-generated fake videos and audio impersonating them to promote unverified products and scams. The sophisticated deepfakes deceive viewers, especially the elderly, leading to financial loss and reputational harm. Authorities have intervened in some cases, highlighting the growing misuse of AI for fraud.[AI generated]

AI principles:
Safety; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems that generate realistic fake videos (deepfakes) impersonating a real person without consent, which is a direct misuse of AI technology. This misuse has already led to harm by deceiving consumers into potentially fraudulent purchases, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article explicitly states that the AI-generated videos are being used to sell products deceptively, causing real harm, not just a potential risk.[AI generated]


Disney Replaces Marvel Artists with AI, Leading to Mass Layoffs and Outcry

2026-05-01
United States

Disney laid off about 8% of its workforce, including nearly the entire Marvel visual development team, replacing them with AI systems trained on the artists' previous works. This move sparked public criticism from actress Evangeline Lilly, who accused Disney of exploiting artists' creations and violating labor and intellectual property rights.[AI generated]

AI principles:
Respect of human rights; Accountability
Industries:
Arts, entertainment, and recreation
Affected stakeholders:
Workers; Business
Harm types:
Economic/Property; Human or fundamental rights; Reputational
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly states that Disney is replacing human artists with AI systems that generate creative outputs based on extensive human-created data. This replacement has directly led to layoffs affecting the artists' employment, which is a harm to people. The AI system's role in producing creative content that substitutes human labor is pivotal to the harm. Hence, the event meets the criteria for an AI Incident due to realized harm caused by the use of AI in the creative workforce.[AI generated]


Baykar Unveils AI-Enabled Autonomous Loitering Munition 'Mızrak'

2026-04-30
Türkiye

Turkish defense company Baykar has unveiled the Mızrak, an AI-supported autonomous loitering munition with a range exceeding 1,000 km and significant lethal capabilities. Debuting at SAHA 2026, the system's autonomous targeting and operational flexibility raise concerns about future risks of harm from AI-enabled military weapons.[AI generated]

AI principles:
Respect of human rights; Democracy & human autonomy
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Human or fundamental rights
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as having AI-supported autonomous capabilities in a military weapon system. Although no incident of harm is reported, the nature of the system as an autonomous lethal munition with advanced AI features implies a credible risk of causing injury or harm in future use. The development and public unveiling of such a system fit the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to persons or communities in conflict. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's capabilities and potential impact.[AI generated]


Meta Faces Legal Action Over AI-Driven Harms to Children in New Mexico

2026-04-30
United States

Meta is considering shutting down its social media services in New Mexico after being found liable for using AI-driven features that harmed children's mental health and facilitated child sexual exploitation. State prosecutors demand platform changes to address addictive features, age verification, and privacy protections for children.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological; Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

Meta's platforms employ AI systems that influence user experience and content exposure, which have been found to harm children's mental health and safety. The legal case and penalties indicate that harm has already occurred due to the AI system's use. The event directly relates to AI system use causing violations of rights and harm to a vulnerable group (children). Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm, and the legal actions are responses to that harm.[AI generated]