AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be receiving more media attention, they have actually declined as a share of all AI news (see chart below).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
[Chart: AI incidents and hazards as a percentage of total AI events]

Pentagon Signs AI Agreements with Tech Giants for Secret Military Operations
The U.S. Department of Defense has signed agreements with seven major tech companies—Google, Microsoft, Amazon, Nvidia, OpenAI, SpaceX, and Reflection—to use their AI technologies in secret military operations, such as mission planning and weapons targeting. The exclusion of Anthropic, due to ethical disputes, highlights ongoing concerns about AI's role in warfare and potential risks to civilians.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military operations, including weapon targeting and mission planning. While no direct or indirect harm has been reported yet, the deployment of AI in autonomous or semi-autonomous weapons and surveillance systems carries a plausible risk of harming persons or communities, or of violating human rights, in the future. The article also mentions controversy over the use of AI tools for surveillance and autonomous killing, underscoring the potential for harm. Since no actual harm or incident is described, but the potential for harm is credible and significant, the classification is AI Hazard.[AI generated]

AI-Restored Film Screening in South Korea Banned After Severe Visual Distortions
A South Korean distributor used AI to restore the classic film 'A City of Sadness' without proper authorization, resulting in severe visual distortions—such as changing finger counts and facial features—that degraded the film's quality. Public backlash led to the film's screening being banned and raised concerns over unauthorized AI use in cultural works.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was used to restore the film, but the restoration was unauthorized, infringing intellectual property rights and degrading the film's quality to the detriment of its audience. Because the AI system's unauthorized use directly led to these harms, the event qualifies as an AI Incident.[AI generated]
US Considers Faster Patch Deadlines Due to AI-Driven Cyber Threats
US cybersecurity officials are considering reducing the deadline for fixing critical government IT vulnerabilities from two weeks to three days. This policy shift is driven by concerns that advanced AI tools, such as Anthropic's Mythos and OpenAI's GPT-5.4-Cyber, enable hackers to exploit flaws much faster, increasing cybersecurity risks.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (advanced AI models like Mythos and GPT-5.4-Cyber) being used by hackers to identify and exploit vulnerabilities faster than before. This represents a credible threat that could plausibly lead to harm, such as disruption of critical infrastructure or data breaches. However, the article does not report any actual harm or incident resulting from this AI use, only the potential and the policy response being considered. Therefore, this event fits the definition of an AI Hazard, as it concerns a plausible future harm stemming from AI-enabled hacking capabilities.[AI generated]
AI Systems Enable Early Wildfire Detection and Response in Western US
AI-powered cameras deployed by utilities and fire agencies in Arizona and Colorado detected smoke early, enabling rapid firefighting response and containment of wildfires, such as the Diamond Fire. This use of AI has directly prevented harm to people and property in wildfire-prone Western US states.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems deployed for wildfire smoke detection that have successfully identified fires earlier than traditional methods, leading to quicker firefighting responses and containment of fires before they grow large. This use of AI has directly prevented harm to people, property, and communities, fulfilling the criteria for an AI Incident. Although the article also discusses limitations and challenges, the primary focus is on realized benefits and harm prevention through AI use, not just potential risks or future hazards. Therefore, it is classified as an AI Incident due to the direct role of AI in preventing harm.[AI generated]
AI-Generated Deepfake Diet Ads Cause Health Harm to Kathy Hilton
Kathy Hilton was misled by AI-generated deepfake ads featuring fake celebrity endorsements for a Jell-O diet, leading her to try the diet and suffer negative health effects. The incident highlights the harm caused by deceptive AI-generated content impersonating public figures to promote unsafe products.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos impersonating celebrities to promote a false diet. Kathy Hilton was directly harmed by following the diet based on the AI-generated ad, which caused physical health issues. The AI system's misuse directly led to harm to a person, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing the harm through deceptive content.[AI generated]

US Navy Deploys AI to Accelerate Mine Detection in Strait of Hormuz
The US Navy has contracted Domino Data Lab for nearly $100 million to develop AI systems that rapidly detect underwater mines in the Strait of Hormuz. The AI integrates multi-sensor data, enabling faster and more accurate mine identification, aiming to enhance maritime security and protect global trade routes.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system developed by Domino Data Lab to detect underwater mines, which pose a direct threat to maritime safety and global trade. The AI system's role is to speed up and improve mine detection, which is critical to preventing harm to personnel, infrastructure, and economic activity. No actual harm or incident caused by the AI system is reported; rather, the AI is used to mitigate a known hazard. The event thus describes a high-stakes context in which the AI system's performance is central to managing a significant risk: no harm has yet occurred due to the system itself, but shortcomings in its use could plausibly contribute to future harm. This fits the definition of an AI Hazard.[AI generated]

Baykar Unveils AI-Enabled Autonomous Loitering Munition 'Mızrak'
Turkish defense company Baykar has unveiled the Mızrak, an AI-supported autonomous loitering munition with a range exceeding 1,000 km and significant lethal capabilities. Debuting at SAHA 2026, the system's autonomous targeting and operational flexibility raise concerns about future risks of harm from AI-enabled military weapons.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves a military weapon system explicitly described as having AI-supported autonomous capabilities. Although no incident of harm is reported, the nature of the system as an autonomous lethal munition with advanced AI features implies a credible risk of causing injury or harm in future use. The development and public unveiling of such a system fit the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to persons or communities in conflict. There is no indication of realized harm yet, so it is not an AI Incident, and the article is not merely complementary or unrelated information, as it focuses on the AI system's capabilities and potential impact.[AI generated]
Meta Faces Legal Action Over AI-Driven Harms to Children in New Mexico
Meta is considering shutting down its social media services in New Mexico after being found liable for using AI-driven features that harmed children's mental health and facilitated child sexual exploitation. State prosecutors demand platform changes to address addictive features, age verification, and privacy protections for children.[AI generated]
Why's our monitor labelling this an incident or hazard?
Meta's platforms employ AI systems that influence user experience and content exposure, which have been found to harm children's mental health and safety. The legal case and penalties indicate that harm has already occurred due to the AI system's use. The event directly relates to AI system use causing violations of rights and harm to a vulnerable group (children). Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm, and the legal actions are responses to that harm.[AI generated]

Meta's AI Smart Glasses Lead to Worker Harm and Privacy Violations in Kenya
Meta terminated its contract with Kenyan firm Sama after over 1,100 workers, who trained AI systems using footage from Ray-Ban smart glasses, reported exposure to graphic and private content. The layoffs followed whistleblowing about privacy violations and poor labor conditions, raising concerns over AI training practices and worker well-being.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Meta's smart glasses and associated AI training processes). The development and use of these AI systems required human review of sensitive personal data, which led to privacy harms and labor rights violations. The firing of workers after they spoke out suggests potential retaliation, further implicating labor rights issues. Regulatory investigations confirm the recognition of these harms. Therefore, this event meets the definition of an AI Incident due to direct and indirect harms caused by the AI system's use and associated labor practices.[AI generated]

First Prosecution for AI-Generated Child Abuse Images in Germany
Authorities in Baden-Württemberg, Germany, have charged a 59-year-old man from Karlsruhe with creating highly realistic child sexual abuse images using AI programs. This marks the first time German prosecutors have filed charges based on AI-generated child pornography, highlighting the technology's role in producing illegal and harmful content.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create realistic child sexual abuse images, which is a direct violation of laws protecting fundamental rights and causes significant harm. The AI system's use directly led to the creation and distribution of illegal and harmful content. This meets the criteria for an AI Incident due to the direct harm and legal violations involved.[AI generated]

AI-Generated Deepfakes and Online Abuse Drive Women from Public Life
A UN Women report reveals that AI-generated deepfakes and technologically advanced online abuse are increasingly targeting women journalists, activists, and human rights defenders globally. These AI-enabled attacks have led to psychological harm, self-censorship, and withdrawal from public life, undermining women's rights and participation.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI apps used to create non-consensual intimate images and deepfakes, which have caused realized harm including mental health issues (depression, anxiety, PTSD) and social harm (self-censorship, job loss). These harms fall under violations of human rights and harm to communities. The AI systems' malicious use is a direct contributing factor to these harms. The discussion of legal measures further supports the recognition of these harms as incidents rather than potential hazards or complementary information.[AI generated]

Chinese Court Rules AI-Driven Dismissal Unlawful
A Chinese court in Hangzhou ruled that a tech company’s dismissal of an employee, whose job was replaced by AI systems, was unlawful. The court emphasized that automation alone does not justify termination under labor law, affirming the employee’s rights and awarding compensation for wrongful dismissal.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in replacing the employee's tasks, leading to his dismissal and a legal ruling on labor rights violations. The AI system's use directly caused harm to the employee's employment status and rights, fitting the definition of an AI Incident due to violation of labor rights (a breach of obligations under applicable law).[AI generated]

AI Uncovers Long-Standing Banking Vulnerabilities, Prompting Global Warning
AI systems have uncovered long-standing vulnerabilities in banking systems, serving as a global wake-up call, according to Sheetal Chopra of India's NIELIT. While no harm has occurred yet, the discovery highlights the urgent need for vigilance and preparedness as AI rapidly exposes systemic risks worldwide.[AI generated]
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear: it was used to discover vulnerabilities in banking systems, and the event stems from that use. However, the article does not describe any realized harm, such as breaches, financial loss, or disruption caused by these vulnerabilities. The focus is on potential risks and the need for preparedness, which aligns with the definition of an AI Hazard—an event where AI's involvement could plausibly lead to harm but no incident has yet occurred. Hence, this is classified as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]

AI-Driven Cybercrime Causes 389% Surge in Ransomware Victims
Fortinet's 2026 Global Threat Landscape Report reveals a 389% year-over-year increase in ransomware victims, driven by cybercriminals' use of AI-powered tools like WormGPT, FraudGPT, and BruteForceAI. These AI-enabled attacks have caused significant harm across sectors, highlighting the growing threat of agentic AI in global cybercrime.[AI generated]
Why's our monitor labelling this an incident or hazard?
The involvement of AI in enabling cybercrime, particularly ransomware attacks, directly leads to harm by compromising data, causing financial loss, and disrupting operations. Since the report documents realized harm from AI-enabled cybercrime, this qualifies as an AI Incident under the framework, as the AI system's use has directly contributed to significant harm.[AI generated]

Bot Auto Completes First Fully Humanless Commercial Truck Delivery in Texas
Bot Auto, an autonomous trucking startup, successfully completed the first fully humanless commercial truck delivery in Texas, transporting freight 230 miles without a safety driver, remote operator, or in-cab observer. The AI-driven truck operated independently, marking a milestone in commercial autonomous vehicle deployment.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous trucks actively used for commercial deliveries without human drivers or safety operators. No harm or incident is reported, so it is not an AI Incident. However, deploying such systems without human oversight could plausibly lead to future harm, such as accidents or infrastructure disruption, qualifying it as an AI Hazard. The article does not focus on responses, legal actions, or updates to past incidents, so it is not Complementary Information, and it is not unrelated, as it clearly involves AI systems and their use.[AI generated]

AI Chatbot Interaction Leads to Discovery of Child Sexual Abuse in Paraná
In São José dos Pinhais, Paraná, Brazil, a 12-year-old girl used an AI chatbot to discuss her sexual abuse, prompting her family to discover the crime. The AI's response and the chat history led to police involvement and the arrest of the suspect, highlighting the AI system's pivotal role in uncovering the abuse.[AI generated]
Why's our monitor labelling this an incident or hazard?
The AI system was explicitly mentioned as being used by the child to ask a question that led to the family discovering the abuse. The harm (sexual abuse) was ongoing and is a clear violation of fundamental human rights. The AI's response helped the child understand that the abuse was not her fault, which was crucial in revealing the abuse. Hence, the AI system's use directly contributed to addressing a serious harm, meeting the criteria for an AI Incident.[AI generated]

XTEND Develops AI-Powered Autonomous Defense Systems for Middle East Client
XTEND, an AI robotics company, secured a $2.2 million contract to develop advanced autonomous aerial defense systems for a Middle Eastern defense customer. These AI-powered systems, designed to counter airborne threats, raise concerns about potential future harm due to their autonomous and weaponized nature, though no incidents have yet occurred.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (XTEND's AI operating system for drones and robots) deployed in sensitive environments, including military conflict zones and disaster areas. Although the system is used in applications that could cause harm (e.g., loitering munitions), the article describes no actual injury, rights violation, or disruption caused by it; it mainly covers the technology's capabilities, partnerships, contracts, and business developments, so the event does not qualify as an AI Incident. Given the system's autonomous capabilities and deployment in military contexts, however, there is a plausible risk of future harm stemming from its use, which fits the definition of an AI Hazard. The article does not focus on responses, updates, or governance measures, so it is not Complementary Information, and it is not unrelated, because it clearly involves AI systems with potential for harm.[AI generated]
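The taxonomy these rationales apply (AI Incident, AI Hazard, Complementary Information, or unrelated) amounts to a small decision rule. The Python sketch below is purely illustrative: the names (Event, classify_event) and fields are hypothetical, not AIM's actual classification pipeline, but it captures the sequence of questions the rationales walk through.

    # Illustrative sketch only: all names and fields here are hypothetical,
    # not AIM's actual implementation.
    from dataclasses import dataclass

    @dataclass
    class Event:
        involves_ai_system: bool  # an AI system figures directly in the event
        response_focused: bool    # article only covers responses, legal follow-ups, or updates
        harm_realized: bool       # injury, rights violation, or disruption has occurred
        harm_plausible: bool      # credible risk that such harm occurs in the future

    def classify_event(e: Event) -> str:
        """Apply the incident/hazard decision rule described in the rationales."""
        if not e.involves_ai_system:
            return "Unrelated"
        if e.response_focused:
            return "Complementary Information"
        if e.harm_realized:
            return "AI Incident"  # realized harm tied to an AI system's use
        if e.harm_plausible:
            return "AI Hazard"  # plausible future harm, none yet realized
        return "Unrelated"

Under this sketch, the XTEND story above would map to Event(involves_ai_system=True, response_focused=False, harm_realized=False, harm_plausible=True) and come back as an AI Hazard, matching the rationale.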

EU Accuses Meta of Failing to Prevent Underage Access to Facebook and Instagram
The European Commission found that Meta's AI-driven age verification systems on Facebook and Instagram are ineffective, allowing 10–12% of children under 13 to access the platforms. This violates the EU Digital Services Act and exposes minors to potential harm, highlighting failures in Meta's AI-based protections for children.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of age verification and content moderation mechanisms on Meta's platforms. These systems have failed to reliably prevent underage users from accessing the services, leading to exposure to potentially harmful content. This constitutes a violation of legal obligations under the Digital Services Act and results in harm to minors' health and well-being, fulfilling the criteria for an AI Incident. The harm is indirect but real, as the AI system's malfunction or inadequacy is a contributing factor to the exposure of minors to risks. The event is not merely a potential hazard or complementary information but a current issue with regulatory consequences and recognized harm.[AI generated]

White House Opposes Anthropic's Expansion of Mythos AI Access Due to Cybersecurity Risks
Anthropic's Mythos AI model, capable of autonomously finding software vulnerabilities and enabling cyberattacks, faces opposition from the White House over plans to expand access. US officials cite concerns about misuse by hackers or foreign governments and potential impact on government operations, prompting restricted release to select organizations.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential future risks posed by the AI system Claude Mythos, emphasizing the severe consequences if such technology falls into the wrong hands. Since no actual harm or incident has occurred yet, but there is a plausible risk of significant harm in the future, this qualifies as an AI Hazard. The discussion is about the plausible future impact rather than a realized incident or a response to one.[AI generated]

AI-Driven Bot Attacks Surge 12.5x, Dominate Internet Traffic in 2025
According to Thales' 2026 Bad Bot Report, AI-driven bot attacks surged 12.5 times in 2025, with bots now making up over half of all internet traffic. These AI bots increasingly target APIs and identity systems, causing widespread security breaches, data theft, and account takeovers across industries globally.[AI generated]
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI-driven bots causing a surge in malicious internet traffic and attacks, including account takeovers in financial services, which constitute harm to property and communities. The AI systems' use in these attacks directly leads to realized harm, fitting the definition of an AI Incident. The involvement of AI in the bots' sophisticated behavior and the resulting malicious outcomes confirms this classification.[AI generated]