AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to give policymakers, AI practitioners, and other stakeholders worldwide insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be receiving more media attention, they have actually declined as a share of all AI news (see the chart below).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
[Chart: AI incidents and hazards as a percentage of total AI events]
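
The rationales below all apply the same classification rule, distinguishing realized harm from plausible future harm. As a rough editorial sketch only (the function, input signals, and exact labels here are inferred from the rationales on this page, not taken from the OECD's actual implementation), the logic can be summarized as:

# Editorial sketch of the classification rule the rationales below apply.
# Inferred from this page's text; not the OECD's actual implementation.
def classify(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> str:
    if not ai_involved:
        return "Not AI-related"
    if harm_realized:
        # AI use directly or indirectly led to harm that has already occurred
        return "AI Incident"
    if harm_plausible:
        # credible risk of future harm, but none realized yet
        return "AI Hazard"
    # reporting on responses, legal actions, or updates to past events
    return "Complementary Information"

# Example: a defense contract signing with no realized harm
# classify(True, False, True) -> "AI Hazard"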

US Military AI Use Causes Civilian Casualties and Raises Global Security Risks
The US Department of Defense has rapidly expanded AI deployment in military operations, including mine detection in the Strait of Hormuz and combat targeting. An AI-enabled target recognition error reportedly led to over 160 civilian deaths in Iran, highlighting the risks of AI misuse, lack of regulation, and potential violations of international law.[AI generated]

High School Students Use AI Deepfake Technology to Create and Distribute Sexual Images, Nearly 20 Victims in Taichung
Two male high school students in Taichung, Taiwan, used AI deepfake technology to create and distribute non-consensual sexual images of nearly 20 female classmates. The incident caused significant psychological harm and privacy violations. Authorities and schools have launched investigations, and university admission for one perpetrator is under review.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake) to produce and spread harmful manipulated images, resulting in realized harm to individuals (psychological distress and violation of rights). This meets the criteria for an AI Incident because the AI system's use directly led to harm to persons and violations of rights. The involvement of AI is clear and central to the harm caused, and the incident is ongoing with active investigation and response.[AI generated]

AI-Generated Fake Sensitive Images Cause Harm Among Students in Đồng Nai
In Đồng Nai, Vietnam, a male student used AI software to create fake sensitive images of a female classmate, which were then spread on social media following a personal conflict. The incident caused psychological and reputational harm, prompting police intervention and highlighting the risks of AI misuse among students.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology to generate fake images, which were then shared and caused harm to the victim's psychological well-being and reputation. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and harm to the community through misinformation and reputational damage. The involvement of law enforcement and educational responses further confirms the recognition of harm caused by AI misuse.[AI generated]

Pentagon Signs AI Agreements with Tech Giants for Secret Military Operations
The U.S. Department of Defense has signed agreements with seven major tech companies—including Google, Microsoft, Amazon, Nvidia, OpenAI, SpaceX, and Reflection—to use their AI technologies in secret military operations, such as mission planning and weapons targeting. The exclusion of Anthropic, due to ethical disputes, highlights ongoing concerns about AI's role in warfare and potential risks to civilians.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military operations, including weapons targeting and mission planning. While no direct or indirect harm has been reported yet, deploying AI in autonomous or semi-autonomous weapons and surveillance systems carries a plausible risk of harming persons or communities, or of violating human rights, in the future. The article also mentions controversy over the use of AI tools for surveillance and autonomous killing, underscoring the potential for harm. Since no actual harm or incident is described but the potential for harm is credible and significant, the classification is AI Hazard.[AI generated]

AI-Restored Film Screening in South Korea Banned After Severe Visual Distortions
A South Korean distributor used AI to restore the classic film 'A City of Sadness' without proper authorization, resulting in severe visual distortions—such as changing finger counts and facial features—that degraded the film's quality. Public backlash led to the film's screening being banned and raised concerns over unauthorized AI use in cultural works.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was used to restore the film, but the restoration was unauthorized and caused harm by infringing on intellectual property rights and degrading the film's quality, harming the community of viewers. The AI system's use in this unauthorized manner directly led to these harms, so the event qualifies as an AI Incident due to the violation of intellectual property rights and the harm to the community through degraded content.[AI generated]

US Considers Faster Patch Deadlines Due to AI-Driven Cyber Threats
US cybersecurity officials are considering reducing the deadline for fixing critical government IT vulnerabilities from two weeks to three days. This policy shift is driven by concerns that advanced AI tools, such as Anthropic's Mythos and OpenAI's GPT-5.4-Cyber, enable hackers to exploit flaws much faster, increasing cybersecurity risks.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (advanced AI models like Mythos and GPT-5.4-Cyber) being used by hackers to identify and exploit vulnerabilities faster than before. This represents a credible threat that could plausibly lead to harm, such as disruption of critical infrastructure or data breaches. However, the article does not report any actual harm or incident resulting from this AI use, only the potential and the policy response being considered. Therefore, this event fits the definition of an AI Hazard, as it concerns a plausible future harm stemming from AI-enabled hacking capabilities.[AI generated]

AI Systems Enable Early Wildfire Detection and Response in Western US
AI-powered cameras deployed by utilities and fire agencies in Arizona and Colorado detected smoke early, enabling rapid firefighting response and containment of wildfires, such as the Diamond Fire. This use of AI has directly prevented harm to people and property in wildfire-prone Western US states.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems deployed for wildfire smoke detection that have successfully identified fires earlier than traditional methods, leading to quicker firefighting responses and containment of fires before they grow large. This use of AI has directly prevented harm to people, property, and communities, fulfilling the criteria for an AI Incident. Although the article also discusses limitations and challenges, the primary focus is on realized benefits and harm prevention through AI use, not just potential risks or future hazards. Therefore, it is classified as an AI Incident due to the direct role of AI in preventing harm.[AI generated]

AI-Generated Deepfake Diet Ads Cause Health Harm to Kathy Hilton
Kathy Hilton was misled by AI-generated deepfake ads featuring fake celebrity endorsements for a Jell-O diet, leading her to try the diet and suffer negative health effects. The incident highlights the harm caused by deceptive AI-generated content impersonating public figures to promote unsafe products.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos impersonating celebrities to promote a false diet. Kathy Hilton was directly harmed by following the diet based on the AI-generated ad, which caused physical health issues. The AI system's misuse directly led to harm to a person, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing the harm through deceptive content.[AI generated]

US Navy Deploys AI to Accelerate Mine Detection in Strait of Hormuz
The US Navy has contracted Domino Data Lab for nearly $100 million to develop AI systems that rapidly detect underwater mines in the Strait of Hormuz. The AI integrates multi-sensor data, enabling faster and more accurate mine identification, aiming to enhance maritime security and protect global trade routes.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system developed by Domino Data Lab to detect underwater mines, which pose a direct threat to maritime safety and global trade. The AI system's role is to speed up and improve mine detection, which is critical to preventing harm to personnel, infrastructure, and economic activity. No actual harm or incident caused by the AI system is reported; rather, the AI is used to mitigate a known hazard. Because the system operates in a high-stakes safety context where a malfunction or missed detection could plausibly lead to harm, but no harm has yet occurred due to the AI system itself, the event fits the definition of an AI Hazard.[AI generated]

AI-Generated Criminal Memes Cause Secondary Harm to Victims in South Korea
AI systems are being used to create realistic images and videos of notorious criminals, which are widely shared as entertainment online. This trivializes serious crimes and inflicts secondary trauma on victims and their families. The incident has sparked controversy and calls for regulation in South Korea.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic videos and images that directly cause harm by trivializing serious crimes and potentially causing secondary harm to victims and communities. The AI-generated content is actively spreading and is consumed widely, indicating realized harm rather than mere potential. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights (harm categories c and d). The legal and societal challenges further underscore the significance of the harm caused.[AI generated]

AI-Generated Deepfake Videos Used for Celebrity Impersonation and Scams in Vietnam
Vietnamese director Lý Hải and his wife warned about AI-generated fake videos and audio impersonating them to promote unverified products and scams. The sophisticated deepfakes deceive viewers, especially the elderly, leading to financial loss and reputational harm. Authorities have intervened in some cases, highlighting the growing misuse of AI for fraud.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate realistic fake videos (deepfakes) impersonating a real person without consent, which is a direct misuse of AI technology. This misuse has already led to harm by deceiving consumers into potentially fraudulent purchases, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article explicitly states that the AI-generated videos are being used to sell products deceptively, causing real harm, not just a potential risk.[AI generated]

Baykar Unveils AI-Enabled Autonomous Loitering Munition 'Mızrak'
Turkish defense company Baykar has unveiled the Mızrak, an AI-supported autonomous loitering munition with a range exceeding 1,000 km and significant lethal capabilities. Debuting at SAHA 2026, the system's autonomous targeting and operational flexibility raise concerns about future risks of harm from AI-enabled military weapons.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves a military weapon system explicitly described as having AI-supported autonomous capabilities. Although no harm has been reported, the nature of the system as an autonomous lethal munition with advanced AI features implies a credible risk of injury or harm in future use. The development and public unveiling of such a system fit the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to persons or communities in conflict. Because no harm has yet been realized, it is not an AI Incident; nor is the article merely complementary or unrelated information, since it focuses on the AI system's capabilities and potential impact.[AI generated]

Meta Faces Legal Action Over AI-Driven Harms to Children in New Mexico
Meta is considering shutting down its social media services in New Mexico after being found liable for using AI-driven features that harmed children's mental health and facilitated child sexual exploitation. State prosecutors demand platform changes to address addictive features, age verification, and privacy protections for children.[AI generated]
Why's our monitor labelling this an incident or hazard?
Meta's platforms employ AI systems that influence user experience and content exposure, which have been found to harm children's mental health and safety. The legal case and penalties indicate that harm has already occurred due to the AI system's use. The event directly relates to AI system use causing violations of rights and harm to a vulnerable group (children). Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm, and the legal actions are responses to that harm.[AI generated]

Meta's AI Smart Glasses Lead to Worker Harm and Privacy Violations in Kenya
Meta terminated its contract with Kenyan firm Sama after over 1,100 workers, who trained AI systems using footage from Ray-Ban smart glasses, reported exposure to graphic and private content. The layoffs followed whistleblowing about privacy violations and poor labor conditions, raising concerns over AI training practices and worker well-being.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Meta's smart glasses and associated AI training processes). The development and use of these AI systems required human review of sensitive personal data, which led to privacy harms and labor rights violations. The firing of workers after they spoke out suggests potential retaliation, further implicating labor rights issues. Regulatory investigations confirm the recognition of these harms. Therefore, this event meets the definition of an AI Incident due to direct and indirect harms caused by the AI system's use and associated labor practices.[AI generated]

First Prosecution for AI-Generated Child Abuse Images in Germany
Authorities in Baden-Württemberg, Germany, have charged a 59-year-old man from Karlsruhe with creating highly realistic child sexual abuse images using AI programs. This marks the first time German prosecutors have filed charges based on AI-generated child pornography, highlighting the technology's role in producing illegal and harmful content.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create realistic child sexual abuse images, which is a direct violation of laws protecting fundamental rights and causes significant harm. The AI system's use directly led to the creation and distribution of illegal and harmful content. This meets the criteria for an AI Incident due to the direct harm and legal violations involved.[AI generated]

AI-Generated Deepfakes and Online Abuse Drive Women from Public Life
A UN Women report reveals that AI-generated deepfakes and technologically advanced online abuse are increasingly targeting women journalists, activists, and human rights defenders globally. These AI-enabled attacks have led to psychological harm, self-censorship, and withdrawal from public life, undermining women's rights and participation.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI apps used to create non-consensual intimate images and deepfakes, which have caused realized harm including mental health issues (depression, anxiety, PTSD) and social harm (self-censorship, job loss). These harms fall under violations of human rights and harm to communities. The AI systems' malicious use is a direct contributing factor to these harms. The discussion of legal measures further supports the recognition of these harms as incidents rather than potential hazards or complementary information.[AI generated]

Chinese Court Rules AI-Driven Dismissal Unlawful
A Chinese court in Hangzhou ruled that a tech company’s dismissal of an employee, whose job was replaced by AI systems, was unlawful. The court emphasized that automation alone does not justify termination under labor law, affirming the employee’s rights and awarding compensation for wrongful dismissal.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in replacing the employee's tasks, leading to his dismissal and a legal ruling on labor rights violations. The AI system's use directly caused harm to the employee's employment status and rights, fitting the definition of an AI Incident due to violation of labor rights (a breach of obligations under applicable law).[AI generated]

AI Uncovers Long-Standing Banking Vulnerabilities, Prompting Global Warning
AI systems have uncovered long-standing vulnerabilities in banking systems, serving as a global wake-up call, according to Sheetal Chopra of India's NIELIT. While no harm has occurred yet, the discovery highlights the urgent need for vigilance and preparedness as AI rapidly exposes systemic risks worldwide.[AI generated]
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear as it is used to discover vulnerabilities in banking systems. The event stems from the use of AI in identifying these risks. However, the article does not describe any realized harm such as breaches, financial loss, or disruption caused by these vulnerabilities. The focus is on the potential risks and the need for preparedness, which aligns with the definition of an AI Hazard—an event where AI's involvement could plausibly lead to harm but no incident has yet occurred. Hence, this is classified as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]

AI-Driven Cybercrime Causes 389% Surge in Ransomware Victims
Fortinet's 2026 Global Threat Landscape Report reveals a 389% year-over-year increase in ransomware victims, driven by cybercriminals' use of AI-powered tools like WormGPT, FraudGPT, and BruteForceAI. These AI-enabled attacks have caused significant harm across sectors, highlighting the growing threat of agentic AI in global cybercrime.[AI generated]
Why's our monitor labelling this an incident or hazard?
The involvement of AI in enabling cybercrime, particularly ransomware attacks, directly leads to harm by compromising data, causing financial loss, and disrupting operations. Since the report documents realized harm from AI-enabled cybercrime, this qualifies as an AI Incident under the framework, as the AI system's use has directly contributed to significant harm.[AI generated]

Bot Auto Completes First Fully Humanless Commercial Truck Delivery in Texas
Bot Auto, an autonomous trucking startup, successfully completed the first fully humanless commercial truck delivery in Texas, transporting freight 230 miles without a safety driver, remote operator, or in-cab observer. The AI-driven truck operated independently, marking a milestone in commercial autonomous vehicle deployment.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous trucks actively used in commercial deliveries without human drivers or safety operators, indicating AI system use. No harm or incident is reported, so it is not an AI Incident. However, the deployment of such systems without human oversight plausibly could lead to harm in the future, such as accidents or infrastructure disruption, qualifying it as an AI Hazard. The article does not focus on responses, legal actions, or updates to past incidents, so it is not Complementary Information. It is not unrelated as it clearly involves AI systems and their use.[AI generated]