AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be receiving more media attention, they have actually declined as a share of all AI news (see chart below).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
[Chart: AI incidents and hazards as a percentage of total AI events]

Hyundai Rotem and Anduril Collaborate on AI-Driven Military Command Systems
Hyundai Rotem and U.S. defense tech firm Anduril have signed an agreement in Seoul to jointly develop AI-based command and control systems for military vehicles, drones, and robots. The collaboration aims to integrate Anduril's Lattice AI OS into unmanned platforms, enabling autonomous operations and swarm control and raising future risks from AI-enabled autonomous weapon systems.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (LatticeOS) for autonomous and semi-autonomous military operations, including swarm control and counter-drone activities. Although no harm has yet occurred, deploying AI in lethal or military command systems carries credible risks of injury, rights violations, or disruption, making this a plausible future hazard. It therefore qualifies as an AI Hazard rather than an Incident or Complementary Information, as the article focuses on the system's development and intended operational use without reporting actual harm.[AI generated]

Unauthorized Use of AI-Generated Celebrity Likeness in Livestream Sales Leads to Detention in China
In Datong, China, a netizen named Xing illegally used AI tools to create a digital likeness of KMT chairperson Cheng Liwen for livestream sales without authorization. This misuse of AI for commercial gain infringed on personal rights, disrupted online order, and resulted in administrative detention by local police.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI tool to generate a digital human likeness (an AI digital person) of a real individual without authorization, which was then used in live-streaming commerce. This unauthorized use infringed on the person's rights and caused social harm by disturbing network order and misleading the public. The legal action taken confirms the harm and the violation of law. The AI system's misuse directly led to these harms, fitting the definition of an AI Incident involving violations of rights and harm to communities.[AI generated]

AI-Powered DeepLoad Malware Targets Nigerian Institutions
Nigeria's National Information Technology Development Agency (NITDA) has warned of active AI-powered malware, DeepLoad, targeting government agencies, banks, businesses, and individuals. The malware uses social engineering to infiltrate systems, steal sensitive data, evade antivirus detection, and enable financial fraud and operational disruptions across Nigeria.[AI generated]
Why's our monitor labelling this an incident or hazard?
The DeepLoad malware explicitly incorporates AI-generated code to evade antivirus detection and maintain persistence, qualifying it as an AI system. The malware's active infections have caused direct harms including credential theft, financial fraud, system compromise, and risks to national security, fulfilling the criteria for an AI Incident. The advisory details realized harms and ongoing attacks, not just potential risks, confirming this classification.[AI generated]

IT Contractor Creates Deepfake Videos from Stolen School Staff Photos in Busan
A male IT contractor in Busan, South Korea, illegally accessed the PCs of 194 female school staff members, stealing over 220,000 personal files and using AI deepfake technology to create manipulated sexual videos. The incident, uncovered after a USB drive was found, highlights privacy violations and the misuse of AI for harmful content creation.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake generation) to create harmful synthetic sexual videos without consent, which is a violation of human rights and privacy. The AI system's use directly led to harm through the creation and possession of illicit content. The incident is not merely a potential risk but a realized harm, as the deepfake videos were produced and stored. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

Trump Shares AI-Generated Image Targeting Biden and Family
Donald Trump posted an AI-generated image on Truth Social depicting Joe Biden asleep in the Oval Office and his son Hunter using drugs, alongside other political figures. The manipulated image, widely shared online, raises concerns about AI-driven misinformation and reputational harm in U.S. political discourse.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fabricated image involving public figures, a direct use of AI to create misleading content. This can cause harm to communities by spreading misinformation and damaging reputations. Since the image was actively used by a prominent figure to attack others, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to the direct role of AI in generating harmful content that impacts social and political communities.[AI generated]

AI-Generated Deepfakes Cause Harm and Challenge Law Enforcement in Germany
AI-generated deepfake images and videos have led to reputational harm, digital violence, and violations of personal rights in Germany. High-profile cases, such as manipulated content of public figures, highlight the challenges faced by police and justice officials, who struggle with detection, legal gaps, and identifying perpetrators.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images and videos that have directly harmed individuals by discrediting them and spreading manipulated content, fulfilling the criteria for harm to communities and violations of personal rights. The article explicitly states that such harms are happening and that law enforcement is actively dealing with these AI-generated manipulations. Therefore, this is an AI Incident rather than a hazard or complementary information.[AI generated]

ASU Faculty Protest AI Platform's Unauthorized Use of Teaching Materials
Arizona State University's AI-powered platforms, Atom and ASU Atomic, repurposed faculty teaching materials without their consent to generate personalized online courses. Faculty expressed concerns over intellectual property violations, lack of consultation, and inaccuracies in AI-generated content, potentially harming educational quality and academic reputations.[AI generated]
Why's our monitor labelling this an incident or hazard?
The AI system (ASU Atomic) is explicitly described as using AI to generate educational content by combining and modifying faculty lectures and materials. The faculty's concerns about inaccuracies and misinformation indicate harm to the quality and integrity of education, which can be considered harm to communities and a violation of intellectual property rights. The lack of faculty consultation and compensation further supports the violation of rights. Since the harm is occurring and linked directly to the AI system's use, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

Samsung Galaxy Watch Uses AI to Predict Fainting and Prevent Injuries
Samsung, in collaboration with Chung-Ang University Gwangmyeong Hospital in South Korea, has developed an AI-powered feature for the Galaxy Watch 6 that predicts vasovagal syncope (fainting) episodes. By analyzing biosignals, the AI system can warn users before fainting, potentially reducing injuries from sudden falls.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system (the algorithm analyzing biosignals from the smartwatch) is explicitly involved in predicting a medical condition that can lead to physical harm (injuries from falls). The AI's use directly contributes to harm prevention by providing early alerts that address potential injury risks. Since the system's use is linked to preventing injury and improving health outcomes, and the event reports successful prediction and clinical validation, the monitor records this as an AI Incident involving harm to health.[AI generated]

AI-Powered TUNGA-X Interceptor Drone Unveiled in Turkey
STM introduced the TUNGA-X, an AI-enabled autonomous interceptor drone, at the SAHA 2026 defense expo in Istanbul. Designed to counter low-cost kamikaze drones, TUNGA-X uses AI for real-time target detection and interception. While no harm has occurred, its autonomous lethal capabilities present plausible future risks.[AI generated]
Why's our monitor labelling this an incident or hazard?
The TUNGA-X system is an AI system as it uses AI for autonomous flight, target detection, and engagement. The event concerns the development and deployment of an autonomous weapon system designed to neutralize threats, which inherently carries risks of harm (injury, property damage, or escalation in conflict). Although no harm has yet occurred or been reported, the system's autonomous lethal capabilities mean it could plausibly lead to AI Incidents in the future. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential impacts.[AI generated]

AI Accounting App Issues Offensive Comments, Causing User Distress
The Feiya AI accounting app in China generated culturally insensitive and offensive remarks when a user logged a clothing purchase for their father, likening it to funeral attire. The incident caused emotional harm, leading to user complaints and membership cancellations. The company apologized, citing an AI model flaw, and implemented urgent fixes and stricter content moderation.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system (the AI chatbot in the accounting app) was involved and malfunctioned by generating inappropriate and offensive content, causing harm to the user's emotional well-being. The harm is indirect but real, as the user was upset and offended by the AI's replies. The platform acknowledged the issue, took responsibility, and implemented fixes. This fits the definition of an AI Incident because the AI's malfunction directly led to harm (emotional harm to the user).[AI generated]

AI-Generated Fake Rabbis Spread Antisemitism on TikTok
A coordinated network of at least 49 TikTok accounts used generative AI to create fake rabbis who spread antisemitic stereotypes and conspiracy theories. These AI-generated avatars amassed over 950,000 followers and 10 million likes, amplifying hate and misinformation by impersonating credible Jewish voices and deceiving audiences.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated fake accounts used to spread antisemitic content, which is a clear violation of human rights and causes harm to communities. The AI system's role in generating and disseminating this content is pivotal to the harm occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.[AI generated]

AI-Powered Apple Watch App Trial Aims to Detect Infections in Pediatric Cancer Patients
Researchers at Murdoch Children's Research Institute in Australia are trialing an AI-powered app that analyzes Apple Watch health data to detect early signs of infection in children undergoing cancer treatment. The system aims to enable earlier intervention for immunocompromised patients, though no harm or malfunction has been reported.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the Apple Watch app using AI to analyze health data) being used in a medical context. However, it describes a trial and exploration phase without any realized harm or malfunction. Because the system operates in a high-stakes clinical setting where errors could plausibly lead to harm, but no harm or incident has yet occurred, this is classified as an AI Hazard.[AI generated]

TikTok Algorithm Systematically Favored Republican Content During 2024 US Elections
A study published in Nature found that TikTok's AI-driven recommendation algorithm systematically prioritized pro-Republican content in New York, Texas, and Georgia ahead of the 2024 US presidential election. Researchers using dummy accounts observed significant partisan bias, raising concerns about the algorithm's impact on political information exposure and democratic fairness.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system: TikTok's recommendation algorithm, which uses AI to curate content for users. The study demonstrates that the system's use directly led to a significant harm, systematic political bias in content exposure, which can be considered harm to communities by skewing political information and potentially influencing election outcomes. This undermines the right to access balanced information and can weaken democratic processes. Therefore, this qualifies as an AI Incident: the AI system's use directly led to biased political information dissemination during a critical election period.[AI generated]

Disney's Facial Recognition System Raises Privacy Concerns in California
Disney has implemented AI-powered facial recognition at its California resorts, converting visitors' biometric features into unique digital values for identity verification. While Disney claims data is deleted within 30 days, critics warn of privacy risks, surveillance normalization, and potential misuse of biometric data, sparking debate over human rights and data security.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in a real-world setting (Disney parks) for biometric identification and tracking. Although the article does not report a concrete incident of harm, it outlines credible risks such as privacy erosion, potential misuse of biometric data, algorithmic bias, and security vulnerabilities that could plausibly lead to harms like violations of privacy rights and data breaches. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harms, but no direct harm has yet been documented.[AI generated]

French Cybersecurity Sector Warns of AI-Driven Vulnerability Surge
The Campus Cyber, a major French cybersecurity organization, has issued warnings about Anthropic's new AI model, Mythos, which can rapidly discover critical software vulnerabilities. Experts fear this capability could overwhelm cybersecurity teams and increase systemic risks, urging urgent preparedness to prevent potential large-scale cyberattacks in France and Europe.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (the Mythos AI model) and discusses their use in discovering vulnerabilities that could lead to cyberattacks. No direct harm or incident has yet occurred, but the potential for harm is clearly articulated and plausible, fitting the definition of an AI Hazard. The event is not a realized incident, nor is it merely complementary information since the main focus is on the credible risk posed by AI's capabilities in cybersecurity. Therefore, it is best classified as an AI Hazard.[AI generated]

Actress Sues Over AI-Generated Likeness in 'Avatar' Films
Actress Q'orianka Kilcher sued James Cameron, Disney, and Lightstorm Entertainment, alleging her facial features were used without consent via AI-driven digital modeling to create the character Neytiri in the 'Avatar' franchise. The lawsuit cites violation of California's deepfake pornography statute and unauthorized use of biometric data.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event describes a direct harm caused by the use of AI or digital technology to replicate a person's facial features without permission, leading to a violation of her rights. The AI system's involvement is in the creation of the digital character's face, which is central to the harm claimed. The harm is realized, not just potential, as the character has been used in blockbuster films generating significant profits. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights (right of publicity and identity), which is a breach of applicable law protecting fundamental rights.[AI generated]

AI-Generated Deepfake Video Fuels Misinformation After Tainan Policewoman's Death
Following a fatal accident involving a policewoman in Tainan, AI-generated deepfake videos misrepresented the actions of the suspect, a female student, portraying her as indifferent. These manipulated videos, allegedly originating from China, spread widely online, inciting public outrage and reputational harm, and raising concerns about AI-driven misinformation and social disruption.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in creating a fabricated video that misled the public about a sensitive incident, causing reputational harm and social disruption. The harm is realized: the video attracted millions of views and led to public outrage and online harassment. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violations of rights through misinformation and emotional manipulation.[AI generated]

Yongin City Expands AI-Based Pothole Detection, Reducing Road Hazards and Complaints
Yongin City, South Korea, expanded its AI-based pothole monitoring system to 300 vehicles, integrating real-time road hazard detection with public complaint management. This led to a 19% drop in complaints and a 25% reduction in compensation payouts, demonstrating significant harm prevention and improved road safety through AI deployment.[AI generated]
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to detect potholes and road hazards, which directly contributes to reducing risks on the roads. This use of AI has led to tangible benefits, such as fewer complaints and lower compensation costs, implying a reduction in harm related to road safety. Since the AI system's use has directly led to harm reduction and improved safety, this qualifies as an AI Incident under the framework, with AI having a positive impact by preventing injury and harm to people and property.[AI generated]

AI-Generated Fake Image Damages Brand Reputation in Taiwan
A university student in Taiwan used AI to create a fake image showing a mouse in a clothing brand's package, falsely implying hygiene issues. The brand, ALT, suffered reputational harm and is seeking NT$10 million in damages, pursuing legal action against the student for malicious use of AI-generated content.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a fake image that falsely depicts a harmful scenario (a mouse in a product package) damaging the brand's reputation. The harm is realized as reputational and commercial damage, which fits under harm to property or business interests. The AI-generated image is central to the incident, and the brand is taking legal action due to this harm. Hence, this is an AI Incident as the AI system's use directly led to harm.[AI generated]

Warnings Issued Over Risks of Relying on AI for Financial Advice in the UK
Azets Wealth Management, a UK accountancy firm, warns that relying solely on AI for financial or investment advice could lead to costly mistakes, as AI tools may not reflect recent tax changes or individual circumstances. The firm urges users to seek professional advice alongside AI-generated information.[AI generated]
Why's our monitor labelling this an incident or hazard?
While the article discusses the use of AI in financial advice and cautions against overreliance due to possible inaccuracies, it does not describe any realized harm, malfunction, or misuse of an AI system leading to injury, rights violations, or other harms. The warning is about plausible risks and the current unreliability of AI in this domain, which constitutes a potential risk rather than an actual incident. Therefore, this qualifies as an AI Hazard, reflecting a credible risk of harm if AI is misused for investment advice without proper oversight.[AI generated]