AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. AI incidents may seem to be receiving more media attention lately, but they have actually declined as a share of all AI news (see the chart below).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
[Chart: AI incidents and hazards as a percentage of total AI events]
US Lawmakers Propose Moratorium on AI Data Center Expansion
US lawmakers Bernie Sanders and Alexandria Ocasio-Cortez have introduced a bill to pause new AI data center construction nationwide until federal safeguards are established. The legislation aims to address potential environmental, economic, and societal harms from unchecked AI infrastructure growth, reflecting growing concerns about AI's broader impacts.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article discusses a proposed bill aiming to pause AI data center development due to concerns about environmental harm and societal impacts. This is a precautionary measure reflecting plausible future harm from AI systems' infrastructure growth, but no actual harm or incident has occurred. Therefore, it qualifies as an AI Hazard because it highlights a credible risk that could plausibly lead to harm if unchecked, rather than an AI Incident or Complementary Information. It is not unrelated because it directly concerns AI systems and their societal impact.[AI generated]
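This rationale, like the others in the monitor, applies a recurring decision rule: an event involving an AI system where harm has materialized is an AI Incident; one where harm is plausible but not yet realized is an AI Hazard; updates or responses to past events are Complementary Information; everything else is unrelated. A minimal sketch of that rule, with hypothetical flag names standing in for the reviewer's judgment calls (an illustration, not the monitor's actual classifier):

```python
from enum import Enum

class Label(Enum):
    AI_INCIDENT = "AI Incident"                   # harm has materialized
    AI_HAZARD = "AI Hazard"                       # harm is plausible but not yet realized
    COMPLEMENTARY = "Complementary Information"   # update or response to a past event
    UNRELATED = "Unrelated"                       # no AI system involved

def classify(involves_ai_system: bool,
             harm_realized: bool,
             harm_plausible: bool,
             followup_to_past_event: bool) -> Label:
    """Illustrative decision rule; each flag is a judgment a reviewer
    (human or model) makes from the article text."""
    if not involves_ai_system:
        return Label.UNRELATED
    if harm_realized:                 # e.g. a court-recognized rights violation
        return Label.AI_INCIDENT
    if followup_to_past_event:        # reporting on harm, not causing new harm
        return Label.COMPLEMENTARY
    if harm_plausible:                # credible risk if left unchecked
        return Label.AI_HAZARD
    return Label.UNRELATED
```

In practice the flags are themselves contestable judgments, which is why each rationale spends most of its words justifying them before stating the final label.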

LIG Nex1 and Palantir Sign MOU for AI-Enabled Defense Systems Development
LIG Nex1 and Palantir Technologies signed a memorandum of understanding to jointly develop integrated air defense and unmanned systems using AI software and hardware. The collaboration aims to enhance defense capabilities in South Korea, UAE, and other export markets, raising potential future risks associated with military AI deployment.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI systems integrated into military air defense solutions, which inherently carry risks of harm such as injury, disruption, or escalation of conflict. No direct or indirect harm has been reported so far, so the event does not meet the criteria for an AI Incident; however, the system's intended use in defense, including potential autonomous or semi-autonomous decision-making in combat scenarios, could plausibly lead to AI Incidents in the future. It is not merely Complementary Information because the main focus is the new collaboration and its risk implications. Hence, the classification as an AI Hazard is appropriate.[AI generated]

Three Charged in Plot to Illegally Export Advanced AI Chips to China
A Chinese national and two Americans were charged by the U.S. Department of Justice for conspiring to illegally export millions of dollars' worth of advanced AI chips, including NVIDIA GPUs, to China via Thailand. The defendants allegedly falsified documents and used shell companies to circumvent U.S. export controls, raising national security concerns.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves advanced AI chips used in AI systems, and the illegal export violates U.S. export control laws, which are legal frameworks protecting intellectual property and national security. The involvement of AI technology and the breach of legal obligations constitute a violation of rights under applicable law, meeting the criteria for an AI Incident. The harm is indirect but significant, as it undermines legal protections and could facilitate unauthorized AI development or deployment in a restricted country. Hence, it is not merely a potential hazard or complementary information but an actual incident involving AI-related harm.[AI generated]

AI-Generated Deepfakes Used in Disinformation Campaigns Targeting Turkey
Turkey's Directorate of Communications' Disinformation Combat Center warned of a surge in AI-generated deepfake videos, images, and audio used in disinformation campaigns amid regional tensions. This manipulated content, including a provocative video targeting President Erdoğan, threatens national security and social unity, prompting official advisories for public vigilance.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies to create deepfake visual, audio, and video content for disinformation purposes, which is a clear involvement of AI systems. Although no direct harm is reported as having occurred, the warning about increased disinformation activities and their potential to disrupt national security and social cohesion indicates a plausible risk of harm. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but has not yet directly caused it. The event is not a Complementary Information piece because it focuses on the warning about potential harm rather than updates or responses to past incidents.[AI generated]

Anthropic Introduces Claude Code 'Auto Mode' with Safety Guardrails Amid Potential AI Risks
Anthropic has launched 'auto mode' for its Claude Code AI coding assistant, allowing it to autonomously execute multi-step coding tasks. While designed to boost productivity, the feature introduces credible risks such as data loss or malicious code execution, prompting Anthropic to implement safety classifiers and recommend controlled use.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the new auto mode allows Claude to act independently, including controlling a Mac computer to perform tasks autonomously. This clearly involves an AI system making decisions and acting without human intervention. However, the article does not describe any realized harm such as injury, rights violations, or property damage caused by this feature. It discusses potential risks and the imperfect nature of the safety checks, implying a credible risk of future harm if the system malfunctions or misjudges actions. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to harm but no incident has yet occurred.[AI generated]

German Army Plans AI Integration for Faster Battlefield Decisions
The German army, led by Lt. Gen. Christian Freuding, is developing AI tools to accelerate wartime decision-making by rapidly analyzing battlefield data, drawing on lessons from Ukraine. While AI will serve as an advisory aid with human oversight, its deployment in military operations poses credible future risks if misused or malfunctioning.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems for military decision-making, which could plausibly lead to significant harms if misused or malfunctioning in wartime. However, the article only reports plans and intentions without any actual harm or incident occurring yet. Therefore, it fits the definition of an AI Hazard, as the AI systems' deployment could plausibly lead to harms such as injury, disruption, or violations of rights in conflict scenarios, but no direct or indirect harm has yet materialized.[AI generated]

Ukrainian Company Develops AI-Powered Interceptor Drone UEB-1
Ukrainian company OSIRIS AI has developed the UEB-1 interceptor drone, which uses artificial intelligence for autonomous target prediction, tracking, and interception of high-speed aerial threats. Publicly demonstrated in Düsseldorf, the AI-enabled drone poses potential risks if deployed in military or security contexts, though no harm has yet occurred.[AI generated]
Why's our monitor labelling this an incident or hazard?
The drone is explicitly described as using artificial intelligence for target prediction and tracking, qualifying it as an AI system. Its development and intended use for interception and potential combat roles indicate a credible risk of harm, such as damage to property or escalation of conflict, but the article reports no realized harm, so the event is not an AI Incident. It fits the definition of an AI Hazard because the system's use in military operations could plausibly lead to an AI Incident. The article is not merely Complementary Information or unrelated news, as it focuses on the development of a potentially harmful AI system.[AI generated]

Yongin City Conducts Safety Checks for Autonomous Bus Pilot Project
Yongin City, South Korea, began safety inspections and test runs for its autonomous bus pilot project, involving AI-driven vehicles operating between local landmarks. City officials, including the mayor, emphasized passenger safety and system reliability. No incidents have occurred, but the project highlights potential AI-related risks during public transport trials.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in autonomous vehicles, but no harm or incident has occurred yet. The article discusses ongoing testing and safety measures to prevent harm. Therefore, it represents a plausible future risk scenario where AI system malfunction or failure could lead to harm, but no actual harm has been reported. This fits the definition of an AI Hazard, as the autonomous driving AI system's use could plausibly lead to an AI Incident if failures occur during operation. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated since it clearly involves AI systems in a real-world application with potential safety implications.[AI generated]

German Court Bans AI-Based Biometric Checks in Online Exams
A German court ruled that using AI-powered facial recognition for identity verification in online university exams violates GDPR by unlawfully processing biometric data. The court recognized psychological harm to a student and awarded compensation, establishing that such AI proctoring practices breach fundamental rights.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a 'KI-gestützte Software' (AI-supported software) performing automated biometric facial recognition to verify exam takers' identities. The court found this processing unlawful under GDPR, constituting a violation of fundamental rights and causing immaterial harm (psychological distress). Since the AI system's use directly caused harm recognized by the court, this qualifies as an AI Incident under the framework, specifically a violation of human rights and immaterial harm to a person.[AI generated]

Ford Recalls 254,640 SUVs in US Over AI-Driven Safety Feature Malfunction
Ford is recalling 254,640 SUVs in the US due to a software defect in AI-powered image processing modules, causing loss of rearview camera and advanced driver assistance features. The malfunction increases crash risk, prompting a recall and free software update to restore safety functions.[AI generated]
Why's our monitor labelling this an incident or hazard?
Advanced driver assistance (ADAS) features rely on AI systems for real-time image processing and decision-making to enhance vehicle safety. The software defect disabling these features directly impacts the safety of drivers and passengers, representing a harm to health and safety. Since the malfunction has already occurred and prompted a recall, this constitutes an AI Incident: the safety risk from the AI system's malfunction has materialized rather than remaining hypothetical.[AI generated]

Baltimore Sues Elon Musk's xAI Over Grok Deepfake Harms
The city of Baltimore has sued Elon Musk's xAI and X Corp., alleging their AI chatbot Grok generates and distributes nonconsensual sexually explicit deepfake images, including those of children. The lawsuit claims Grok lacks adequate safeguards, causing widespread harm and violating consumer protection laws.[AI generated]
Why's our monitor labelling this an incident or hazard?
The Grok platform is an AI system capable of generating deepfake images, which are being used to create harmful sexualized content without consent, including illegal child sexual abuse material. This has caused psychological harm and harassment to residents, constituting realized harm. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's outputs.[AI generated]

AI-Generated Fake Law Enforcement Used in Romanian Influence Campaign
Romania's National Cyber Security Directorate (DNSC) warns of an ongoing influence campaign using AI-generated personas falsely presented as police or gendarmes. The campaign micro-targets social media users, exploits emotions, spreads misinformation, and tests public reactions, undermining trust and facilitating fraud. The harm is realized and ongoing.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake characters used in a disinformation campaign that is actively influencing and manipulating the population, causing harm to communities. The use of AI-generated personas to deceive and micro-target users directly leads to harm through misinformation and potential fraud. Therefore, this meets the criteria for an AI Incident as the AI system's use has directly led to harm to communities and potential violations of rights through deception and fraud.[AI generated]

Greek Singer Alkistis Protopsalti Targeted by AI-Generated Deepfake Scam
Greek singer Alkistis Protopsalti was targeted by an online scam involving an AI-generated deepfake video falsely showing her endorsing products without her consent. The video circulated on social media, prompting her to take immediate legal action and alert authorities to protect her reputation and warn the public.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a deepfake video that directly leads to harm by deceiving consumers and causing financial fraud, which constitutes harm to communities and individuals. The AI system's use in creating the fake video is central to the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI-generated content.[AI generated]

Epirus, General Dynamics, and Kodiak AI Unveil Autonomous Counter-Drone Weapon System
Epirus, General Dynamics Land Systems, and Kodiak AI have introduced the Leonidas Autonomous Ground Vehicle, a mobile platform combining AI-powered autonomous driving and high-power microwave technology for counter-drone defense. The system, intended for critical defense and homeland security missions, poses potential risks if misused or malfunctioning.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the development and deployment of an AI system (Kodiak Driver) integrated into a counter-UAS platform that autonomously detects and neutralizes drone threats. While the system is designed for defense and safety, the article does not report any realized harm or incidents caused by the AI system. However, given the nature of the system—an autonomous weaponized platform capable of neutralizing drones—there is a plausible risk of harm if misused or malfunctioning, such as unintended damage or escalation in conflict scenarios. Therefore, this event represents an AI Hazard due to the credible potential for harm stemming from the autonomous AI-enabled counter-UAS system, even though no harm has yet occurred or been reported.[AI generated]

AI-Generated Deepfake X-Rays Deceive Radiologists and AI Systems
A multi-center study found that radiologists and advanced AI models cannot reliably distinguish AI-generated deepfake X-ray images from authentic ones. This vulnerability exposes healthcare to risks such as misdiagnosis, fraudulent litigation, and cybersecurity threats, highlighting the urgent need for improved detection tools and training.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT, RoentGen, and other LLMs) generating synthetic medical images that are indistinguishable from real ones by experts and AI detectors. This misuse or potential malicious use of AI-generated deepfakes can directly lead to harms such as fraudulent litigation, clinical misdiagnosis, and cybersecurity attacks causing clinical chaos, all of which are harms to health and communities. The study documents these risks with evidence of actual AI-generated images deceiving professionals, thus meeting the criteria for an AI Incident rather than a mere hazard or complementary information.[AI generated]

Music Publishers Sue Anthropic Over AI Copyright Infringement
Universal Music Group, Concord, and ABKCO have sued Anthropic, alleging its AI chatbot Claude was trained on and reproduces copyrighted song lyrics without permission. The publishers argue this infringes their intellectual property rights and competes with their market, challenging Anthropic's 'fair use' defense in California court.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) used to generate content based on copyrighted song lyrics without permission, which the publishers claim infringes their copyrights. This is a direct violation of intellectual property rights, a category of harm defined under AI Incidents. The lawsuit and allegations indicate that the AI's use has already caused harm by reproducing copyrighted material and competing with the original market. Although the legal outcome is pending, the described harm has materialized and is not merely potential, making this an AI Incident rather than a hazard or complementary information.[AI generated]

AI Misuse Drives Surge in Child Sexual Abuse Content Online
In 2025, the Internet Watch Foundation reported a 260-fold increase in AI-generated child sexual abuse material online, with over 8,000 images and videos identified. Most videos were classified as the most severe under UK law, highlighting AI's role in producing increasingly extreme and realistic illegal content.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI models) to produce illegal and harmful content (CSAM), which constitutes a violation of human rights and causes significant harm to children and communities. The AI systems' outputs have directly led to the dissemination of harmful material, fulfilling the criteria for an AI Incident. The article also references ongoing harm and the need for regulatory responses, confirming that the harm is realized and ongoing rather than merely potential.[AI generated]

First Conviction in Cyprus for AI-Generated Child Sexual Abuse Material
A Limassol court in Cyprus issued the country's first conviction for crimes involving child sexual abuse material created or distributed using artificial intelligence. Two young individuals pleaded guilty and received suspended prison sentences. The case highlights the growing concern over AI-facilitated exploitation and the legal system's response.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the offenses involve sexual abuse material of minors created or distributed through AI, indicating the use of an AI system in committing the crime. The court conviction confirms that harm has occurred, specifically violations of rights and serious social harm. This meets the criteria for an AI Incident, as the AI system's use directly led to harm. The legal response and sentencing further confirm the materialization of harm rather than a potential risk or complementary information.[AI generated]

Gwangmyeong City Deploys AI Fire Prevention System in Traditional Markets
Gwangmyeong City, South Korea, has partnered with Slano and local markets to install 500 AIoT devices for real-time fire detection and prevention. The AI system analyzes sensor data to predict fire risks, enabling early alerts and comprehensive safety management, aiming to reduce injury and property damage in crowded market environments.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, analyzing real-time sensor data to detect fire hazards before they escalate, with the aim of preventing injury and property damage through early detection and alerts to relevant parties. No harm caused by the system has been reported. Because the system operates in a safety-critical role, a malfunction or missed detection could plausibly contribute to harm, so the event is classified as an AI Hazard rather than an AI Incident.[AI generated]

IXOPAY and Zip Launch Framework to Address AI Risks in Agentic Commerce
IXOPAY and Zip have launched a joint initiative to develop a "Unified Trust Layer" framework aimed at addressing trust, identity, and liability challenges in AI-driven, agent-initiated commerce. The framework seeks to mitigate potential risks such as fraud as AI agents increasingly conduct autonomous transactions. No actual harm has occurred yet.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous AI agents conducting transactions (agentic commerce), and the framework is designed to mitigate risks associated with these AI systems. However, the article does not report any realized harm, injury, rights violations, or disruptions caused by AI systems. Instead, it presents a cooperative initiative to address potential risks and improve trust infrastructure in anticipation of future challenges. Therefore, this is a case of an AI Hazard, as the framework aims to prevent plausible future harms related to AI-driven commerce, but no actual AI Incident has occurred yet.[AI generated]