
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking requires evidence, foresight, and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and other stakeholders worldwide gain insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have in fact declined as a share of all AI-related news (see the chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events.]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 13,695 incidents & hazards

Waymo Robotaxi Impedes Emergency Response and Is Shot at During Austin Shootings

2026-03-02
United States

In Austin, Texas, a Waymo self-driving taxi blocked emergency vehicles during a fatal mass shooting, briefly delaying ambulance access. In a separate incident, another Waymo robotaxi was shot at while carrying a passenger, causing vehicle damage but no injuries. Both incidents highlight safety and reliability concerns for autonomous vehicles in critical situations.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
General public
Harm types:
Public interest
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves a Waymo robotaxi, an AI system for autonomous driving. The AI system's malfunction (stalling and failing to move out of the way) directly delayed emergency responders reaching victims of the mass shooting, disrupting critical emergency services. Although the delay was brief and did not ultimately affect patient outcomes, the AI system's failure to act appropriately in this high-stakes context meets the criteria for an AI Incident due to disruption of critical infrastructure management and operation. The presence of harm (disruption) and direct causation by the AI system's malfunction justify classification as an AI Incident rather than a hazard or complementary information.[AI generated]


Blind YouTuber Applies for Neuralink AI Vision Restoration Trial

2026-03-02
United States

Blind Korean YouTuber 'Oneshot Hansol' has applied to participate in Neuralink's clinical trial for 'Blindsight,' an AI-powered brain implant aiming to restore vision by stimulating the visual cortex. While no harm has occurred, concerns about privacy, hacking, and social inequality have been raised regarding the technology's future use.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers; General public
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
Business function:
Research and development
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (Neuralink's brain implant technology and robotic surgery) in a clinical trial aimed at restoring vision to a blind person. While the technology is promising and intended for health benefits, the article does not report any actual harm or injury yet. The participant expresses concerns about potential misuse or hacking, indicating plausible future risks. Since no harm has occurred but plausible harm could arise from the AI system's use, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Supreme Court Flags Use of AI-Generated Fake Judgments in Indian Trial Court

2026-03-02
India

The Supreme Court of India has taken serious note of a trial court's reliance on AI-generated fake or non-existent judgments in a civil dispute, warning that such conduct constitutes judicial misconduct and undermines the integrity of the legal process. The court is examining the consequences and accountability of this misuse.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Government, security, and defence
Affected stakeholders:
General public; Government
Harm types:
Public interest; Reputational
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used to generate fake judicial judgments that were relied upon by a trial court, leading to a direct impact on the integrity of the adjudicatory process. This constitutes a violation of legal obligations and undermines the fundamental rights to fair judicial process, fitting the definition of an AI Incident due to realized harm caused by the AI system's outputs in a legal context.[AI generated]


Seoul's AI System Rapidly Deletes Digital Sexual Crime Content Nationwide

2026-03-02
Korea

Seoul City developed an AI system that detects and deletes illegal digital sexual exploitation content online, cutting removal time from 3 hours to 6 minutes and improving accuracy. The technology, credited with a significant increase in the amount of harmful content removed, is now being distributed free to institutions across South Korea to better protect victims.[AI generated]

Industries:
Government, security, and defence
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly described as being used to detect and remove illegal and harmful content related to digital sexual crimes, which directly protects victims from ongoing harm. The use of AI here is central to reducing harm and supporting victim protection, thus the event involves the use of an AI system that has directly led to harm mitigation. Therefore, this qualifies as an AI Incident because the AI system's use is directly linked to preventing and addressing violations of human rights and harm to individuals and communities.[AI generated]


Bengaluru Techie Fires Cook After AI Surveillance Detects Theft

2026-03-02
India

A Bengaluru tech professional, Pankaj Tanwar, used an AI-powered surveillance system in his kitchen to monitor his cook. The AI, integrating vision and language models, detected the cook taking fruits without permission, leading to her dismissal. The incident sparked online debate over privacy, ethics, and labor rights.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Consumer products
Affected stakeholders:
Workers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Recognition/object detection; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved, as the techie uses a vision AI model and language model chatbot to monitor and report the cook's actions. The AI system's use directly led to the firing of the cook for stealing, which is a harm related to labor rights and privacy violations. The AI system's role is pivotal in detecting and documenting the theft, which otherwise might have gone unnoticed. Hence, this is an AI Incident involving harm to labor rights and privacy through AI-enabled surveillance and consequent employment action.[AI generated]


AI-Driven Online Financial Scams Surge in Bulgaria

2026-03-02
Bulgaria

European financial regulators warn of a sharp rise in online financial scams in Bulgaria, enabled by AI-generated fake messages, profiles, voices, and videos. Criminals use these technologies to impersonate trusted individuals, leading to financial loss, identity theft, and psychological harm among victims.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Financial and insurance services; Digital security
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Psychological; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-generated content (voices, videos, messages) by scammers to perpetrate financial frauds that have already caused harm such as financial loss and psychological stress. The AI systems' use is central to the harm, as they enable more convincing and effective scams. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (financial and psychological). The article is not merely a warning or potential risk but describes realized harms due to AI-enabled scams.[AI generated]


AI Deepfake Voice Scams Target 1 in 4 Americans, Causing Financial and Emotional Harm

2026-03-02
United States

AI-generated deepfake voice calls have targeted one in four Americans in the past year, leading to significant financial losses and emotional distress, especially among seniors. The widespread use of AI in these scams has eroded trust in mobile networks and prompted calls for stricter regulation and carrier accountability.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Digital security; IT infrastructure and hosting
Affected stakeholders:
General public
Harm types:
Economic/Property; Psychological; Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI voice deepfakes being used in fraudulent calls that have directly led to financial harm to victims, particularly older adults who have lost significant amounts of money. The AI system's use in cloning voices for scams is a direct cause of harm. The event involves the use of AI systems (deepfake voice generation) leading to realized harm (financial losses and erosion of trust), meeting the criteria for an AI Incident. The discussion of regulatory demands and carrier responsibility is complementary but does not change the primary classification.[AI generated]


China Raises Concerns Over US Plans for AI-Powered Cyber Operations

2026-03-02
China

China has expressed strong concerns after reports that the US Department of Defense is exploring partnerships with major AI firms to develop AI-powered cyber tools for automated reconnaissance and potential cyberattacks targeting China's critical infrastructure. Beijing warns of heightened cybersecurity risks and vows to take necessary protective measures.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Public interest; Economic/Property
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-powered cyber tools being discussed for reconnaissance and cyber operations, indicating AI system involvement. The concerns raised by China about potential cyberattacks and destabilization reflect a credible risk of harm to critical infrastructure and cybersecurity, which aligns with the definition of an AI Hazard. Since no actual harm or cyber incident caused by these AI tools is reported, and the focus is on potential future risks and geopolitical tensions, the event fits the classification of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Google Chrome Gemini AI Vulnerability Exposes Users to Surveillance and Data Theft

2026-03-02
United States

A high-severity vulnerability in Google Chrome's Gemini AI assistant allowed malicious browser extensions to exploit the AI panel's elevated privileges, enabling unauthorized access to users' cameras, microphones, local files, and sensitive data. Discovered by Palo Alto Networks' Unit 42, the flaw was patched by Google in January 2026.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly mentioned as the Gemini agentic AI feature in Google Chrome. The vulnerability allowed malicious extensions to exploit the AI system's permissions and perform unauthorized actions, directly leading to harms such as spying on users, stealing data, and phishing. These harms fall under injury to privacy and security of persons, which is a violation of rights and harm to individuals. Since the vulnerability was actively exploitable and caused realized harm, this qualifies as an AI Incident. The article also discusses broader security implications and mitigation efforts, but the primary focus is on the realized harm from the vulnerability exploitation.[AI generated]


Telkom Indonesia Warns of Data Leakage Risks from Public AI Use

2026-03-02
Indonesia

PT Telkom Indonesia cautioned employees against uploading internal company documents to public AI platforms like ChatGPT and Gemini, citing risks of sensitive data being stored on external servers and potential data leakage. The company is developing an internal AI chatbot to mitigate these risks.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
IT infrastructure and hosting; Digital security
Affected stakeholders:
Business
Harm types:
Human or fundamental rights
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (public AI applications like ChatGPT) and concerns the use of these systems in a way that could plausibly lead to data leakage, a form of harm to property or business interests. Since no actual data breach or harm has been reported yet, but the risk is credible and foreseeable, this qualifies as an AI Hazard. The article is a warning and advisory about potential harm rather than a report of an incident or a complementary update on a past event.[AI generated]


SERAP Calls for Investigation into Big Tech's Algorithmic Harms in Nigeria

2026-03-01
Nigeria

The Socio-Economic Rights and Accountability Project (SERAP) has urged Nigeria's FCCPC to investigate major tech companies, including Google, Meta, and others, over alleged harms caused by opaque AI-driven algorithms. SERAP cites concerns about algorithmic discrimination, privacy violations, consumer harm, and threats to media freedom and democracy in Nigeria.[AI generated]

AI principles:
Fairness; Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Human or fundamental rights; Public interest
Severity:
AI hazard
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The article involves AI systems in the form of opaque algorithms used by major digital platforms that influence information and market competition. However, it does not report a specific AI Incident where harm has already occurred; rather, it highlights concerns about possible algorithmic discrimination and consumer harm that could plausibly lead to violations of rights and market abuses. Therefore, this is best classified as an AI Hazard, as it concerns credible risks and calls for investigation and regulatory action to prevent harm.[AI generated]


Critical OpenClaw AI Vulnerability Allows Malicious Websites to Hijack Local AI Agents

2026-03-01

A critical vulnerability in the OpenClaw AI agent framework, dubbed ClawJacked, allowed malicious websites to hijack locally running AI agents via WebSocket connections. Exploited in the wild, this flaw enabled attackers to gain unauthorized control, access sensitive data, and distribute malware, impacting developers and enterprises globally. The issue has since been patched.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Workers; Business
Harm types:
Economic/Property; Human or fundamental rights; Reputational
Severity:
AI incident
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (OpenClaw AI agents) whose design flaw and exploitation have directly led to harm in enterprise environments, including unauthorized access and control over AI agents, which can trigger actions across SaaS, cloud, and internal tools. This constitutes a violation of security and potentially human rights or organizational integrity, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as malware campaigns exploiting this flaw have been documented. Therefore, this is classified as an AI Incident.[AI generated]


AI-Orchestrated Strike Kills Iranian Leader in Tehran

2026-03-01
Iran

A coalition of advanced AI systems, including Palantir's Gotham, Anthropic's Claude, and Anduril's autonomous platforms, orchestrated a targeted military operation in Tehran that resulted in the death of Iran's Supreme Leader, Ali Khamenei, and senior officials. The AI systems autonomously integrated intelligence, disabled defenses, and directed lethal drone strikes, marking a historic AI-led kill chain.[AI generated]

AI principles:
Respect of human rights; Accountability
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Physical (death); Public interest
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The event explicitly involves multiple AI systems used in a lethal military operation that directly led to the death of a person, which is a clear harm to human life. The AI systems were not merely supportive tools but were central to decision-making, intelligence processing, and autonomous or semi-autonomous execution of the strike. This meets the definition of an AI Incident because the AI's development, use, and malfunction (if any) directly led to harm (death). The article does not describe a potential or plausible future harm but an actual realized harm caused by AI systems. Hence, the classification is AI Incident.[AI generated]


US Military Deploys AI-Enabled LUCAS Suicide Drones Against Iran

2026-03-01
United States

The US military, via its Task Force Scorpion Strike, deployed AI-enabled LUCAS suicide drones—reverse-engineered from Iran’s Shahed-136—in combat against Iranian targets. These autonomous, low-cost drones were used for the first time in large-scale strikes, demonstrating direct harm caused by AI systems in military operations.[AI generated]

AI principles:
Accountability; Respect of human rights
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
Government; General public
Harm types:
Physical (death); Physical (injury)
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI services from Anthropic in a military attack involving advanced weapons and suicide drones. The suicide drones are AI-enabled systems used in combat, which directly relates to harm through lethal military action. The involvement of AI in the operation, even if the exact role is not fully detailed, is clearly linked to the use of autonomous or semi-autonomous weaponry causing or capable of causing injury or death. This fits the definition of an AI Incident because the AI system's use in the attack has directly led to harm in a conflict setting.[AI generated]


Australia Threatens to Block AI Services Over Age Verification Failures

2026-03-01
Australia

Australia's internet regulator warned it may require search engines and app stores to block AI services, such as chatbots, that fail to implement age verification and restrict harmful content for minors. This follows widespread non-compliance with new rules aimed at protecting youth from exposure to harmful AI-generated material.[AI generated]

AI principles:
Safety; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Psychological
Severity:
AI hazard
AI system task:
Content generation; Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article involves AI systems explicitly (AI chatbots, search engines with AI capabilities) and discusses their use and potential misuse leading to harm, particularly to minors' mental health and exposure to harmful content. Although no specific AI Incident (realized harm) is reported, the regulatory warnings and the lack of compliance by many AI services indicate a credible risk of harm. The focus is on preventing future harm through regulation, fitting the definition of an AI Hazard. The article is not merely complementary information because it centers on the potential for harm and regulatory action rather than just updates or responses to past incidents.[AI generated]


Potential AI-Enabled Satellite Warfare Risks Between US and China

2026-03-01
China

The article discusses the potential risk of the US attacking China's AI-enabled Beidou satellite system, which is crucial for military navigation and guidance. It highlights the strategic importance of AI in satellite defense and the possible consequences of AI-driven satellite warfare, though no actual incident has occurred.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Public interest
Severity:
AI incident
AI system task:
Event/anomaly detection; Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

The event involves the use and misuse of AI systems (e.g., AI analysis of open-source intelligence) and advanced surveillance technologies to obtain sensitive military and strategic information, which constitutes a violation of national security and breaches fundamental rights to confidentiality and sovereignty. The harms described are actual and ongoing, including espionage and insider leaks, which directly harm China's military and strategic interests. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly and indirectly led to significant harm to a nation's security and strategic assets.[AI generated]


AI-Generated Misinformation Disrupts New Zealand Elections and Disaster Response

2026-03-01
New Zealand

In New Zealand, generative AI has been used to create and spread misleading images and political content, including fake images of a landslide and AI-generated attack ads. This has led to public confusion, misinformation during a national disaster, and potential harm to election integrity, while current regulations lag behind.[AI generated]

AI principles:
Democracy & human autonomy; Transparency & explainability
Industries:
Media, social platforms, and marketing; Government, security, and defence
Affected stakeholders:
General public
Harm types:
Public interest; Reputational; Psychological
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated fake images and deepfake videos that have been widely shared and have misled people, including during a national disaster and in political campaigns. This demonstrates direct involvement of AI systems in producing misleading content that harms communities by spreading misinformation and undermining trust in democratic processes. The harms are realized, not just potential, and the AI system's use in generating false political ads and misinformation is central to the event. Therefore, this is an AI Incident. The discussion of legal inadequacies and calls for reform support the context but do not change the classification, as the primary focus is on ongoing harm caused by AI-generated misinformation in elections.[AI generated]


Public Boycott of OpenAI After Pentagon AI Deal Raises Military AI Ethics Concerns

2026-03-01
United States

A mass online boycott campaign, "QuitGPT," has mobilized over 1.5 million people to protest OpenAI's agreement with the U.S. Pentagon to deploy AI models in classified military networks. The campaign highlights public fears of potential misuse, such as autonomous weapons and mass surveillance, and follows Anthropic's refusal to grant similar access.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Public interest; Human or fundamental rights
Severity:
AI hazard
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (OpenAI's models) in a military context, which is explicitly stated. The protest and boycott are reactions to the potential risks associated with this use, such as mass surveillance and autonomous weapons, which are credible and plausible future harms. No actual harm has been reported yet, so it does not qualify as an AI Incident. The article focuses on the potential for harm and societal response rather than reporting a realized harm or incident. Hence, the classification as AI Hazard is appropriate.[AI generated]


AI-Powered WaTracker App Circumvents WhatsApp Privacy Controls

2026-03-01
United States

iToolab's WaTracker 1.3.3 uses AI to intercept and store WhatsApp's 'view once' media, allowing repeated access to photos, videos, and messages intended to be ephemeral. This undermines user privacy and violates WhatsApp's intended protections, resulting in ongoing privacy and rights violations. The incident is centered in New York, USA.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Digital security; Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Other
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

WaTracker uses AI or algorithmic methods to track and analyze WhatsApp user activity and to recover "view once" media, which is designed to be ephemeral. This functionality involves processing and interpreting WhatsApp data in ways that circumvent user privacy and WhatsApp's intended controls. The app's use directly leads to violations of privacy and potentially breaches user rights, constituting harm to individuals' rights and privacy. Therefore, the event describes an AI system whose use has directly led to violations of rights, qualifying it as an AI Incident.[AI generated]


UK Teen's Suicide Linked to Harmful AI-Driven Social Media Algorithms

2026-03-01
United Kingdom

British teenager Molly Russell died by suicide after being exposed to pro-suicide content recommended by social media algorithms. Her father is campaigning for accountability and regulatory change, highlighting the role of AI-driven recommendation systems in amplifying harmful material to vulnerable users. The incident occurred in the United Kingdom.[AI generated]

AI principles:
Safety; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Physical (death); Psychological
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

The social media platforms use AI systems to curate and recommend content, including harmful pro-suicide material. The algorithms' addictive nature and targeting of vulnerable individuals directly relate to the harm suffered. The event describes a realized harm (the teenager's death) linked to AI system use, thus qualifying as an AI Incident under the framework.[AI generated]