
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be drawing more media attention, they have in fact declined as a share of all AI news (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: about 14,496 incidents & hazards
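The chart's metric can be sketched in a few lines (hypothetical data, field names, and logic; the monitor's actual pipeline is not public). Consistent with the note above, articles are first deduplicated to distinct events, and the monthly share is the number of distinct incidents or hazards relative to all AI news items in that month:

```python
from collections import defaultdict

# Hypothetical article records: (month, event_id), where event_id is None
# for general AI news not tied to an incident or hazard.
articles = [
    ("2026-03", "evt-1"), ("2026-03", "evt-1"),   # two articles, same incident
    ("2026-03", None), ("2026-03", None), ("2026-03", None),
    ("2026-04", "evt-2"), ("2026-04", None),
]

def incident_share(articles):
    """Percentage of AI news items per month that report an incident or hazard,
    counting each event once regardless of how many articles cover it."""
    events = defaultdict(set)   # month -> distinct event ids
    totals = defaultdict(int)   # month -> total AI news items
    for month, event_id in articles:
        totals[month] += 1
        if event_id is not None:
            events[month].add(event_id)
    return {m: 100 * len(events[m]) / totals[m] for m in totals}

print(incident_share(articles))  # evt-1 is counted once despite two articles
```

This is only an illustration of the deduplication idea, not the monitor's implementation.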

Meta Implements Employee Activity Tracking to Train AI Models

2026-04-21
United States

Meta is installing tracking software on U.S.-based employees' computers to log keystrokes, mouse movements, and screen content for AI training. The initiative, aimed at improving AI agents' ability to perform work tasks, raises concerns about employee privacy and potential labor rights violations.[AI generated]

AI principles:
Privacy & data governance, Respect of human rights
Industries:
Business processes and support services
Affected stakeholders:
Workers
Harm types:
Human or fundamental rights
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system being trained with employee activity data collected via a new tracking tool. While employees express concerns about potential job cuts and privacy implications, no actual harm or rights violations have been documented as having occurred. The tracking for AI training purposes could plausibly lead to harms such as labor rights violations or privacy breaches, making this an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the new tracking tool and its implications, not on responses or updates to prior incidents. It is not Unrelated because the event clearly involves AI system development and use with potential for harm.[AI generated]


AI-Generated Influencer 'Emily Hart' Used to Scam MAGA Supporters

2026-04-21
India

A 22-year-old Indian medical student used Google's Gemini AI to create a fake influencer persona, 'Emily Hart,' targeting American MAGA supporters with AI-generated images and content. The account amassed thousands of followers and generated significant income through subscriptions and merchandise before being banned for fraudulent activity, causing financial and social harm.[AI generated]

AI principles:
Accountability, Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Economic/Property, Psychological
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of generative AI systems to create a fake influencer persona that deceived users and generated income through fraudulent means. The AI system's outputs were central to the deception and monetization, directly causing harm to the users who were misled and financially exploited. The account was removed for fraudulent activity, confirming the harm occurred. This fits the definition of an AI Incident as the AI system's use directly led to harm (financial and trust-related) to groups of people.[AI generated]


Sullivan & Cromwell Apologizes for AI-Generated Errors in Court Filing

2026-04-21
United States

Sullivan & Cromwell, a leading Wall Street law firm, apologized to a federal judge after submitting a court filing containing numerous fabricated legal citations generated by an AI system. The errors, discovered by an opposing firm, led to a review of the firm's internal processes and raised concerns about AI reliability in legal practice.[AI generated]

AI principles:
Robustness & digital security, Transparency & explainability
Industries:
Other
Affected stakeholders:
Business
Harm types:
Reputational, Public interest
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI to generate legal citations, which were fabricated ('hallucinations'), leading to errors in a court filing. This directly caused harm by misleading the court and opposing counsel, constituting a violation of legal and professional standards. The AI system's malfunction or misuse is central to the incident. The harm is realized, not just potential, as the false citations were submitted and discovered, prompting an apology and review. Hence, it meets the criteria for an AI Incident under violations of legal obligations and harm to the judicial process.[AI generated]


U.S. Establishes AI-Powered Autonomous Military Force for Latin America

2026-04-21
United States

The U.S. Army has announced the creation of an autonomous military force using AI to support Southern Command operations in Central and South America and the Caribbean. The initiative aims to combat drug cartels and respond to crises, raising concerns about potential future harm from AI-enabled autonomous weapons systems.[AI generated]

AI principles:
Safety, Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death), Physical (injury), Human or fundamental rights
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-based autonomous and semi-autonomous systems for military purposes, which qualifies as AI system involvement. The event concerns the development and planned deployment of these systems, not a realized harm. However, autonomous weapons and military AI systems inherently carry credible risks of causing injury, disruption, or other harms. Since no actual harm is reported yet, but the plausible future harm is clear, this fits the definition of an AI Hazard. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it directly involves AI systems with potential for harm.[AI generated]


Bundesbank Warns of Cybersecurity Risks from Anthropic's Mythos AI Model

2026-04-21
Germany

Joachim Nagel, president of Germany's Bundesbank, warned that Anthropic's advanced AI model, Mythos, could identify and exploit vulnerabilities in European banking software, posing significant cybersecurity risks. He urged for broader oversight and access to the technology to prevent misuse and protect financial stability.[AI generated]

AI principles:
Robustness & digital security, Accountability
Industries:
Financial and insurance services
Affected stakeholders:
Business, General public
Harm types:
Economic/Property, Public interest
Severity:
AI hazard
Business function:
ICT management and information security
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (Anthropic's Mythos) capable of identifying and exploiting software vulnerabilities. The Bundesbank chief warns about the potential for malicious use, which could lead to disruption of critical infrastructure (financial institutions). Since no actual incident has occurred yet but there is a credible risk of harm, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


AI-Generated Deepfake of Wendie Renard Used in Investment Scam

2026-04-21
France

A deepfake video generated by AI, impersonating French footballer Wendie Renard, was circulated online to promote a fraudulent AI investment scheme, particularly targeting residents of Martinique. Renard filed a legal complaint for identity theft and warned the public about the scam's risks and reputational harm.[AI generated]

AI principles:
Transparency & explainability, Respect of human rights
Industries:
Financial and insurance services; Media, social platforms, and marketing
Affected stakeholders:
Consumers, Other
Harm types:
Economic/Property, Reputational, Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system generating a deepfake video, which is a clear AI application. The deepfake has been used maliciously to impersonate Wendie Renard and promote a scam, directly causing reputational harm and posing financial risks to people targeted by the video. The harm is realized (identity theft, potential financial fraud), and the AI system's role is pivotal in enabling this harm. Therefore, this qualifies as an AI Incident under the framework.[AI generated]


DHS Plans AI-Powered Smart Glasses for Real-Time Biometric Surveillance

2026-04-21
United States

The U.S. Department of Homeland Security is developing AI-powered smart glasses for immigration enforcement agents, enabling real-time biometric identification and access to watchlist data in the field. The project, slated for deployment by 2027, raises significant concerns about privacy, civil liberties, and potential misuse of AI surveillance technologies.[AI generated]

AI principles:
Privacy & data governance, Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights
Severity:
AI hazard
Business function:
Compliance and justice
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the development and intended use of AI systems (smart glasses with facial recognition and biometric databases) by DHS/ICE for surveillance purposes. The potential harms include violations of civil rights, privacy, and mass surveillance, which are serious human rights concerns. However, the article does not report any actual harm or incident resulting from the use of these glasses yet, only the plans and concerns about their future use. Thus, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident but has not yet done so.[AI generated]


AI-Generated Singer in Romania Sparks Racism and Discrimination Debate

2026-04-21
Romania

The AI-generated singer Lolita Cercel has become a sensation in Romania, but has drawn criticism for perpetuating racist stereotypes against the Roma minority and causing economic and reputational harm to real Roma musicians. The incident highlights concerns over AI's role in reinforcing discrimination and replacing human artists.[AI generated]

AI principles:
Fairness, Respect of human rights
Industries:
Arts, entertainment, and recreation
Affected stakeholders:
Workers
Harm types:
Economic/Property, Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly involved as it generates the singer's music and image. The harm arises from the AI-generated content reinforcing racist clichés and stereotypes about the Roma minority, which is a violation of human rights and harms the community. The event describes realized harm through social and cultural impacts, including criticism from Roma activists and musicians, and the perpetuation of latent racism. Hence, it meets the criteria for an AI Incident due to indirect harm caused by the AI system's outputs.[AI generated]


AI-Generated Code Increases Engineer Workload and Software Defects in Japan

2026-04-21
Japan

A survey of 322 Japanese IT engineers revealed that the widespread use of AI-generated code has led to a significant increase in reviewer workload, with 78.6% experiencing bugs or defects caused by AI code. Nearly 90% reported increased review burdens, often requiring over three extra hours per week to maintain software quality.[AI generated]

AI principles:
Accountability, Robustness & digital security
Industries:
IT infrastructure and hosting
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Research and development
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (AI code generation tools) whose outputs (AI-generated code) have directly led to bugs and defects requiring additional review and fixes, causing increased workload and quality concerns. These constitute realized harms related to software quality and reliability, which fall under harm to property and disruption of operations. The survey data confirms that these harms are occurring and significant. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are realized and directly linked to AI system use.[AI generated]


Studies Link ChatGPT Use to Reduced Brain Activity and Cognitive Skills

2026-04-21
United States

Multiple studies led by MIT's Nataliya Kosmyna found that students using AI tools like ChatGPT showed up to 55% less brain activity in creativity and information-processing areas, produced similar essays, and struggled with memory recall. These findings raise concerns about AI's negative impact on human cognition.[AI generated]

AI principles:
Human wellbeing
Industries:
Education and training
Affected stakeholders:
Consumers
Harm types:
Psychological
Severity:
AI hazard
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (ChatGPT and LLMs) and their impact on human brain activity and cognitive skills. The study shows a correlation between AI reliance and diminished critical thinking, which is a form of potential harm to individuals' cognitive health and educational development. However, the article does not report any realized injury, rights violation, or other direct harm caused by the AI system's malfunction or misuse. The harm is potential and plausible, related to future educational and cognitive risks. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their effects.[AI generated]


AI-Generated Fake Job Offers Lead to Widespread Scams and Data Theft

2026-04-21
Romania

Scammers are increasingly using AI to create highly personalized and convincing fake job offers, deceiving job seekers into providing money or sensitive personal data. These AI-driven recruitment scams, difficult to detect due to their sophistication, have resulted in significant financial losses and privacy breaches for thousands of victims.[AI generated]

AI principles:
Privacy & data governance, Transparency & explainability
Industries:
Business processes and support services
Affected stakeholders:
Consumers
Harm types:
Economic/Property, Human or fundamental rights
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate fake job offers and messages that trick victims into giving money or sensitive personal information, causing realized harm. The AI system's role is pivotal in making the scams credible and effective, directly leading to harm to individuals' finances and privacy. Therefore, this qualifies as an AI Incident under the definition of harm to persons and communities caused by AI misuse.[AI generated]


AI Surveillance System Aids Arrest After Hit-and-Run in Teresina

2026-04-21
Brazil

In Teresina, Brazil, a woman who struck a homeless man with her car was swiftly located and arrested after police used the SPIA AI surveillance system. Despite the vehicle's partially illegible license plate, the AI-enabled system identified and tracked the suspect, enabling law enforcement to apprehend her following the incident.[AI generated]

Industries:
Government, security, and defence
Severity:
AI incident
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used by the police to identify and locate the suspect vehicle and driver after a hit-and-run incident causing injury to a person. The AI system's involvement was in the use phase, aiding in the investigation and arrest. The harm (injury to the pedestrian) has occurred, and the AI system played a pivotal role in addressing the incident. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the harm event.[AI generated]


Delhi High Court Restrains AI-Generated Deepfakes Exploiting Allu Arjun's Persona

2026-04-21
India

The Delhi High Court issued an injunction protecting actor Allu Arjun's personality rights after AI tools and deepfake technologies were used to clone his voice, simulate fake calls, and create unauthorized content for commercial gain. The order restrains multiple entities from exploiting his identity through AI-generated media and merchandise.[AI generated]

AI principles:
Respect of human rights, Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Other
Harm types:
Reputational, Economic/Property
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The involvement of AI is explicit through the mention of deepfakes, which are AI-generated synthetic media. The harm relates to violation of personality rights and intellectual property rights due to unauthorized commercial use of AI-generated content. Since the court order is a response to an ongoing harm (unauthorized use of AI to exploit the actor's persona), this constitutes an AI Incident involving violation of rights. The event is not merely a general AI-related update or a potential future risk but a concrete legal action addressing realized harm caused by AI misuse.[AI generated]


Australian Regulators Monitor Anthropic's Mythos AI for Banking Cyber Risks

2026-04-20
Australia

Australian financial regulators, including ASIC and APRA, are closely monitoring Anthropic's advanced AI model Mythos due to concerns it could expose cybersecurity vulnerabilities and destabilize banking systems. No harm has occurred, but authorities are proactively assessing risks and coordinating with global counterparts to safeguard financial infrastructure.[AI generated]

AI principles:
Robustness & digital security
Industries:
Financial and insurance services, Digital security
Affected stakeholders:
Business, General public
Harm types:
Economic/Property, Public interest
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Anthropic's Mythos) with advanced capabilities that could plausibly lead to harm by destabilizing banking systems through cybersecurity vulnerabilities. However, the article only describes regulatory monitoring and risk assessment without any realized harm or incident. Therefore, this constitutes an AI Hazard, as the AI system's development and potential use could plausibly lead to an AI Incident in the future, but no direct or indirect harm has occurred yet.[AI generated]


Asian Regulators Heighten Cybersecurity Over Anthropic's Mythos AI Risks

2026-04-20
Singapore

Regulators in Singapore, South Korea, and Australia are increasing scrutiny of financial institutions' cybersecurity due to concerns over Anthropic's AI model Mythos, which can identify previously undetected security flaws. Authorities are urging banks to strengthen defenses, though no actual harm has occurred yet.[AI generated]

AI principles:
Robustness & digital security
Industries:
Financial and insurance services, Digital security
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Claude Mythos Preview) capable of discovering software vulnerabilities autonomously, which could be exploited to cause cyberattacks on critical financial infrastructure. Although no actual incident or harm has occurred yet, the article outlines credible scenarios where such AI capabilities could lead to significant harm, including disruption of financial services and loss of trust, which fall under harm categories (b) and (d). The focus is on plausible future harm and preparedness rather than a realized incident, fitting the definition of an AI Hazard. The article also discusses governance and mitigation strategies but the primary subject is the potential risk posed by the AI system, not just complementary information about responses. Hence, the classification is AI Hazard.[AI generated]


Leaked Data Reveals Tesla Concealed Thousands of AI-Driven Autopilot Accidents

2026-04-20
United States

Leaked internal data shows Tesla knowingly concealed thousands of accidents, including fatalities, caused by its AI-based Autopilot system. Despite being aware of recurring malfunctions—such as sudden acceleration and braking—Tesla continued public road testing. Regulatory investigations and lawsuits have followed, highlighting significant harm from the AI system's failures.[AI generated]

AI principles:
Safety, Transparency & explainability
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers, General public
Harm types:
Physical (death), Physical (injury)
Severity:
AI incident
Business function:
Research and development
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection, Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The autonomous driving system is an AI system as it uses AI to perceive and make driving decisions. The event involves the use and malfunction of this AI system, which directly led to physical harm and fatalities, fulfilling the criteria for an AI Incident. The concealment of these incidents and the court ruling demonstrate that harm has materialized, not just potential harm. Therefore, this event is classified as an AI Incident.[AI generated]


Lovable AI App Builder Exposes Sensitive User Data via API Flaw

2026-04-20
Sweden

A critical API vulnerability in Lovable, a Stockholm-based AI app-building platform, allowed unauthorized access to sensitive data—including AI chat histories, source code, and customer records—from thousands of projects. Despite Lovable denying a data breach, unclear documentation and broken authorization led to significant privacy and security risks for users.[AI generated]

AI principles:
Privacy & data governance, Robustness & digital security
Industries:
IT infrastructure and hosting, Digital security
Affected stakeholders:
Consumers, Business
Harm types:
Human or fundamental rights
Severity:
AI incident
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

An AI system is involved as Lovable is an AI app-building platform handling AI chat histories and code projects. The event stems from the use and design of the AI system's visibility settings, which led to unauthorized access to sensitive data, including AI chat histories and customer records. This constitutes a violation of privacy and potentially breaches obligations to protect user data, which falls under harm to rights and possibly harm to individuals. Although the company denies a breach, the exposure of sensitive data due to unclear documentation and design is a realized harm. Therefore, this qualifies as an AI Incident because the AI system's use and design directly led to harm through unauthorized data exposure.[AI generated]


Critical Vulnerability in Anthropic's MCP Exposes AI Systems to Remote Code Execution

2026-04-20
United States

A critical architectural flaw in Anthropic's Model Context Protocol (MCP), widely used in AI agents and frameworks like Flowise, enables remote code execution and data breaches. Security researchers demonstrated live exploitation, affecting millions of users and over 200,000 servers, with sensitive data and systems compromised due to the protocol's design.[AI generated]

AI principles:
Robustness & digital security, Privacy & data governance
Industries:
Digital security, IT infrastructure and hosting
Affected stakeholders:
Consumers, Business
Harm types:
Human or fundamental rights, Economic/Property
Severity:
AI incident
AI system task:
Interaction support/chatbots, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI systems (AI agents orchestrated via MCP) in a confirmed cyber-espionage campaign that targeted high-value organizations, causing harm through unauthorized data access and exploitation. The MCP flaw is a systemic architectural vulnerability in AI system integration, directly enabling these attacks. The harm is realized and significant, involving breaches of security and potential violations of rights and property. The involvement of AI is central and pivotal to the incident, as the AI agents autonomously conducted the intrusion lifecycle. This meets the criteria for an AI Incident because the AI system's use and the architectural flaw directly led to harm. The article also discusses broader systemic risks and governance responses but the primary focus is on the realized harm from AI misuse.[AI generated]


Global Surge in AI-Driven Fraud and Deepfake Scams

2026-04-20
United States

AI technologies such as deepfakes, generative AI, and autonomous agents are increasingly used by criminals for large-scale fraud, identity theft, and social engineering scams worldwide. These AI-enabled attacks have caused significant financial harm to individuals and organizations, with Southeast Asia and the United States among the hardest hit regions.[AI generated]

AI principles:
Privacy & data governance, Accountability
Industries:
Financial and insurance services, Digital security
Affected stakeholders:
Consumers, Business
Harm types:
Economic/Property, Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation, Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of generative AI tools by scammers to impersonate tax officials, making scams more efficient and prevalent. This involves the use of AI systems in the malicious use category, directly leading to harm through tax fraud and identity theft. The harm includes theft of sensitive information and financial loss, which fits the definition of harm to persons and violation of rights. Hence, this is an AI Incident due to the realized harm caused by AI-enabled fraudulent activities.[AI generated]


Barclays CEO Warns of AI Model Mythos as Major Threat to Global Banking Security

2026-04-20
United States

Barclays CEO C.S. Venkatakrishnan warned that Anthropic's AI model Mythos poses a significant cybersecurity risk to the global banking sector due to its advanced programming abilities, including identifying and exploiting vulnerabilities. The warning, issued at a Washington financial summit, has raised serious concerns among regulators and financial institutions.[AI generated]

AI principles:
Robustness & digital security
Industries:
Financial and insurance services
Affected stakeholders:
Business
Harm types:
Economic/Property, Public interest
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions advanced AI systems with capabilities to find and exploit cybersecurity weaknesses, which could plausibly lead to harm such as disruption of critical financial infrastructure or harm to financial security. However, the article only reports warnings and concerns about potential risks, not an actual realized incident. Therefore, this qualifies as an AI Hazard, reflecting a credible future risk stemming from AI system capabilities and their potential misuse or exploitation.[AI generated]