
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents are attracting growing media attention, they have declined as a share of all AI news (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: about 14,530 incidents & hazards
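The chart's metric, incidents and hazards as a percentage of all AI events, starts from raw article counts: articles covering the same event are first collapsed into one record, then flagged events are expressed as a share of the total. A minimal sketch with made-up data; the tuple layout and event IDs are illustrative assumptions, not AIM's actual schema:

```python
# Illustrative article records: (event_id, covers_incident_or_hazard).
# Several articles may cover the same underlying event.
articles = [
    ("evt-1", True), ("evt-1", True),   # two articles, one incident
    ("evt-2", True),
    ("evt-3", False), ("evt-4", False), # general AI news
]

# Collapse articles into unique events, keeping the flag if any article set it
events = {}
for event_id, flagged in articles:
    events[event_id] = events.get(event_id, False) or flagged

total = len(events)
n_flagged = sum(events.values())
share = 100 * n_flagged / total
print(f"{n_flagged} of {total} AI events ({share:.0f}%) are incidents or hazards")
# → 2 of 4 AI events (50%) are incidents or hazards
```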

White House Accuses China of Industrial-Scale Theft of U.S. AI Models

2026-04-23
United States

The U.S. government has accused Chinese entities of conducting industrial-scale campaigns to steal and replicate proprietary American AI models using techniques like distillation and jailbreaking. The White House warns this ongoing activity threatens U.S. intellectual property and innovation, prompting plans for defensive and punitive measures.[AI generated]

AI principles:
Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems and their development, specifically the unauthorized extraction and copying of AI models, which is a breach of intellectual property rights. This harm is realized as it involves systematic campaigns to steal AI technology, directly violating legal protections. Therefore, it qualifies as an AI Incident due to the violation of intellectual property rights caused by the AI system's development and use.[AI generated]
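Each rationale in these entries applies the same triage rule: check AI involvement first, then distinguish realized harm (incident) from plausible future harm (hazard). A minimal sketch of that decision logic; the class, field, and label names are illustrative assumptions, not AIM's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    involves_ai: bool      # is an AI system's development or use a factor?
    harm_realized: bool    # has harm or a rights/law violation already occurred?
    harm_plausible: bool   # could the event plausibly lead to such harm?

def classify(event: Event) -> str:
    """Hedged sketch of the incident/hazard distinction used in the rationales."""
    if not event.involves_ai:
        return "Unrelated"
    if event.harm_realized:
        return "AI incident"   # e.g. realized IP theft via model distillation
    if event.harm_plausible:
        return "AI hazard"     # e.g. planned autonomous weapons deployment
    return "Complementary information"

# The model-theft case above: AI involved and harm already realized
print(classify(Event(True, True, False)))  # → AI incident
```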


HD Hyundai Expands AI-Powered Unmanned Naval Vessel Collaboration in the US

2026-04-23
United States

HD Hyundai, in partnership with US defense AI firm Anduril and the American Bureau of Shipping (ABS), signed multiple MOUs to jointly develop AI-driven unmanned surface and underwater vessels. The collaboration includes establishing certification frameworks for autonomous maritime systems, highlighting future risks associated with military AI technologies. Activities are centered in the United States.[AI generated]

AI principles:
Accountability; Safety
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Human or fundamental rights; Public interest
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI involvement in the development of autonomous unmanned submarines, which are military systems with inherent risks. Although no incident or harm has occurred yet, the development and planned deployment of such AI-enabled autonomous weapons systems plausibly could lead to harms such as injury, disruption, or violations of human rights. The event is about the collaboration and development phase, not about an actual incident or realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Indian Government Urges Banks to Prepare for AI-Driven Cyber Threats

2026-04-23
India

Indian Finance Minister Nirmala Sitharaman and IT Minister Ashwini Vaishnaw convened with banks and regulators to address potential cybersecurity risks from advanced AI models like Claude Mythos. The government emphasized vigilance, real-time threat intelligence sharing, and stronger cybersecurity to prevent possible AI-enabled attacks on financial systems. No actual incident has occurred.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Financial and insurance services; Digital security
Severity:
AI hazard
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article involves AI systems explicitly, particularly the Anthropic Mythos AI model capable of identifying cybersecurity vulnerabilities. However, the event is about assessing and preparing for potential cybersecurity threats that could arise from misuse of such AI systems. There is no indication that any AI-driven harm or breach has occurred so far in the Indian banking sector. The focus is on plausible future harm and risk mitigation. Therefore, this qualifies as an AI Hazard, as the AI system's capabilities could plausibly lead to cybersecurity incidents if misused, but no incident has yet materialized.[AI generated]


Bernstein Warns India of AI-Driven Job Risks and Economic Challenges

2026-04-23
India

Global brokerage Bernstein, in an open letter to Prime Minister Modi, warned that India risks economic underperformance and job losses due to insufficient AI innovation and preparedness. The letter highlights concerns that generative AI could disrupt employment, urging urgent reforms to prevent India from becoming merely a consumer of AI technology.[AI generated]

AI principles:
Human wellbeing
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI hazard
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI and its potential impact on jobs and economic growth, indicating plausible future harm from AI-driven automation and lack of AI preparedness. However, it does not describe any actual incident or harm caused by AI systems, nor any malfunction or misuse. The concerns are forward-looking and cautionary, fitting the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not updating or responding to a prior AI Incident or Hazard, nor is it unrelated since AI is a central theme in the risk assessment presented.[AI generated]


AI-Generated Short Dramas Cause Actor Unemployment in China

2026-04-23
China

Chinese actor Zhang Xiaolei, known for roles in short dramas, lost his job due to the rapid adoption of AI-generated actors and content in the entertainment industry. The shift led to a drastic reduction in live-action productions, forcing Zhang and others to leave acting and return to farming for livelihood.[AI generated]

AI principles:
Human wellbeing; Respect of human rights
Industries:
Arts, entertainment, and recreation
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI technology capable of directly generating male and female actors in short dramas has caused a significant reduction in work opportunities for the actor Zhang Xiaolei, leaving him without work and forcing him to return to farming. This is direct harm to his employment and income, which falls under harm to a person or group of people. The AI system's use in content generation is the direct cause of this harm, qualifying this as an AI Incident.[AI generated]


AI-Generated Fake Bank Cheque Sparks Fraud Concerns in India

2026-04-23
India

A viral social media post showed a hyper-realistic UCO Bank cheque created using ChatGPT Image 2.0, raising widespread alarm about the potential for AI-generated images to facilitate financial fraud. While no actual harm occurred, the incident highlights growing risks of AI misuse in creating convincing forged documents.[AI generated]

AI principles:
Robustness & digital security; Accountability
Industries:
Financial and insurance services
Affected stakeholders:
Business; Consumers
Harm types:
Economic/Property; Reputational
Severity:
AI hazard
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system generating a fake bank cheque image with high fidelity, which could be used to deceive individuals or facilitate fraud. While current banking systems may detect such fakes, the risk remains significant in contexts lacking robust verification, such as private transactions or social engineering scams. No direct harm has been reported yet, so it is not an AI Incident. The focus is on the potential for harm due to the AI system's capabilities and the demonstrated ability to bypass safety protocols, fitting the definition of an AI Hazard.[AI generated]


Brazilian Regulator Fines Meta for Anti-Competitive Use of AI Chatbots on WhatsApp

2026-04-23
Brazil

Brazil's competition authority, Cade, upheld a daily fine against Meta for allegedly abusing its dominant position by favoring its own AI chatbots on WhatsApp and excluding competitors. The investigation followed complaints from rival chatbot companies, Luzia and Zapia, citing harm to market competition.[AI generated]

AI principles:
Fairness; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Citizen/customer service
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves the use of AI chatbots on WhatsApp, an AI system, and concerns the development and use of these AI systems by Meta. The harm is linked to anti-competitive practices (abuse of dominant position) that affect competitors and market fairness, which is a violation of legal frameworks protecting competition and rights. The fine and investigation show that harm is occurring or has occurred, not just a potential risk. Hence, this qualifies as an AI Incident under the definition of violations of applicable law and harm to communities (market competition).[AI generated]


Researchers Warn of Risks from Evolvable Artificial Intelligence

2026-04-23
Hungary

Researchers from Hungary and Belgium warn that evolvable AI systems, capable of autonomous evolution and self-improvement, could soon emerge. These systems pose unique risks, such as loss of human control and resource competition, and require new regulatory approaches to mitigate potential future harms.[AI generated]

AI principles:
Robustness & digital security; Democracy & human autonomy
Industries:
Digital security
Affected stakeholders:
General public
Harm types:
Public interest; Environmental
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

The article centers on the plausible future risks of evolvable AI systems, describing how their development and use could lead to significant harms if not properly controlled. Since no realized harm or incident is reported, but credible risks are detailed, this qualifies as an AI Hazard. The discussion of regulatory recommendations further supports this classification as a hazard and complementary governance information rather than an incident or unrelated news.[AI generated]


Experts Urge AI-Driven Cybersecurity and Device Digital IDs in India

2026-04-23
India

At the Cyber Security India Expo in Mumbai, experts warned of rising AI-enabled cyber threats and advocated for digital identities for all devices and stronger AI-led cybersecurity systems to protect citizens and critical infrastructure. They emphasized the urgent need for proactive measures to counter evolving cyber risks.[AI generated]

AI principles:
Robustness & digital security; Privacy & data governance
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
General public; Government
Harm types:
Economic/Property; Public interest; Human or fundamental rights
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The article centers on the potential risks and necessary responses related to AI in cybersecurity, emphasizing plausible future harms from AI-enabled cyber attacks and the defensive role of AI. No actual harm or incident is reported, only expert calls for measures to mitigate risks. Therefore, this qualifies as an AI Hazard, as it describes circumstances where AI systems could plausibly lead to harm (cyber attacks on citizens and critical infrastructure) if not properly managed.[AI generated]


AI-Driven Attacks Fuel Major Crypto Thefts in 2026

2026-04-23
Korea

In 2026, over $600 million was stolen in crypto hacks, with AI systems enabling large-scale attacks. North Korean-linked groups used AI for social engineering, deepfakes, and automated vulnerability scanning, leading to major breaches at Kelp DAO, Drift Protocol, and Zerion. AI's role has amplified the scale and sophistication of these incidents.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security; Financial and insurance services
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property; Reputational
Severity:
AI incident
Business function:
ICT management and information security
AI system task:
Content generation; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI being used in social engineering attacks that resulted in theft, AI-powered deepfakes and voice manipulation tools sold for bypassing security, and autonomous AI agents conducting attacks. These uses of AI have directly caused significant financial harm, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, and the AI systems' development and use are pivotal in enabling these attacks. Therefore, this event is classified as an AI Incident.[AI generated]


Brazilian Political Parties Seek Suspension of AI-Generated Disinformation Profile

2026-04-23
Brazil

Brazilian parties PT, PV, and PCdoB filed a complaint with the Superior Electoral Court to suspend social media profiles of "Dona Maria," an AI-generated persona used to spread disinformation and attacks against President Lula and left-wing figures. The realistic AI-created character misled users, fueling political manipulation and violating electoral laws.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Government
Harm types:
Reputational; Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of an AI-created character to publish content that attacks political figures and spreads disinformation, which constitutes harm to communities and a violation of political rights. The AI system's use in generating realistic personas for misleading political propaganda directly leads to these harms. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in spreading disinformation and political manipulation.[AI generated]


UAE Plans Massive Government Automation with Agentic AI

2026-04-23
United Arab Emirates

The UAE government has announced plans to automate 50% of its federal operations using Agentic AI systems within two years. These autonomous AI agents will analyze data, make decisions, and execute tasks independently, raising potential future risks related to large-scale AI-driven governance. No actual harm has yet occurred.[AI generated]

AI principles:
Accountability; Democracy & human autonomy
Industries:
Government, security, and defence
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The event involves the planned use of agentic AI systems in government operations, which qualifies as AI system involvement. Since the AI systems will have autonomous decision-making and execution capabilities, their deployment at scale could plausibly lead to harms such as mismanagement, rights violations, or operational disruptions if failures or misuse occur. However, as the article only describes future plans and no actual harm or malfunction has been reported, this constitutes an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the planned deployment and its potential implications, not on responses or updates to past incidents.[AI generated]


Palantir AI Systems Implicated in Human Rights Violations and Employee Dissent

2026-04-23
United States

Palantir's AI-powered software, used by US agencies like DHS and ICE, has enabled surveillance, deportations, and military targeting, leading to human rights concerns and harm to communities. Employees have raised ethical objections, highlighting the company's role in controversial government actions and the broader militarization of AI.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Event/anomaly detection; Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

Palantir's software is an AI system used for data aggregation and analysis to support immigration enforcement. Its use by DHS has directly or indirectly led to harm, including the violent killing of a protester and broader concerns about civil liberties violations. The article highlights the ethical dilemma faced by employees due to the system's role in these harms. The AI system's involvement in enabling government actions that infringe on human rights and cause harm to communities meets the criteria for an AI Incident.[AI generated]


Pentagon Awards $24M Contract for AI-Enabled Humanoid Military Robots

2026-04-23
United States

The Pentagon awarded Foundation Future Industries, backed by Eric Trump, a $24 million contract to develop and test AI-powered humanoid robots for military use. The robots, designed for battlefield deployment, raise concerns about future risks associated with autonomous AI systems in warfare. No harm has yet occurred.[AI generated]

AI principles:
Accountability; Safety
Industries:
Government, security, and defence
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly, as the humanoid robots are described with autonomous capabilities and advanced mobility, indicating AI-driven operation. The use is in a military context with potential for direct physical harm and disruption, fulfilling the criteria for plausible future harm. Since no actual harm or incident has been reported yet, but the technology's deployment could plausibly lead to injury or other harms, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the development and strategic implications rather than reporting an actual harmful event or incident.[AI generated]


Mercor Faces Lawsuits After AI Training Data Breach Exposes Sensitive Worker Information

2026-04-23
United States

Mercor, a $10 billion AI startup supplying training data to firms like OpenAI, Anthropic, and Meta, faces at least seven class-action lawsuits after a third-party data breach exposed sensitive contractor information, including biometrics and computer screenshots. Plaintiffs allege improper data collection, monitoring, and sharing practices in violation of privacy and labor laws.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
IT infrastructure and hosting; Digital security
Affected stakeholders:
Workers
Harm types:
Human or fundamental rights
Severity:
AI incident
Business function:
Other
AI system task:
Other
Why's our monitor labelling this an incident or hazard?

Mercor's AI training operations involve collecting and processing extensive personal and proprietary data from contractors, which is integral to AI system development. The data breach and alleged unauthorized sharing of sensitive information have directly harmed individuals' privacy and potentially violated intellectual property rights. These harms fall under violations of human rights and legal obligations, meeting the criteria for an AI Incident. The involvement of AI systems in data collection, training, and monitoring (e.g., AI proctoring, screenshot capturing software) and the resulting lawsuits confirm direct harm linked to AI system use and malfunction (data breach).[AI generated]


Vorwerk Investigated for Disabling AI Services in Neato Robot Vacuums

2026-04-23
Italy

Italian antitrust authorities have launched an investigation into Vorwerk Management and Vorwerk Italia for allegedly disabling smart services in Neato robot vacuums. This action rendered the AI-powered devices largely unusable, significantly harming consumers by reducing product functionality and value. The probe follows multiple consumer complaints between November 2025 and April 2026.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Consumer products
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The robot vacuum cleaner is an AI system due to its autonomous smart functionalities. The company's action of disabling these AI-powered services has directly led to harm to consumers by making the product unusable, which fits the definition of an AI Incident involving harm to property and consumer rights. Therefore, this event qualifies as an AI Incident.[AI generated]


Ukraine Plans AI-Driven Autonomous Combat Systems for Battlefield Use

2026-04-23
Ukraine

Ukrainian officials, led by Kyrylo Budanov, announced plans to fully integrate artificial intelligence into autonomous combat systems capable of independently identifying targets and maneuvering. This technological advancement, intended for use on the battlefield, raises credible risks of harm due to the deployment of AI in warfare.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury); Human or fundamental rights
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the development and planned use of AI-enabled autonomous combat systems capable of independent target identification and maneuvering. Although these systems are not yet deployed or causing harm, their intended use in active conflict zones implies a credible risk of causing injury or other significant harms. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to persons or communities. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's potential impact in warfare.[AI generated]


Taiwan Warns Against Use of Gaode Map App Over AI-Driven Data Security Risks

2026-04-22
Chinese Taipei

Taiwanese authorities have raised national security and privacy concerns over China's Gaode Map app, which uses AI to infer traffic light timings from user data. Officials warn that sensitive location and movement data could be transmitted to Chinese servers and accessed by the Chinese government, urging a ban in government agencies and cautioning the public.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Mobility and autonomous vehicles; Digital security
Affected stakeholders:
General public; Government
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
AI system task:
Forecasting/prediction
Why's our monitor labelling this an incident or hazard?

The mapping app uses AI-related data analytics to infer traffic light timings and collects location and movement data, which is sent to Chinese servers accessible by the Chinese government. This raises plausible risks of privacy violations and national security concerns, which fall under harm to individuals and communities. The cybersecurity agency's ongoing assessment and public warnings indicate potential future harm rather than confirmed harm. Hence, the event is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the risk and ongoing assessment, not on updates to a past incident. It is not Unrelated because the app involves AI systems and data analytics with security implications.[AI generated]


Chilean Legislative Proposal Sparks Copyright Concerns Over AI Data Use

2026-04-22
Chile

The Chilean government proposed a law allowing AI systems to use large volumes of copyrighted content without authorization or compensation for data mining and training. Media organizations and creators warn this could undermine intellectual property rights and threaten journalism’s economic base, and urge that the provision be withdrawn or revised.[AI generated]

AI principles:
Accountability; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business
Harm types:
Economic/Property; Public interest
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The event involves the development and use of AI systems through data mining of copyrighted works without authorization, which is a direct factor in enabling AI functionality. However, the article focuses on the legal change and expert concerns about its broad and ambiguous scope, which could plausibly lead to violations of intellectual property rights in the future. There is no indication that harm has already occurred, only that the law could lead to such harm if not properly balanced. Therefore, this constitutes an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Anthropic's Claude Desktop Secretly Installs Browser Backdoor on macOS

2026-04-22
United States

Anthropic's Claude Desktop AI application for macOS was found to secretly install configuration files that pre-authorize its browser extensions to access and control browser sessions, even for browsers not yet installed. This was done without user consent, creating significant privacy and security risks, and violating user rights.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Consumer services
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots; Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Desktop) whose use leads to unauthorized access and control over user browsers, which is a violation of privacy and security rights. The installation of a backdoor without user consent is a direct breach of legal obligations protecting fundamental rights. The AI system's role is pivotal as it is the software performing this unauthorized action. The harm is realized (privacy violation and security risk), not just potential, and the event involves misuse or non-compliance in the AI system's deployment. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]