
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have in fact declined as a share of all AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


Chart: AI incidents and hazards as a percentage of total AI events
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 14,557 incidents & hazards

AI-Generated Celebrity Likeness Used in Deceptive Real Estate Ads in Taiwan

2026-04-25
Chinese Taipei

A real estate advertisement in Taiwan used AI-generated images closely resembling actor Takeshi Kaneshiro without his consent, misleading consumers and violating his image rights. Kaneshiro's agency condemned the unauthorized use, highlighting ethical concerns and calling for stronger regulations to prevent AI misuse and protect personal rights.[AI generated]

AI principles:
Respect of human rights; Transparency & explainability
Industries:
Real estate
Affected stakeholders:
Consumers; Other
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used to create images resembling a real person without consent, which is a misuse of AI technology leading to a violation of the actor's rights and deceptive advertising. This harm is realized as the actor's likeness is exploited without permission, which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting intellectual property and personal rights. Therefore, this event qualifies as an AI Incident.[AI generated]


Unauthorized Access and Global Security Concerns Over Anthropic's Claude Mythos AI Model

2026-04-25
United States

Anthropic's powerful Claude Mythos AI model, designed to identify software vulnerabilities, has raised global cybersecurity concerns. Governments and tech firms seek early access to mitigate risks before public release. Despite restricted access, unauthorized users breached the preview system, highlighting potential security and intellectual property risks.[AI generated]

AI principles:
Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Business; Government
Harm types:
Economic/Property; Public interest
Severity:
AI hazard
Business function:
ICT management and information security
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Mythos) whose development and imminent release could plausibly lead to harm by exposing vulnerabilities in critical infrastructure. The discussions and interest in early access are preventive measures addressing this potential risk. Since no harm has yet occurred, but the AI system's involvement could plausibly lead to an AI Incident, this qualifies as an AI Hazard.[AI generated]


Turkish Bar Associations Oppose AI-Based Legal Defense Platform

2026-04-25
Türkiye

Turkey's Justice Minister Akın Gürlek proposed an AI-supported platform to assist citizens in legal processes without lawyers. In response, 78 bar associations issued a joint statement warning that such AI use could undermine the right to defense and weaken the legal profession, emphasizing the risks to justice and constitutional rights.[AI generated]

AI principles:
Respect of human rights; Democracy & human autonomy
Industries:
Government, security, and defence
Affected stakeholders:
Workers; General public
Harm types:
Human or fundamental rights; Public interest
Severity:
AI hazard
Business function:
Citizen/customer service
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbots; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

An AI system is explicitly described as planned for use in legal processes to generate legal documents and guide users. The event involves the use of AI in a sensitive domain affecting fundamental rights (right to legal defense). Although no actual harm has yet occurred, the bar associations' objections emphasize credible risks of harm to legal rights and justice, which fits the definition of an AI Hazard. The event does not describe a realized harm or incident, nor is it merely complementary information or unrelated news. Hence, it is best classified as an AI Hazard due to the plausible future harm from the AI system's deployment in legal proceedings.[AI generated]


AI-Enabled Autonomous Kamikaze Drones Demonstrated in Turkey

2026-04-24
Türkiye

Baykar showcased its new AI-powered kamikaze drones, K2 and Sivrisinek, in Keşan, Turkey. The demonstration highlighted autonomous swarm navigation, target detection, and attack capabilities. These AI-enabled weapon systems, set to debut at SAHA 2026, pose potential risks of harm if deployed in conflict scenarios.[AI generated]

AI principles:
Safety; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury)
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems integrated into drones with autonomous navigation and attack capabilities. Although no harm has occurred during the demonstration, the use of AI in armed drones with automatic target detection and attack functions plausibly could lead to serious harms such as injury or violations of rights in future military operations. The event is about the development and use of AI systems with offensive military applications, which is a credible source of future AI-related harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.[AI generated]


AI-Generated Fake Wolf Photo Disrupts Emergency Response in Daejeon

2026-04-24
Korea

A man in Daejeon, South Korea, used AI to create and distribute a fake photo of an escaped zoo wolf, misleading authorities and the public. The image caused emergency services to alter search operations, issue disaster alerts, and delayed the wolf's capture, highlighting the real-world harm from AI-generated misinformation.[AI generated]

AI principles:
Safety; Transparency & explainability
Industries:
Government, security, and defence
Affected stakeholders:
General public; Government
Harm types:
Public interest; Economic/Property; Psychological
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used to create a manipulated image that was disseminated, leading to significant disruption of emergency management and public safety operations. The harm includes interference with critical infrastructure management (emergency response and disaster alert systems) and potential risk to public safety. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm and disruption.[AI generated]


AI-Generated Videos Exploit Elderly and Cause Public Panic in China

2026-04-24
China

AI-generated videos on Chinese platforms have targeted elderly users with emotionally manipulative content, leading to financial scams and psychological harm. Separately, an AI-created fake video of a building collapse caused widespread panic and misinformation. Both incidents highlight the misuse of AI for deception and harm to vulnerable groups and communities.[AI generated]

AI principles:
Safety; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers; General public
Harm types:
Psychological; Economic/Property; Public interest
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems generating realistic videos and emotional content that mislead elderly viewers, causing them to spend money on products under false beliefs. This is a direct harm to the health and well-being of a vulnerable group through deception and financial exploitation. The AI system's use is central to the harm, as it creates convincing fake personas and messages that manipulate emotions. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm (financial and emotional) to a group of people (elderly individuals).[AI generated]


AI System at Shinhan Investment & Securities Blocks Financial Fraud

2026-04-24
Korea

Shinhan Investment & Securities in South Korea used AI-driven anomaly detection and plans to deploy AI call pattern analysis to prevent financial fraud. Over the past year, the system detected and blocked an average of 1,800 suspicious transactions per quarter, preventing approximately 230 million KRW in potential losses each quarter.[AI generated]

Industries:
Financial and insurance services
Severity:
AI incident
Business function:
Compliance and justice
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The AI system is explicitly described as being used to analyze call patterns to detect financial fraud, which constitutes direct AI system involvement in use. The system's operation has directly prevented financial harm (loss of money) to customers, which qualifies as harm to property. Since the AI system's use has directly influenced the prevention of harm, this qualifies as an AI Incident under the framework, as the AI system's role is pivotal in preventing realized harm from financial fraud attempts.[AI generated]


Geely's Caocao Plans Global Deployment of AI-Powered Robotaxis

2026-04-24
China

Caocao Inc, Geely's ride-hailing arm, announced plans to deploy thousands of fully autonomous robotaxis, the Eva Cab, globally starting in 2027. Initial rollouts will occur in Abu Dhabi, Hong Kong, and several Chinese cities, with large-scale expansion to 100,000 vehicles by 2030. No incidents reported yet.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (autonomous robotaxis) and their planned deployment, which could plausibly lead to AI incidents such as accidents or disruptions in the future. However, since no harm or malfunction has been reported yet, and the article only discusses future plans, it fits the definition of an AI Hazard rather than an AI Incident. It is not complementary information because it does not provide updates or responses to existing incidents, nor is it unrelated as it clearly involves AI systems with potential impacts.[AI generated]


Turkish Intelligence Academy Warns of AI-Driven Cybersecurity Risks

2026-04-24
Türkiye

The Turkish National Intelligence Academy (MİA) released a report warning that AI is making cyber threats more complex, posing risks to national security, critical infrastructure, and public trust. The report urges a hybrid defense model and comprehensive strategies to address potential AI-enabled cyberattacks and misinformation in Turkey.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Public interest; Reputational
Severity:
AI hazard
AI system task:
Content generation; Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The report discusses the plausible risks and strategic challenges posed by AI in cybersecurity, including new attack vectors and vulnerabilities introduced by AI systems, as well as the need for coordinated governance and capacity building. It does not describe any concrete event where an AI system directly or indirectly caused harm or disruption. Therefore, it fits the definition of an AI Hazard, as it outlines credible potential risks and the need for preparedness, but no actual incident of harm is reported. It is not Complementary Information because it is not updating or following up on a previously reported incident, nor is it unrelated since it clearly involves AI systems and their security implications.[AI generated]


BSE Warns of Repeated Deepfake Scams Targeting Investors

2026-04-24
India

India's BSE Limited has warned investors about a fourth deepfake video in four months, falsely depicting CEO Sundararaman Ramamurthy giving investment advice. The AI-generated videos mislead viewers with false promises of high returns, urging them to join private groups, posing financial risks and eroding trust.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Financial and insurance services
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property; Reputational
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The deepfake video is an AI-generated manipulated media that falsely represents a trusted figure to mislead and scam investors. The use of deepfake technology is explicitly mentioned, indicating AI system involvement. The harm is realized as investors are targeted with fraudulent investment tips, which can cause financial harm and breach trust. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the malicious use of an AI system (deepfake generation).[AI generated]


European Regulators Warn of Accelerated Cyber Threats from AI in Financial Sector

2026-04-24

The European Securities and Markets Authority (ESMA) warns that rapidly advancing AI models, such as Anthropic's Mythos, are increasing the speed and risk of cyberattacks on financial institutions. Regulators are enhancing oversight and urging financial entities to strengthen cybersecurity defenses amid rising AI-driven threats.[AI generated]

AI principles:
Robustness & digital security
Industries:
Financial and insurance services; Digital security
Affected stakeholders:
Business
Harm types:
Economic/Property; Reputational
Severity:
AI hazard
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI models increasing the potential speed of cyberattacks, which could plausibly lead to harm such as disruption of financial infrastructure or economic harm. However, it does not report any actual cyberattack or harm caused by AI systems at this time. The warnings and regulatory monitoring indicate a credible risk but no realized incident. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm from AI-related cyber threats rather than an AI Incident or Complementary Information about a past event.[AI generated]


Metropolitan Police Use Palantir AI to Uncover Officer Misconduct

2026-04-24
United Kingdom

The Metropolitan Police in London used Palantir's AI tool to analyze internal data, uncovering widespread misconduct, corruption, and criminality among hundreds of officers. The AI-led investigation resulted in arrests and disciplinary actions for offenses including fraud, sexual assault, and abuse of authority, prompting consideration of expanded AI use in future policing.[AI generated]

Industries:
Government, security, and defence
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (Palantir's software) deployed by the Met Police to detect rule-breaking and criminal behavior among officers. The AI's outputs directly led to investigations and arrests, indicating a causal link between the AI system's use and realized harm, including violations of law and public trust. This fits the definition of an AI Incident, as the AI system's use has directly led to harm in the form of legal violations and damage to institutional integrity.[AI generated]


AI-Generated Microdrama Uses Real Faces Without Consent in China

2026-04-24
China

An AI-generated Chinese microdrama, "The Peach Blossom Hairpin," used the likenesses of real individuals, including model Christine Li, without their consent. The show, hosted on ByteDance's Hongguo app, caused reputational harm and distress, prompting legal action and raising concerns over AI misuse and personal rights violations in China.[AI generated]

AI principles:
Privacy & data governance; Respect of human rights
Industries:
Media, social platforms, and marketing; Arts, entertainment, and recreation
Affected stakeholders:
Workers
Harm types:
Reputational; Psychological; Human or fundamental rights
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI system was used to generate digital twins of real individuals without their consent, directly leading to reputational harm and potential professional damage. The unauthorized use of their likenesses in a public AI-generated drama constitutes a violation of their rights. The harm is realized, not just potential, as the individuals have experienced distress and fear, and the platform had to remove the content after public outcry. Hence, this is an AI Incident due to direct harm caused by the AI system's use.[AI generated]


Romania Deploys AI-Powered Drone Interceptors Amid Ukraine Conflict

2026-04-24
Romania

Romania is deploying and testing the AI-powered Merops drone interceptor system, developed by Project Eagle, to counter escalating drone threats from the Ukraine war. The autonomous system, capable of detecting and engaging drones, is being rapidly integrated into Romania's air defenses following repeated Russian drone incursions near its border.[AI generated]

AI principles:
Accountability; Respect of human rights
Industries:
Government, security, and defence
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as AI-powered autonomous drone interceptors. The system is being tested and soon deployed in a conflict-adjacent area, implying potential future use in defense scenarios where harm could plausibly occur. However, the article does not report any realized harm, injury, or violation caused by the AI system. The partial test success and the system's intended use to counter threats indicate a credible potential for future harm or incident if the system malfunctions or is used in conflict. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Tesla Begins Production of Autonomous Cybercab Robotaxi

2026-04-24
United States

Tesla, led by Elon Musk, has started production of its fully autonomous robotaxi, the Cybercab, in the United States. Videos show the vehicle operating without a driver, steering wheel, or pedals. While no incidents have occurred, the deployment of this AI-driven vehicle raises potential future safety concerns.[AI generated]

AI principles:
Safety; Accountability
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers; General public
Harm types:
Physical (injury); Physical (death)
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The Cybercab is an AI system (an autonomous vehicle) whose production has begun, but the article does not report any realized harm or incidents. The mention of safety concerns and regulatory scrutiny implies potential future risks, making it a plausible AI Hazard. Since no actual harm or incident is described, it cannot be classified as an AI Incident. It is not merely complementary information because the focus is on the start of production and the potential for future deployment risks, not on responses or updates to past incidents. It is not unrelated because it clearly involves an AI system with potential safety implications.[AI generated]


Anthropic's Claude Source Code Leak Reveals Ambitious AI Agent Capabilities

2026-04-24
United States

Anthropic accidentally leaked 512,000 lines of Claude's source code, exposing a new platform, Conway, designed for persistent, autonomous AI agents capable of running background tasks across devices. The leak highlights potential future risks, such as security breaches or loss of user control, due to the system's advanced, always-on capabilities.[AI generated]

AI principles:
Robustness & digital security; Accountability
Industries:
Digital security; IT infrastructure and hosting
Affected stakeholders:
Business; General public
Harm types:
Reputational; Economic/Property
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (large language models like Claude, GPT-5.5, and DeepSeek's models) and discusses their development, use, and strategic withholding of capabilities. While no direct harm or incident is reported, the narrative centers on the plausible risks and competitive pressures that could lead to harm if these powerful AI capabilities were exposed or misused. The discussion of safety concerns, capability overhang, and strategic restraint indicates a credible risk of future AI incidents. There is no indication of actual injury, rights violations, or other harms having occurred yet, so it does not meet the criteria for an AI Incident. It is not merely complementary information because the main focus is on the plausible risks and strategic behaviors that could lead to harm, not on responses or ecosystem updates. Hence, the classification as AI Hazard is appropriate.[AI generated]


Robot Malfunction at Chinese University Event Leads to Unintended Physical Contact

2026-04-24
China

During a university sports event in Xi'an, China, a humanoid robot malfunctioned due to signal interference, unexpectedly hugging a female student during a dance performance. The incident, attributed to program errors from drone signal interference, raised safety concerns about AI systems in public settings, though no injuries occurred.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Education and training; Robots, sensors, and IT hardware
Affected stakeholders:
Women
Severity:
AI incident
AI system task:
Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (a humanoid robot with programmed dance movements) whose malfunction (due to signal interference) caused it to perform an unintended action (hugging a student). This unexpected physical contact poses a risk of injury, which is a form of harm to a person. Although no injury occurred this time, the malfunction directly led to a safety incident, indicating a realized AI Incident involving harm or risk of harm. Therefore, this qualifies as an AI Incident rather than merely a hazard or complementary information.[AI generated]


White House Accuses China of Industrial-Scale Theft of U.S. AI Models

2026-04-23
United States

The U.S. government has accused Chinese entities of conducting industrial-scale campaigns to steal and replicate proprietary American AI models using techniques like distillation and jailbreaking. The White House warns this ongoing activity threatens U.S. intellectual property and innovation, prompting plans for defensive and punitive measures.[AI generated]

AI principles:
Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems and their development, specifically the unauthorized extraction and copying of AI models, which is a breach of intellectual property rights. This harm is realized as it involves systematic campaigns to steal AI technology, directly violating legal protections. Therefore, it qualifies as an AI Incident due to the violation of intellectual property rights caused by the AI system's development and use.[AI generated]


HD Hyundai Expands AI-Powered Unmanned Naval Vessel Collaboration in the US

2026-04-23
United States

HD Hyundai, in partnership with US defense AI firm Anduril and the American Bureau of Shipping (ABS), signed multiple MOUs to jointly develop AI-driven unmanned surface and underwater vessels. The collaboration includes establishing certification frameworks for autonomous maritime systems, highlighting future risks associated with military AI technologies. Activities are centered in the United States.[AI generated]

AI principles:
Accountability; Safety
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Human or fundamental rights; Public interest
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI involvement in the development of autonomous unmanned submarines, which are military systems with inherent risks. Although no incident or harm has occurred yet, the development and planned deployment of such AI-enabled autonomous weapons systems plausibly could lead to harms such as injury, disruption, or violations of human rights. The event is about the collaboration and development phase, not about an actual incident or realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Indian Government Urges Banks to Prepare for AI-Driven Cyber Threats

2026-04-23
India

Indian Finance Minister Nirmala Sitharaman and IT Minister Ashwini Vaishnaw convened with banks and regulators to address potential cybersecurity risks from advanced AI models like Claude Mythos. The government emphasized vigilance, real-time threat intelligence sharing, and stronger cybersecurity to prevent possible AI-enabled attacks on financial systems. No actual incident has occurred.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Financial and insurance services; Digital security
Severity:
AI hazard
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article involves AI systems explicitly, particularly the Anthropic Mythos AI model capable of identifying cybersecurity vulnerabilities. However, the event is about assessing and preparing for potential cybersecurity threats that could arise from misuse of such AI systems. There is no indication that any AI-driven harm or breach has occurred so far in the Indian banking sector. The focus is on plausible future harm and risk mitigation. Therefore, this qualifies as an AI Hazard, as the AI system's capabilities could plausibly lead to cybersecurity incidents if misused, but no incident has yet materialized.[AI generated]