
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents are attracting growing media attention, they have declined as a share of all AI-related news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


Chart: AI incidents and hazards as a percentage of total AI events.
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 14,403 incidents & hazards

AI Cybersecurity Models Raise Global Security Concerns

2026-04-14
United States

OpenAI and Anthropic have released advanced AI models (GPT-5.4-Cyber and Claude Mythos) for cybersecurity, capable of detecting software vulnerabilities. While intended for defensive use, their potential misuse has alarmed governments and financial institutions, prompting high-level meetings and warnings about risks to critical infrastructure. No actual harm has occurred yet.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security; Financial and insurance services
Affected stakeholders:
Government; Business
Harm types:
Public interest; Economic/Property
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
No-action autonomy (human support)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the Mythos model) whose development and use are under scrutiny due to potential cybersecurity risks. While no direct harm has been reported, the article highlights credible concerns from government and financial authorities about possible future harms, including risks to cybersecurity and supply chain security. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harms, but such harms have not yet materialized. The article focuses on the potential risks and ongoing discussions rather than actual incidents or realized harm.[AI generated]


AI Chatbots Found to Dispense Inaccurate and Potentially Harmful Medical Advice

2026-04-14
United States

Multiple studies led by US and Canadian researchers found that popular AI chatbots, including ChatGPT, Gemini, Grok, and others, frequently provide inaccurate or incomplete medical information. Around half of their responses to health-related queries were problematic, raising concerns about potential harm to users who rely on these AI systems for medical advice.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Physical (injury)
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots; Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (chatbots powered by large language models) whose use has directly led to problematic medical advice that could cause injury or harm to users' health, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, as the chatbots are widely used by adults for health queries, and the study documents a high rate of problematic responses that could mislead users. The event is not merely a potential risk (hazard) or a response/update (complementary information), but a clear case where AI use has caused or is causing harm, justifying classification as an AI Incident.[AI generated]


South Korea Holds Emergency Meetings Over AI Cybersecurity Threats from Anthropic and OpenAI

2026-04-14
Korea

The South Korean government convened emergency meetings with major tech firms and cybersecurity experts in response to new AI-powered cybersecurity projects by Anthropic and OpenAI. Authorities are concerned these advanced AI models, capable of identifying vulnerabilities, could be misused for cyberattacks, prompting urgent security reviews and coordination to mitigate potential threats.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Public interest; Human or fundamental rights; Economic/Property
Severity:
AI hazard
Business function:
ICT management and information security
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The AI system 'Mythos' is explicitly described as an autonomous agent capable of identifying and exploiting security vulnerabilities, which directly relates to AI system involvement. The article focuses on the plausible risk that misuse of this AI could cause major disruptions to financial infrastructure, a critical infrastructure harm category. Since the harm is not yet realized but the risk is credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident. The article also details responses by financial authorities, but the main focus is on the emerging threat rather than a response to an incident. Therefore, the classification is AI Hazard.[AI generated]


LiblibAI Generates Inappropriate Content Due to Moderation Failure

2026-04-14
China

LiblibAI, an AI content generation platform operated by Beijing Singularity Xingyu Technology, produced sexually explicit videos after users bypassed moderation with complex prompts. The incident, exposed by CCTV, highlighted flaws in content safety mechanisms. The company apologized, initiated technical fixes, and upgraded moderation to prevent future harm.[AI generated]

AI principles:
Safety; Accountability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business; General public
Harm types:
Reputational; Psychological
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI system generated inappropriate content that bypassed safety controls, directly leading to harm in the form of unsafe and non-compliant content dissemination. The company's response and remediation efforts are complementary information but do not negate the fact that the AI system's malfunction caused harm. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's outputs.[AI generated]


Morrisons Cuts 200 Head Office Jobs Due to AI-Driven Restructuring

2026-04-14
United Kingdom

UK supermarket chain Morrisons is cutting around 200 head office jobs in Bradford as part of a restructuring plan that increases automation and AI use. The job losses are directly linked to the adoption of AI systems aimed at streamlining operations and improving efficiency, resulting in significant workforce reductions.[AI generated]

AI principles:
Human wellbeing; Respect of human rights
Industries:
Consumer services
Affected stakeholders:
Workers
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Human resource management
AI system task:
Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly states that the job cuts are linked to stepping up AI use and automating manual tasks, indicating the AI system's use is directly leading to harm in the form of job losses. This fits the definition of an AI Incident because the development and use of AI systems have directly led to harm to a group of people (employees losing jobs).[AI generated]


AI Chatbots Frequently Misdiagnose Medical Cases, Study Finds

2026-04-14
United States

A study by Mass General Brigham found that AI chatbots, including ChatGPT and Gemini, gave incorrect medical diagnoses in over 80% of cases when provided with incomplete patient information. Even with full data, error rates remained high, raising concerns about the reliability of AI in medical diagnostics.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Physical (injury)
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The AI systems involved are large language models used as medical diagnostic chatbots, which clearly qualify as AI systems. The study shows that their use leads to a high rate of diagnostic errors, which can cause harm to patients' health by misguiding treatment decisions. This constitutes direct harm to health (harm category a) caused by the AI systems' outputs. Therefore, this event qualifies as an AI Incident due to the realized harm from the AI systems' use in medical diagnosis.[AI generated]


AI-Generated Persona 'Dona Maria' Fuels Political Polarization in Brazil

2026-04-14
Brazil

An AI-generated digital influencer, 'Dona Maria,' created using Google's Gemini, went viral in Brazil by posting aggressive, politically charged content criticizing President Lula and the Supreme Court. The AI avatar's widespread reach and influence raised concerns about manipulation of public opinion, electoral integrity, and potential violations of election laws.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Government
Harm types:
Public interest; Reputational
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly mentioned (Google's Gemini and other AI tools) used to generate political content that has a large social impact. The AI-generated avatar influences public opinion and election-related discourse, with risks of misinformation and confusion, which are harms to communities and potentially violations of electoral laws. The article details realized social and political harms, not just potential risks, and discusses challenges in accountability and regulation. This fits the definition of an AI Incident because the AI system's use has indirectly led to significant harm to communities and possible legal violations in the electoral context.[AI generated]


AI Chatbots Exhibit Systematic Bias in Judging Users, Study Finds

2026-04-14
Israel

A study by Hebrew University of Jerusalem reveals that AI chatbots like ChatGPT and Gemini systematically judge users, forming psychological profiles and trust assessments. Unlike humans, these AI systems apply rigid, fragmented criteria, leading to amplified and consistent demographic biases in decisions such as lending and hiring, raising concerns about discrimination.[AI generated]

AI principles:
Fairness; Respect of human rights
Industries:
Financial and insurance services; Business processes and support services
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
Human resource management
AI system task:
Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (ChatGPT, Gemini) making decisions that directly impact people in areas like finance and trust, with documented biases leading to differential treatment based on demographics. This constitutes a violation of rights and harm to individuals/groups due to biased AI judgments. Since the harm is realized and linked to AI system use, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.[AI generated]


Brazilian Police Dismantle AI-Driven Deepfake Fraud Ring

2026-04-14
Brazil

Brazilian police dismantled a criminal group that used generative AI to create deepfake facial biometrics, bypassing telecom security systems. The group committed large-scale electronic fraud and identity theft, taking over victims' phone lines and accessing financial accounts, causing widespread financial harm across Brazil.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
Digital security; Financial and insurance services
Affected stakeholders:
Consumers
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly states the use of generative AI to create fake biometric data (deepfakes) to circumvent security systems, which directly enabled criminal activities resulting in financial theft and fraud. This constitutes direct harm caused by the AI system's use. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of property rights and harm to consumers and the affected businesses.[AI generated]


Uber Accelerates Investment and Testing of AI-Powered Robotaxi Fleet

2026-04-14
United States

Uber is rapidly advancing its autonomous Robotaxi project in the US, investing over $10 billion to purchase and deploy AI-driven vehicles from partners like Lucid and Nuro. Employees are already testing the service with human safety drivers. No harm has occurred, but large-scale AI deployment poses future risks.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Severity:
AI hazard
Business function:
Logistics
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection; Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The article describes Uber's planned large-scale deployment and investment in autonomous driving technology, which involves AI systems. While no harm has yet occurred, the widespread use of autonomous vehicles could plausibly lead to AI incidents such as accidents or disruptions. Therefore, this event represents a plausible future risk related to AI systems, qualifying it as an AI Hazard rather than an incident or unrelated news.[AI generated]


South Korean Government Cracks Down on AI-Generated Deepfake Election Content

2026-04-14
Korea

South Korea's government, led by Prime Minister Kim Min-seok, announced strict enforcement and maximum legal penalties against the use of AI-generated deepfake videos and fake news during elections. The misuse of generative AI is seen as a direct threat to electoral fairness and democratic trust, prompting new prohibitions and rapid response measures.[AI generated]

AI principles:
Democracy & human autonomy; Transparency & explainability
Industries:
Government, security, and defence; Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Public interest
Severity:
AI hazard
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems in the context of AI-generated deepfake videos and fake news that could undermine election integrity. However, it does not describe any realized harm or incident caused by AI misuse; rather, it is a warning and policy announcement about preventing such harms. Therefore, this event represents a plausible future risk of harm from AI misuse in elections, qualifying it as an AI Hazard. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information since it is not an update on a past incident but a new government statement about potential risks and responses.[AI generated]


AI-Generated Fake Court Documents Used in Fraud Attempt in Batman, Turkey

2026-04-14
Türkiye

In Batman, Turkey, scammers used AI and deepfake technology to create fake court documents and attempted to defraud a citizen via WhatsApp, demanding 30,000 TL under threat of imprisonment. The fraud was detected and prevented by the victim's lawyer, highlighting the risks of AI-enabled document forgery.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Government, security, and defence; Digital security
Affected stakeholders:
General public
Harm types:
Economic/Property; Psychological
Severity:
AI incident
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (deepfake technology) to generate fake legal documents used in a fraudulent scheme. This use of AI directly led to an attempted harm to a person (financial fraud and potential legal consequences). The harm was averted, but the AI system's role in the fraudulent attempt is clear and pivotal. Therefore, this qualifies as an AI Incident because the AI system's use directly led to an attempted harm, even if the harm was prevented.[AI generated]


AI Language Models Fail at Early Clinical Reasoning, Raising Patient Safety Concerns

2026-04-13
United States

A study by Mass General Brigham found that large language model AI systems, including GPT-5 and Gemini, fail to provide adequate early differential diagnoses in over 80% of cases. While accurate with complete data, their lack of clinical reasoning poses risks if used unsupervised in medical settings.[AI generated]

AI principles:
Safety; Accountability
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
Consumers
Harm types:
Physical (injury)
Severity:
AI hazard
AI system task:
Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (large language models) used in medical diagnosis. It discusses their use and limitations, focusing on their failure to perform initial diagnostic reasoning without human supervision. Although no actual harm (such as misdiagnosis causing injury) is reported, the study warns that unsupervised use could plausibly lead to harm in clinical settings. This fits the definition of an AI Hazard, as the AI systems' malfunction or misuse could plausibly lead to injury or harm to patients if deployed without human oversight. Since no realized harm is described, it is not an AI Incident. The article is not merely complementary information because it centers on the risk and performance limitations of AI in diagnosis, not on responses or ecosystem updates. Therefore, the correct classification is AI Hazard.[AI generated]


Metropolitan Police Trials AI to Identify Child Abuse Victims Faster

2026-04-13
United Kingdom

The UK's Metropolitan Police is trialling AI technology to rapidly grade and triage child sexual abuse imagery, aiming to identify and safeguard victims more quickly. The AI system is intended to reduce officers' exposure to distressing material and accelerate intervention, with human oversight and victim care remaining central to investigations.[AI generated]

Industries:
Government, security, and defence
Severity:
AI hazard
Business function:
Compliance and justice
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The article describes the Metropolitan Police's trial of AI systems to assist with child sexual abuse investigations. While AI involvement is clear, the technology is still being trialled rather than fully deployed, so no harm has materialized. The AI's role could plausibly lead to benefits or risks in victim identification and safeguarding, fitting the definition of an AI Hazard. There is no indication of realized harm or incident, nor is the article primarily about responses or updates to past incidents, so it is not an AI Incident or Complementary Information. It is not unrelated because AI is central to the discussion.[AI generated]


Brazilian TV Airs AI-Generated Fake News Image, Spreads Misinformation

2026-04-13
Brazil

Brazilian broadcaster SBT's program 'Se Liga Brasil' aired a fake image generated by AI, presenting it as real news about alleged misogyny at a São Paulo gas station. The misinformation led to public debate and criticism. SBT admitted the error, citing a breach of journalistic standards, and implemented internal corrective measures.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business; General public
Harm types:
Reputational
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was involved as the image was generated by AI. The use of this AI-generated image without proper verification led to the spread of false information on a public broadcast, which is a harm to communities by spreading misinformation. The harm is realized, not just potential, as the false image was aired and discussed as if real. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by an AI system's output.[AI generated]


AI Data Centers Drive Water Scarcity in Southeast Asia

2026-04-13
Singapore

The rapid expansion of AI-driven data centers in Southeast Asia by global tech companies is causing significant strain on local water resources due to intensive cooling needs. This has led to environmental harm, with communities facing water shortages and increased regulatory scrutiny as water demand surges alongside AI infrastructure growth.[AI generated]

AI principles:
Sustainability; Human wellbeing
Industries:
IT infrastructure and hosting; Environmental services
Affected stakeholders:
General public
Harm types:
Environmental
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The article clearly involves AI systems through the discussion of AI-driven data centers and their resource consumption. While no direct harm has yet occurred, the article warns of potential environmental harm (water scarcity) caused by the growth of AI infrastructure. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to harm to communities and the environment. There is no indication of a realized incident or harm, so it is not an AI Incident. The article is not merely complementary information since it focuses on the risk and impact of AI system use on natural resources, not on responses or updates. Hence, AI Hazard is the appropriate classification.[AI generated]


Irish Cybersecurity Leaders Warn of AI-Driven Cyberattack Risks

2026-04-13
Ireland

Irish National Cyber Security Centre (NCSC) director Richard Browne and Defence Forces officials warned the Oireachtas that advanced AI tools like Anthropic's Mythos could soon enable state and criminal actors to automate and escalate cyberattacks. While no incidents have occurred yet, the potential for AI misuse poses significant cybersecurity risks for Ireland.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Digital security; Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Economic/Property; Public interest
Severity:
AI hazard
Autonomy level:
No-action autonomy (human support)
AI system task:
Reasoning with knowledge structures/planning; Content generation
Why's our monitor labelling this an incident or hazard?

The article involves AI systems in the context of cybersecurity threats and defense, highlighting the potential for AI-enabled cyberattacks and the challenges they pose. However, it does not describe any actual AI-related harm or incidents occurring at present. Instead, it provides a warning and assessment of plausible future risks and challenges associated with AI in cybersecurity. Therefore, this qualifies as an AI Hazard, as it plausibly could lead to AI Incidents but no incident has yet occurred or been reported.[AI generated]


Shadow AI Causes Corporate Data Leaks and IP Violations

2026-04-13
Korea

Employees' unauthorized use of generative AI tools, known as 'Shadow AI,' has led to incidents of confidential data leaks and intellectual property violations in workplaces. Notably, Samsung employees accidentally input sensitive code into public AI systems, prompting stricter company controls and highlighting the urgent need for robust AI governance and data protection measures.[AI generated]

AI principles:
Privacy & data governance; Accountability
Industries:
Digital security; IT infrastructure and hosting
Affected stakeholders:
Business
Harm types:
Economic/Property; Reputational
Severity:
AI hazard
Business function:
Research and development
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article centers on the plausible risk that AI systems could be exploited to leak sensitive information covertly, which could lead to significant harm such as intellectual property theft and security breaches. It discusses research that proposes detection frameworks and calls for legal and governance upgrades to address these risks. Since no actual data leakage or harm has been reported, but the risk is credible and the article emphasizes the need for preventive measures, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential misuse.[AI generated]


AI Traffic Cameras Deployed in Sussex to Detect Dangerous Driving Behaviors

2026-04-13
United Kingdom

Sussex Police have deployed AI-powered cameras to autonomously detect drivers using mobile phones or not wearing seatbelts. The system, part of Operation Spotlight, aims to reduce road injuries and fatalities by identifying and enforcing against these dangerous behaviors, following a successful trial that detected hundreds of offences.[AI generated]

AI principles:
Privacy & data governance; Transparency & explainability
Industries:
Government, security, and defence; Mobility and autonomous vehicles
Affected stakeholders:
General public
Harm types:
Economic/Property; Human or fundamental rights
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

The AI cameras are explicitly described as AI systems designed to detect specific driver behaviors that are among the leading causes of fatal and serious injury collisions. Their use directly influences enforcement actions (fines, penalty points) and aims to reduce harm to people on the roads. Since the AI system's use is directly linked to preventing injury and death, this qualifies as an AI Incident under the definition of harm to health of persons resulting from the use of an AI system.[AI generated]


Angolan Tax Authority Uses AI to Detect and Report Major Tax Fraud

2026-04-13
Angola

The Angolan tax authority (AGT) deployed AI mechanisms during its 2024 audit to automatically identify and report tax fraud among major taxpayers. The AI system enabled the detection of irregularities, leading to investigations, reporting to authorities, and convictions for fraudulent activities, thereby protecting state revenue.[AI generated]

Industries:
Government, security, and defence
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly used by the tax authority to detect fraud, which is a violation of legal and financial rights and harms public property (state revenue). The AI system's use in identifying irregularities and reporting them to authorities directly contributes to preventing or addressing these harms. Since the article describes the AI system's active role in detecting fraud (harm that has occurred or is ongoing), this qualifies as an AI Incident. The harm is related to violations of obligations under applicable law protecting financial rights and property (public revenue).[AI generated]