
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to give policymakers, AI practitioners, and other stakeholders worldwide insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents are attracting growing media attention, they have declined as a share of all AI news coverage (see chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


Chart: AI incidents and hazards as a percentage of total AI events.
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Results: About 6,123 incidents & hazards

Italian Prime Minister Sues Over Deepfake Pornographic Videos

Italian Prime Minister Giorgia Meloni is suing two men for creating and distributing AI-generated deepfake pornographic videos featuring her likeness. The videos, viewed millions of times online, caused reputational harm and led Meloni to seek €100,000 in damages, highlighting the misuse of AI for defamation and abuse.[AI generated]

AI principles:
Accountability, Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Government
Harm types:
Reputational, Human or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI (deepfake technology) to create and distribute a pornographic video without consent, causing harm to the individual depicted (PM Meloni). This constitutes a violation of rights and harassment, which are harms under the AI Incident definition. The involvement of AI is clear, the harm is realized, and legal action is underway, confirming this as an AI Incident rather than a hazard or complementary information.[AI generated]
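The triage logic these rationales describe — AI involvement plus realized harm yields an incident, plausible future harm yields a hazard — can be sketched as a simple decision rule. This is a hypothetical illustration only: the monitor's actual classifier is not public, and the `Event` fields and label strings below are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Hypothetical representation of a news-reported AI event."""
    involves_ai: bool      # an AI system is explicitly implicated
    harm_realized: bool    # harm has already occurred
    harm_plausible: bool   # harm could plausibly occur in the future

def classify(event: Event) -> str:
    """Sketch of the incident/hazard triage described in the rationales."""
    if not event.involves_ai:
        return "unrelated"
    if event.harm_realized:
        return "AI incident"           # harm has materialized
    if event.harm_plausible:
        return "AI hazard"             # credible risk, no harm yet
    return "complementary information"  # context only, no risk claim

# e.g. the Meloni deepfake case: AI involved, harm already realized
print(classify(Event(involves_ai=True, harm_realized=True, harm_plausible=True)))
# → AI incident
```

Under this sketch, the Ukraine drone stories below (credible future harm, none yet reported) would fall through to "AI hazard", matching the labels the monitor assigns.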


AI System 'Mia' Detects Breast Cancers Missed by Doctors in NHS Trial

2024-03-21
United Kingdom

The AI tool Mia, used in a UK NHS trial, analyzed over 10,000 mammograms and detected early-stage breast cancers in 11 women that human doctors missed. This early detection enabled less invasive treatment and improved patient outcomes, demonstrating AI's significant role in preventing harm through enhanced medical diagnosis.[AI generated]

Industries:
Healthcare, drugs, and biotechnology
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detection
Why's our monitor labelling this an incident or hazard?

Mia is an AI system used in medical imaging analysis to detect breast cancer. Its use directly contributed to identifying cancer cases that human doctors missed, a clear health benefit and a form of harm prevention. This fits the definition of an AI Incident because the AI system's use directly led to improved health outcomes, i.e., injury or harm to health was avoided or mitigated. The article does not describe any malfunction or misuse but highlights the positive impact of the AI system in clinical practice.[AI generated]


Greek Student Uses AI Deepfake to Defame Peer and Steal Parade Honor

2024-03-21
Greece

A 14-year-old Greek student used AI deepfake technology to create a fake explicit video of a female classmate, aiming to discredit her and take her place as flag bearer in a school parade. The incident caused significant emotional harm and reputational damage, highlighting the risks of malicious AI use among minors.[AI generated]

AI principles:
Accountability, Privacy & data governance
Industries:
Education and training
Affected stakeholders:
Children
Harm types:
Psychological, Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI (deepfake technology) to create manipulated content that directly harms the victim's privacy and reputation, which is a violation of fundamental rights. The harm has already occurred as the victim experienced shock and defamation. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person.[AI generated]


Congress Investigates IRS Use of AI for Mass Financial Surveillance

2024-03-21
United States

Congressional leaders are investigating the IRS for allegedly using AI systems to conduct mass surveillance of Americans' bank accounts and financial records without legal authorization. Lawmakers cite concerns over violations of privacy and constitutional rights, with evidence suggesting the AI-enabled monitoring has led to enforcement actions and potential harm to civil liberties.[AI generated]

AI principles:
Privacy & data governance, Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Human or fundamental rights, Public interest
Severity:
AI incident
Business function:
Compliance and justice
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

An AI system is explicitly mentioned as being used by the IRS to detect tax fraud and monitor financial transactions. The lawmakers allege that this AI system is being used to surveil Americans without legal process, which constitutes a violation of rights and due process protections. This use of AI has directly led to concerns about violations of fundamental rights and privacy, fitting the definition of an AI Incident due to the realized harm of unlawful surveillance and potential rights violations.[AI generated]


AI-Generated Deepfakes Fuel Scams, Identity Theft, and Election Manipulation

2024-03-21
United States

Generative AI tools like DALL-E, Midjourney, and Sora are enabling the widespread creation of deepfake images and videos, making it difficult to distinguish real from fake. These AI-generated fakes are increasingly used for scams, identity theft, and manipulating elections, causing significant harm to individuals and communities.[AI generated]

AI principles:
Accountability, Privacy & data governance
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
General public
Harm types:
Economic/Property, Psychological, Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI generative systems (e.g., DALL-E, Midjourney) being used to create deepfake content that is actively causing harm such as scams, identity theft, and political manipulation. These harms fall under harm to communities and violations of rights. The AI systems' use is directly linked to these harms, meeting the criteria for an AI Incident. The discussion of detection tools and advice is complementary but does not overshadow the primary focus on realized harms caused by AI-generated deepfakes.[AI generated]


Greek TV Host Faye Skorda Targeted by AI-Generated Deepfake Scam Video

2024-03-21
Greece

A deepfake video using AI technology manipulated footage of Greek TV host Faye Skorda and a guest, altering their speech to promote a product fraudulently. The incident caused reputational harm and public misinformation, with Skorda publicly denouncing the video as fake and warning about the dangers of AI-generated content.[AI generated]

AI principles:
Accountability, Privacy & data governance
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Workers, General public
Harm types:
Reputational, Public interest
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system used to create a fake video with altered speech, which is a direct misuse of AI technology to deceive viewers and promote a product falsely. This manipulation can cause harm to the reputation of the individuals involved and mislead the public, constituting harm to communities and potentially violating rights. Since the harm is occurring through the dissemination of this AI-generated fake content, this qualifies as an AI Incident.[AI generated]


Ukraine Plans to Mass-Produce AI-Enabled Kamikaze Drones

2024-03-20
Ukraine

Ukraine's Minister of Digital Transformation, Mykhailo Fedorov, announced plans to scale up production of AI-enabled kamikaze drones capable of autonomous targeting and engagement, contingent on increased Western funding. The expansion of these autonomous lethal systems raises concerns about potential future harm and AI-related hazards in conflict zones.[AI generated]

AI principles:
Safety, Accountability
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death), Physical (injury)
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-enabled kamikaze drones capable of autonomous real-time decision-making and target engagement, which qualifies as AI systems. The discussion centers on scaling up production of these drones, implying increased deployment and potential use in conflict. Although no direct harm is reported in the article, the nature of these AI systems—autonomous lethal drones—presents a credible risk of causing injury, violation of human rights, and harm to communities. Hence, the event is best classified as an AI Hazard due to the plausible future harm from the development and use of these AI-powered autonomous weapons.[AI generated]


France Fines Google €250 Million for AI Copyright Violations

2024-03-20
France

French regulators fined Google €250 million after its AI chatbot Bard/Gemini was trained on copyrighted news content without proper authorization or compensation to publishers. The Autorité de la Concurrence found Google breached legal commitments and failed to provide publishers with opt-out mechanisms, violating intellectual property rights.[AI generated]

AI principles:
Accountability, Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business
Harm types:
Economic/Property
Severity:
AI incident
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation, Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Google's chatbot Bard) whose training data included copyrighted content without authorization, leading to a legal penalty for copyright infringement. This constitutes a violation of intellectual property rights (harm category c) directly caused by the AI system's development and use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already materialized and regulatory action has been taken.[AI generated]


Georgia Lawmakers Use Deepfake of Senator to Demonstrate AI Election Risks

2024-03-20
United States

Georgia legislators created and presented an AI-generated deepfake video impersonating Senator Colton Moore and activist Mallory Staples without their consent to illustrate the dangers of political deepfakes. The incident spurred legislative efforts to criminalize deceptive AI use in election ads, highlighting risks of voter deception and election fraud.[AI generated]

AI principles:
Privacy & data governance, Respect of human rights
Industries:
Media, social platforms, and marketing; Government, security, and defence
Affected stakeholders:
Government, Civil society
Harm types:
Reputational, Human or fundamental rights
Severity:
AI hazard
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event involves an AI system (deepfake generation technology) used to create a realistic but false video impersonating political figures. The use of this AI deepfake technology in political communication poses a credible risk of election interference and fraud, which are harms to the democratic process and communities. However, the article primarily discusses the legislative response to this risk and the demonstration of the technology's capabilities rather than reporting an actual incident of harm caused by AI deepfakes in an election. Therefore, this situation represents an AI Hazard, as the AI system's use could plausibly lead to significant harm (fraudulent election interference), but no specific harmful incident has yet occurred as described in the article.[AI generated]


OpenAI's GPT Store Flooded with Copyright-Infringing and Policy-Violating Chatbots

2024-03-20
United States

OpenAI's GPT Store has been inundated with custom chatbots that violate copyright laws, impersonate public figures, and promote academic dishonesty. Despite moderation efforts, many illegal and policy-violating bots remain accessible, highlighting failures in OpenAI's content control and leading to widespread intellectual property and ethical breaches.[AI generated]

AI principles:
Accountability, Safety
Industries:
Media, social platforms, and marketing; Education and training
Affected stakeholders:
Business, General public
Harm types:
Economic/Property, Reputational
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation, Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (chatbots based on ChatGPT) whose use violates intellectual property rights and platform policies, constituting a breach of obligations intended to protect intellectual property rights. Since these violations are occurring and the chatbots are actively available on the platform, this constitutes an AI Incident due to realized harm (violation of intellectual property rights). The article does not merely warn of potential harm but reports actual violations and presence of infringing AI content, thus qualifying as an AI Incident rather than a hazard or complementary information.[AI generated]


Ukraine Deploys Autonomous AI-Guided FPV Drones in Combat Despite Jamming

2024-03-20
Ukraine

Ukrainian forces have begun using FPV drones equipped with AI-powered autonomous targeting systems that can lock onto and strike targets even after losing communication due to electronic warfare. These drones have successfully destroyed enemy assets, marking a significant escalation in the use of AI-driven lethal autonomous weapons in active conflict.[AI generated]

AI principles:
Accountability, Safety
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Physical (death)
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection, Goal-driven organisation
Why's our monitor labelling this an incident or hazard?

The event involves the development and potential use of AI-enabled autonomous guidance systems for drones, which are AI systems by definition as they infer from input to generate outputs influencing physical environments (targeting and navigation). The article does not report any realized harm yet but highlights the plausible future harm from deploying such autonomous drones in warfare, including injury or death and property damage. The autonomous targeting capability could bypass existing countermeasures, increasing risk. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. There is no indication of current harm or incident, so it is not an AI Incident. It is not merely complementary information or unrelated, as the focus is on the development and potential impact of the AI system.[AI generated]


AI-Generated Deepfakes Used to Defraud Using Daniel Sarcos and Olga Tañón's Likenesses

2024-03-20
Venezuela

AI was used to synthesize the voices and images of Daniel Sarcos and Olga Tañón without their consent to create fraudulent advertisements for a credit score company. Both celebrities publicly denounced the misuse, warning of reputational harm and potential financial scams targeting the public.[AI generated]

AI principles:
Accountability, Privacy & data governance
Industries:
Financial and insurance services
Affected stakeholders:
General public, Other
Harm types:
Reputational, Economic/Property, Human or fundamental rights
Severity:
AI incident
Business function:
Marketing and advertisement
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

An AI system was used to generate synthetic voice audio impersonating Daniel Sarcos without his consent, which is a misuse of AI leading to reputational harm and potential fraud. This misuse directly harms the individual (violation of rights and potential financial harm to victims of the scam) and thus qualifies as an AI Incident. The event involves the use of AI-generated content for fraudulent purposes, which is a clear harm caused by AI misuse.[AI generated]


AI Chatbots Generate and Spread Health Disinformation Due to Inadequate Safeguards

2024-03-20
United Kingdom

A study published in the British Medical Journal found that major AI chatbots, including ChatGPT, Gemini, and Meta's Llama 2, can generate convincing health disinformation, such as false claims about cancer cures. Inadequate safeguards allow these AI systems to produce and spread harmful medical misinformation, posing risks to public health.[AI generated]

AI principles:
Accountability, Safety
Industries:
Healthcare, drugs, and biotechnology
Affected stakeholders:
General public
Harm types:
Physical (injury)
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation, Interaction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (large language models powering chatbots) that have been used to generate health disinformation, which is a form of harm to communities and public health. The harm is realized as the disinformation is actively generated and could mislead people about health matters. The AI systems' failure to block such content despite being prompted and tested indicates a malfunction or inadequate safeguards in their use. The researchers' call for regulation and auditing further supports the recognition of this as a significant harm caused by AI. Hence, this event meets the criteria for an AI Incident.[AI generated]


AI-Generated Fake 'Carcinogenic Sanitary Pad Blacklist' Causes Public Alarm in China

2024-03-20
China

An AI-powered Q&A system generated a false blacklist claiming major sanitary pad brands were carcinogenic, which spread widely online and caused public concern. The misinformation, amplified by social media influencers, led to reputational harm and confusion before being debunked by authorities as AI-generated disinformation.[AI generated]

AI principles:
Accountability, Robustness & digital security
Industries:
Consumer products; Media, social platforms, and marketing
Affected stakeholders:
Business, General public
Harm types:
Reputational, Psychological
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots, Content generation
Why's our monitor labelling this an incident or hazard?

An AI system is implicated as the source of false harmful content (the fake blacklist), which has led to misinformation and public worry about health risks. This constitutes harm to communities by spreading false health-related information, which is a form of harm under the framework. Since the misinformation has already spread and caused concern, this is a realized harm, making it an AI Incident rather than a hazard or complementary information.[AI generated]


AI-Generated Deepfakes of Celebrities Used in Financial Scams Across Latin America

2024-03-20
Mexico

AI-generated deepfake videos and voices impersonating public figures like Carlos Slim and Elon Musk are being used to lure victims into online financial scams across Latin America. These convincing deepfakes have led to real financial losses, making it harder for users to distinguish legitimate content from fraudulent schemes.[AI generated]

AI principles:
Accountability, Transparency & explainability
Industries:
Media, social platforms, and marketing; Financial and insurance services
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to create deepfake videos and voices of public figures to lure victims into financial scams. The harm is direct financial loss to individuals who fall for these scams. The AI system's use in generating realistic fake content is pivotal to the scam's success, fulfilling the criteria for an AI Incident involving harm to people. The article also discusses the challenges in detecting such AI-generated content and the ongoing efforts to mitigate these harms, but the primary event is the realized harm caused by AI-generated deepfakes used in scams.[AI generated]


Michael Cohen Submits AI-Generated Fake Legal Citations in Court Filing

2024-03-20
United States

Michael Cohen, former lawyer for Donald Trump, submitted a court motion containing fake legal case citations generated by Google Bard, an AI chatbot. Although the incident caused embarrassment and procedural issues, a federal judge declined to impose sanctions, finding no evidence of bad faith or intentional misconduct.[AI generated]

AI principles:
Accountability, Safety
Industries:
Government, security, and defence
Affected stakeholders:
Government
Harm types:
Reputational, Public interest
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The AI system (Google Bard) was used for legal research and generated fake cases that were mistakenly believed to be real and cited in court filings. This misuse of AI-generated content led to false legal claims, which is a violation of legal obligations and could be considered a breach of law and ethical standards. The event involves realized harm in the form of misleading the court and potential perjury, directly linked to the AI system's outputs. Therefore, it meets the criteria for an AI Incident due to the direct role of AI in causing harm related to legal rights and obligations.[AI generated]


Florida Teacher Arrested for Creating AI-Generated Child Pornography from Student Yearbook Photos

2024-03-19
United States

Steven Houser, a third-grade teacher at Beacon Christian Academy in Florida, was arrested after using AI to generate child erotica from yearbook photos of three students. Authorities found AI-generated illegal content and other child pornography in his possession, highlighting the misuse of AI for child exploitation.[AI generated]

AI principles:
Accountability, Privacy & data governance
Industries:
Education and training
Affected stakeholders:
Children
Harm types:
Psychological, Human or fundamental rights
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI to generate child pornography, which is an illegal and harmful act violating human rights and laws protecting children. The AI system's use in generating such content directly caused harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the AI-generated child pornography was possessed and investigated by authorities.[AI generated]


AI-Generated Fake Obituaries Cause Distress and Misinformation

2024-03-19
United States

Scammers are using AI tools to rapidly generate and post fake obituaries of living individuals online, including journalist Deborah Vankin, to attract clicks and ad revenue. This AI-driven scheme spreads misinformation, causes emotional distress, and can expose victims to further cyber risks such as malware.[AI generated]

AI principles:
AccountabilitySafety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
Psychological, Reputational, Public interest
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI to generate fake obituaries, which are then posted online to deceive readers and generate ad revenue. The harm is realized as individuals are falsely declared dead, causing emotional distress and misinformation spread among their social circles and the public. The AI system's use in creating and disseminating this false content directly leads to harm to persons and communities, fulfilling the criteria for an AI Incident.[AI generated]


Conversation Overflow Attacks Exploit AI Email Security to Enable Phishing and Credential Theft

2024-03-19

Threat actors are using a new 'Conversation Overflow' technique to bypass AI- and machine learning-based email security systems. By embedding hidden benign text in phishing emails, attackers trick AI filters, allowing malicious messages to reach victims and resulting in credential theft and data breaches within enterprise networks.[AI generated]

AI principles:
Robustness & digital security
Industries:
Digital security
Affected stakeholders:
Workers, Business
Harm types:
Economic/Property, Reputational, Human or fundamental rights
Severity:
AI incident
Business function:
ICT management and information security
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection
Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (AI/ML-enabled security platforms) and their use in cybersecurity. The attackers exploit the AI systems' detection mechanisms to bypass security, leading to phishing attacks that cause harm (credential theft). This constitutes an AI Incident because the AI system's malfunction or limitation directly contributes to the harm. The article details ongoing attacks, not just potential risks, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the attack method causing harm, not on responses or broader ecosystem context.[AI generated]


Tesla FSD Beta V12.3 Praised but Faces Update Failures and Trust Risks

2024-03-19
United States

Tesla's FSD Beta V12.3, an AI-driven autonomous driving system, is praised for its human-like performance but faces high update failure rates on some Hardware-4 vehicles. Experts warn that increased user trust in the system, despite potential malfunctions, could pose safety risks if users over-rely on the AI.[AI generated]

AI principles:
Safety, Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers, General public
Harm types:
Physical (injury)
Severity:
AI hazard
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection, Reasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The Tesla FSD Beta is an AI system controlling autonomous driving functions. The reported event is a malfunction in the software update process, which is part of the AI system's use and maintenance. Although the update failure could potentially lead to safety risks if the system is not properly updated or functioning, the article does not mention any actual incidents or harms resulting from this failure. The problem is currently being investigated and may be fixed by future updates or hardware replacement. Since no harm has occurred yet but there is a plausible risk if unresolved, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]