AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents have attracted more media attention recently, they have declined as a share of total AI news coverage (see the chart below).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
[Chart: AI incidents and hazards as a percentage of total AI events]

Meta Sues Over Deepfake-Driven Health Fraud in Brazil
Meta has filed lawsuits against individuals and companies in Brazil for using AI-generated deepfakes of celebrities and doctors in fraudulent health product ads on its platforms. The deepfakes misled users, resulting in financial and privacy harm. Legal actions also target similar schemes in China and Vietnam.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) used to create fraudulent content that has directly led to harm by deceiving users and promoting fraudulent products, which constitutes harm to communities and violations of rights. Meta's legal actions are responses to these harms. Because the harms have already occurred through the use of AI-generated deepfakes, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
Tesla Plans to Deploy AI-Driven Robotaxis and Robots in Europe
Tesla CEO Elon Musk announced plans to introduce fully autonomous, AI-powered robotaxis (Cybercab) and humanoid robots (Optimus) in Europe, pending regulatory approval, with production starting as early as 2024. While no incidents have occurred, the deployment raises plausible future risks related to AI system safety.[AI generated]
AI principles:
Industries:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (autonomous driving and robotics AI) and their planned use, but no actual harm or incidents have occurred yet. The article highlights the potential for these AI systems to be deployed in Europe soon, which could plausibly lead to AI-related incidents in the future (e.g., safety risks from autonomous vehicles). Therefore, this qualifies as an AI Hazard because it describes credible future risks from the development and use of AI systems, but no current incident or harm is reported.[AI generated]
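The incident-versus-hazard distinction applied above reduces to a simple decision rule: realized harm from an AI system is an incident, while a credible but unrealized risk is a hazard. A minimal sketch in Python, under the assumption that upstream analysis has already produced the boolean labels (this is illustrative, not AIM's actual implementation):

```python
# Illustrative sketch of the incident-vs-hazard rule described in the
# monitor's rationales. The input flags are assumed to be assigned by a
# human reviewer or an upstream model; this is not AIM's real code.

def classify_event(ai_involved: bool, harm_realized: bool,
                   plausible_future_harm: bool) -> str:
    """Return an AIM-style label for a reported event."""
    if not ai_involved:
        return "not in scope"
    if harm_realized:
        return "AI incident"               # harm has already occurred
    if plausible_future_harm:
        return "AI hazard"                 # credible risk, no harm yet
    return "complementary information"     # related news, no risk or harm
```

For the Tesla robotaxi announcement above, an AI system is involved, no harm has been realized, and a future safety risk is plausible, so the rule yields "AI hazard".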

Dutch Authors and Journalists Demand Meta Stop Using Copyrighted Works for AI Training
Dutch writers, translators, and journalists, represented by the Auteursbond, NVJ, and Stichting Lira, have formally demanded that Meta cease using their copyrighted texts without permission or payment to train AI models like Llama. They allege this practice violates intellectual property rights and undermines creators' economic interests.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Meta's language model Llama) trained on copyrighted works without authorization, which constitutes a violation of intellectual property rights. The unions' demand to stop using these datasets and the threat of legal action indicate that the harm has already occurred through the AI system's development and use. This qualifies as an AI Incident because that development and use directly led to a breach of legal obligations protecting intellectual property rights; it is not merely a potential risk or a complementary update but a concrete instance of AI-related harm.[AI generated]

Samsung Settles Texas Lawsuit Over Smart TV AI Data Collection
Samsung Electronics settled a lawsuit with the Texas Attorney General over its smart TVs' use of AI-powered Automatic Content Recognition (ACR) technology to collect viewing data without adequate consumer notice or consent. Samsung agreed to enhance transparency and obtain explicit consent from Texas consumers, addressing privacy violations caused by the AI system.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The Automatic Content Recognition (ACR) system in Samsung smart TVs is an AI system that collects and analyzes user viewing data. The legal dispute arose because of the way this AI system collected and used data without sufficient user notification, constituting a violation of privacy rights. The settlement and lawsuit withdrawal indicate that harm related to rights violations had occurred. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a governance response or complementary information because the core issue is the realized harm from the AI system's data collection practices.[AI generated]

NHRC Probes AI Education Project Over Children's Data Privacy Risks
India's National Human Rights Commission has issued notices to government bodies after complaints about privacy risks in an AI-powered education initiative by US-based Anthropic and NGO Pratham. The AI system processes children's academic data, raising concerns about potential violations of privacy and data protection laws under India's DPDP Act.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to collect and process children's data, raising privacy and data protection concerns. However, the article reports no actual harm or data breach; it focuses on potential risks and on inquiries launched to prevent misuse. Because the event centers on plausible risks and the regulatory response to them rather than on realized harm, it is best classified as an AI Hazard.[AI generated]

Exposed Google API Keys Enable Unauthorized Access to Gemini AI and Data
Researchers discovered that legacy Google Cloud API keys, previously considered safe to embed in public code, now grant unauthorized access to Gemini AI endpoints. This exposes private data and allows attackers to incur significant financial charges, affecting thousands of organizations, including Google itself. The incident highlights a critical security vulnerability in Google's AI integration.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI service Gemini) and its integration with cloud API keys. Misuse of the exposed keys grants unauthorized access to AI services, leading to data exposure and financial damage from fraudulent usage charges, and researchers found thousands of organizations already affected. This constitutes harm directly linked to the use of an AI system, fulfilling the criteria for an AI Incident.[AI generated]
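The exposure described above is a classic secret-leakage problem: Google API keys follow a well-known textual pattern (the prefix `AIza` followed by 35 URL-safe characters), so a codebase can be scanned for candidate keys before it is published. A minimal illustrative scanner in Python (a sketch under that single assumption, not a substitute for a dedicated secret-scanning tool):

```python
import re
from pathlib import Path

# Google API keys have a well-known shape: "AIza" followed by 35 characters
# from [0-9A-Za-z_-]. Secret scanners commonly use this pattern to flag keys
# committed to public code. Illustrative sketch only.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_exposed_keys(root: str) -> list[tuple[str, str]]:
    """Return (file path, candidate key) pairs found under `root`."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for match in GOOGLE_API_KEY_RE.finditer(text):
            hits.append((str(path), match.group()))
    return hits
```

Running such a check in CI before each push is one common mitigation; restricting what a key can call, as Google's settlement-era guidance to developers emphasizes, is the complementary server-side control.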

AI-Generated Disinformation Targets Paris Municipal Election Candidates
Authorities uncovered a network of fake websites, operated from South Asia, spreading AI-generated, sensationalist content targeting Paris mayoral candidates ahead of the 2026 municipal elections. The campaign, primarily for profit rather than political motives, disseminated misleading material via Facebook and fake media sites, causing limited but real engagement.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate harmful content that is actively disseminated, constituting a direct harm to communities through misinformation and manipulation during an election, which is a violation of rights and harms societal trust. The AI system's use in generating and spreading this content directly leads to harm, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as the content is already being spread and engagement has occurred. Therefore, this is classified as an AI Incident.[AI generated]

Ford Recalls Over 4 Million Vehicles Due to AI-Controlled Trailer Module Software Defect
Ford is recalling approximately 4.4 million vehicles in the U.S. after a software defect in the AI-controlled integrated trailer module was found to disable trailer lights and brakes, posing significant safety risks. The recall affects multiple models from 2021-2026, with a free software update offered as a fix.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The software bug is part of an AI system or advanced software controlling vehicle functions related to trailer operation. The malfunction directly affects vehicle safety features, increasing the risk of accidents, which constitutes harm to persons (injury or harm to health). Since the AI system's malfunction has directly led to a safety risk and potential harm, this qualifies as an AI Incident under the framework.[AI generated]

South Korea Strengthens Measures Against AI-Generated Fake News Ahead of Elections
Ahead of local elections, South Korea's government, led by Prime Minister Kim Min-seok, is intensifying efforts to combat AI-generated fake news and misinformation. Authorities are coordinating across agencies to enforce strict legal responses, enhance detection, and raise public awareness to protect democratic processes from AI-enabled manipulation.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article discusses the use and potential misuse of AI systems to generate and spread fake news that can disrupt political and election order, which constitutes harm to communities and democratic processes. However, it does not report a specific incident where AI-generated fake news has already caused harm; rather, it focuses on the risk and the government's planned responses. Therefore, this is best classified as an AI Hazard, as the AI system's involvement could plausibly lead to harm but no concrete incident is described.[AI generated]

Chinese Actor Wang Jinsong's Likeness Deepfaked by AI, Raising Legal and Fraud Concerns
Chinese actor Wang Jinsong's image and voice were used without consent in a highly realistic AI-generated video, causing confusion even among his family. The incident highlights growing concerns over AI-enabled impersonation, intellectual property violations, and potential for fraud. Authorities have taken action against similar cases involving other celebrities in China.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic video impersonating the actor, constituting unauthorized use of his likeness and voice, which is a violation of personal rights and intellectual property. The harm has materialized as the actor's image was misused, and there is a plausible risk of more severe harms like AI-enabled scams. Since the misuse has already occurred and caused harm, this qualifies as an AI Incident.[AI generated]

Block Lays Off 40% of Workforce Due to AI Automation
Financial technology company Block, founded by Jack Dorsey, announced layoffs of over 4,000 employees—about 40% of its workforce—citing the adoption of AI tools that automate and streamline operations. The move, attributed directly to AI-driven efficiency, caused significant economic harm to affected workers and highlights AI's disruptive impact on employment.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the company is reducing its workforce by over 4,000 employees because AI is automating more work and increasing efficiency. This is a direct use of AI systems causing harm to employees through job loss, which fits the definition of an AI Incident as harm to groups of people resulting from the use of AI. The harm is realized, not merely potential, and the AI system's role in the decision and outcome is pivotal.[AI generated]

German Court Upholds Student Exclusion for Unauthorized AI Use in Exams
Two students at the University of Kassel were excluded from exam retakes after using AI tools in their academic work, violating university rules. The Administrative Court of Kassel upheld the university's decision, establishing legal precedent and general rules for handling AI misuse in academic assessments in Germany.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system was used by the students as a writing aid in their academic work, so the event concerns the use (and misuse) of AI systems. The harm is a violation of academic integrity rules and the resulting legal consequences for the students, which amount to a breach of obligations under applicable academic regulations and of rights related to fair academic conduct. Because the AI use led directly to sanctions and a court ruling, this is a concrete AI Incident rather than a potential risk or a general update.[AI generated]

South Korean Authorities Crack Down on AI-Generated Fake News Ahead of Local Elections
South Korean prosecutors and police are intensifying efforts to combat the spread of AI-generated fake news, particularly deepfakes, ahead of the June 3 local elections. Authorities have made arrests and launched investigations, emphasizing a zero-tolerance policy to protect election integrity and democratic processes from AI-driven misinformation.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake AI) to generate and spread fake news, which constitutes a violation of rights and harm to communities by undermining democratic processes. The article describes realized harm through the active spread of AI-generated fake news and the resulting law enforcement actions. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (disruption of election integrity and political order).[AI generated]

Abu Dhabi Launches Supervised Trials of Tesla Self-Driving Cars and Autonomous Trucks
Abu Dhabi has begun supervised road trials of Tesla's Full Self-Driving technology and pilot operations of autonomous freight trucks within its logistics zones. These AI-driven vehicle trials, overseen by the Integrated Transport Centre, aim to assess safety and operational readiness under regulatory frameworks, with no reported incidents or harm.[AI generated]
Industries:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
Tesla's Full Self-Driving is an AI system performing autonomous driving tasks. The event involves its use in real-world trials under supervision, with the goal of verifying safety and operational readiness. Since no harm or malfunction is reported, and the trials are conducted within a regulatory framework to prevent harm, this is a potential risk scenario rather than an actual incident. The event fits the definition of an AI Hazard because the AI system's use could plausibly lead to harm if issues arise during or after deployment, but no harm has yet materialized.[AI generated]

Scotland Considers Criminalizing AI-Generated Deepfake Intimate Images
The Scottish government has launched a consultation on criminalizing the creation of deepfake intimate images using AI without consent. The proposed law aims to address the potential misuse of AI tools to generate non-consensual intimate content, seeking to strengthen protections for women and girls against abuse.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article discusses the potential misuse of AI technology (deepfake creation) that could lead to harm, specifically violations of privacy and abuse targeting women and girls. Since no actual harm or incident is reported, but the government is responding to the plausible risk of harm by proposing new offences, this fits the definition of an AI Hazard. The AI system's use (deepfake generation) could plausibly lead to violations of rights and harm to individuals, but the event is about preventing such harm through legal measures, not about a realized incident.[AI generated]

Italian Court Rules AI-Driven Employee Dismissal Lawful
The Rome Labor Court ruled that the dismissal of a graphic designer, whose role became redundant due to the adoption of AI tools during a company reorganization, was lawful. This marks one of Italy's first legal decisions explicitly addressing AI's impact on employment rights.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools in the company's reorganization that made a job position superfluous, leading to the employee's dismissal. This dismissal is a realized harm affecting labor rights and employment, which falls under violations of labor rights and harm to individuals. The court ruling confirms the legitimacy of the dismissal but does not negate the fact that AI use contributed to the harm. Hence, this is an AI Incident due to the direct link between AI use and realized harm (job loss).[AI generated]
Dutch Organizations Sue X Over AI Chatbot Grok's Generation of Illegal Nude Images
Dutch organizations Offlimits and Fonds Slachtofferhulp have filed a lawsuit against X (formerly Twitter) and its AI chatbot Grok for generating and distributing non-consensual nude images, including child sexual abuse material. They demand an immediate ban and fines, citing ongoing harm and legal violations in the Netherlands.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok, a generative AI model integrated into X) whose use has directly led to significant harm: the creation and widespread dissemination of sexual deepfakes without consent, including child sexual abuse images. This constitutes violations of human rights and breaches of legal protections against sexual abuse and privacy violations. The harm is realized and ongoing, with documented psychological impacts on victims and legal actions being taken. The AI system's development and use have facilitated this harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

YouTube's AI Algorithms Flood Children’s Feeds with Harmful AI-Generated Videos
Investigations reveal YouTube's AI-driven recommendation system systematically promotes low-quality, misleading, and developmentally inappropriate AI-generated videos to children. These videos, often disguised as educational, feature distorted visuals and misinformation, raising concerns about cognitive and emotional harm to young viewers. YouTube has removed some content, but the issue persists.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems: AI tools generating the videos and YouTube's AI recommendation algorithm promoting them. The described harm (cognitive overload, misinformation, and developmental disruption in young children) is a direct harm to health and communities, and it is ongoing, as children are actively exposed to and influenced by these videos. Concrete examples and expert opinions document the negative impact. Although some mitigation steps are mentioned, the primary focus is the realized harm rather than the response, so the event is best classified as an AI Incident, not an AI Hazard or Complementary Information.[AI generated]
AI Traffic Cameras in Western Australia Cause License Loss and Fines for Drivers
AI-powered road safety cameras in Western Australia have issued fines and demerit points to drivers for seatbelt and mobile phone violations, including those committed by passengers or children. Many drivers have lost licenses or faced significant penalties, raising concerns about fairness and the rigid enforcement of AI-detected infractions.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The AI road safety cameras are explicitly described as detecting offences and triggering fines and demerit points, directly harming drivers through financial penalties and license suspensions. Disputes over fairness show that the system's outputs have material consequences. Because the AI system's presence is explicit and the harms (legal and financial penalties, and potentially unfair enforcement) are realized rather than potential, the event is best classified as an AI Incident.[AI generated]