AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to give policymakers, AI practitioners and other stakeholders worldwide insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be drawing more media attention, they have actually declined as a share of all AI news coverage.
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
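Each entry below is tagged along a fixed set of taxonomy dimensions: AI principles, industries, affected stakeholders, harm types, severity, business function, autonomy level, and AI system task. As a rough sketch, such an entry could be modelled as a record like the following (field names mirror the card labels; the example values are hypothetical illustrations, not the OECD's controlled vocabularies):

```python
from dataclasses import dataclass, field

@dataclass
class AimEntry:
    """One monitor entry; fields mirror the taxonomy labels shown on each card."""
    title: str
    summary: str
    ai_principles: list[str] = field(default_factory=list)
    industries: list[str] = field(default_factory=list)
    affected_stakeholders: list[str] = field(default_factory=list)
    harm_types: list[str] = field(default_factory=list)
    severity: str = ""
    business_function: str = ""
    autonomy_level: str = ""
    ai_system_task: str = ""

# Hypothetical example values; the real vocabularies are defined by the OECD.
entry = AimEntry(
    title="Example incident",
    summary="An AI system allegedly caused harm.",
    harm_types=["Psychological harm"],
    severity="Non-physical harm",
)
```

An unset dimension simply stays empty (e.g. `entry.industries == []`), which matches how many cards in the monitor display blank taxonomy fields.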
Meta Faces Landmark Trial Over AI Algorithms' Harm to Children in New Mexico
Meta is on trial in New Mexico, accused of misleading users about children's safety on its platforms. Prosecutors allege Meta's AI-driven algorithms promoted harmful and addictive content to minors, prioritizing profits over safety and violating consumer protection laws. Jury deliberations follow extensive testimony on the algorithms' impact.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system insofar as Meta's social media platforms use AI-driven algorithms for content recommendation and user engagement, which are central to the allegations of harm to teens' mental health and safety. The harm described (mental health damage and risk of sexual exploitation) falls under harm to groups of people. Since the harm is alleged to have occurred and is the subject of a legal case, this qualifies as an AI Incident: the event concerns the use and impact of AI-enabled social media systems leading to harm, not merely potential or future harm, complementary information, or unrelated news.[AI generated]
KAI to Test Autonomous AI Satellite Fault Response in Space
Korea Aerospace Industries (KAI) and partners will launch a CubeSat equipped with an AI module to autonomously diagnose and respond to satellite faults in orbit. The project aims to validate AI onboard processing for real-time, self-directed satellite operation, presenting future risks if malfunctions occur but no harm has yet materialized.[AI generated]
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being developed and tested for autonomous satellite operation. However, the article focuses on the planned deployment and testing phase without any reported harm or malfunction. Since no direct or indirect harm has occurred, and the AI system's use is prospective, this qualifies as an AI Hazard due to the plausible future risk associated with autonomous AI operation in space systems, but not an AI Incident or Complementary Information.[AI generated]
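The incident/hazard distinction these rationales apply can be sketched as a simple triage rule. This is an illustrative reconstruction only, not the OECD's actual classification pipeline; the `Event` fields and `classify` function are assumptions made for the sketch:

```python
from dataclasses import dataclass
from enum import Enum

class Label(Enum):
    INCIDENT = "AI Incident"
    HAZARD = "AI Hazard"
    OTHER = "Complementary information / out of scope"

@dataclass
class Event:
    involves_ai_system: bool      # an AI system is developed, deployed, or used
    harm_realized: bool           # harm has already occurred, directly or indirectly
    plausible_future_harm: bool   # harm could plausibly materialize later

def classify(event: Event) -> Label:
    """Hypothetical triage rule distilled from the monitor's rationales."""
    if not event.involves_ai_system:
        return Label.OTHER
    if event.harm_realized:
        return Label.INCIDENT
    if event.plausible_future_harm:
        return Label.HAZARD
    return Label.OTHER
```

Under this rule, the Meta trial (harm alleged to have occurred) would map to `Label.INCIDENT`, while the KAI CubeSat test (no harm yet, but plausible future risk) would map to `Label.HAZARD`.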
AI Companion Chatbots Expose Australian Children to Harmful Content
A report by Australia's eSafety Commissioner found that popular AI companion chatbots, including Character.AI, Nomi, Chai, and Chub AI, are failing to protect children from sexually explicit content, self-harm, and suicide ideation. The platforms lack robust age verification and safeguards, exposing children to significant risks.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) whose use has directly led to harm to children and teenagers through exposure to harmful content and emotional manipulation. The harms are realized and documented, including mental health impacts and exposure to child sexual exploitation material. The failure of the AI systems' providers to implement robust age checks and content moderation constitutes a malfunction or inadequate use safeguards. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to persons (children and teens).[AI generated]
GM Begins Supervised Testing of Next-Gen Autonomous Vehicles in Michigan and California
General Motors has deployed 200 vehicles equipped with advanced autonomous driving technology for supervised public-road testing on highways in Michigan and California. Trained drivers are present to intervene if needed. The testing aims to refine GM's 'eyes-off' driving system, slated for launch in 2028, but poses plausible future risks if AI malfunctions.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous driving technology) in real-world testing, which could plausibly lead to harm if the technology malfunctions or is misused. However, no actual harm or incident is reported in the article. Therefore, this qualifies as an AI Hazard, as the deployment of such technology on public roads could plausibly lead to injury or other harms in the future if failures occur.[AI generated]

AI-Generated Fake Personas Drive Viral Crypto Scams on X
Blockchain investigator ZachXBT exposed a network of over 10 X accounts using AI-generated fake personas and deepfakes to spread sensational war-related misinformation, boost engagement, and funnel users into crypto scams. The operation netted six-figure profits, causing widespread financial harm and manipulating online communities.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to impersonate influencers and generate misleading content, which directly led to financial harm through crypto scams (pump-and-dump schemes and fake giveaways). The harm to individuals' property (financial loss) and communities (misinformation and manipulation) is realized. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through malicious use and coordinated scam activity.[AI generated]

AI-Enabled Drone Countermeasure Systems Developed and Deployed in Taiwan
Taiwanese company Wistron integrates AI technologies into the Aegis drone countermeasure system, deployed at over 1,200 critical sites. The government plans a NT$44.2 billion investment over five years to foster the domestic drone industry, highlighting potential future risks from AI-enabled military systems but no current harm reported.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the "Aegis" [雷盾] drone countermeasure system), which uses AI for image analysis and signal processing to detect and counter drones. The article does not describe any realized harm or malfunction; it presents the system as a defensive technology aimed at mitigating drone threats. The event is therefore best classified as an AI Hazard, reflecting a plausible future-risk context and the system's defensive role, rather than an actual incident or harm.[AI generated]

French Prosecutors Investigate AI-Generated Deepfake Scandal Involving Elon Musk's Companies
French prosecutors are investigating Elon Musk's companies X and xAI after Grok, their AI system, generated sexualized deepfake images, including those depicting minors. Authorities suspect the controversy may have been orchestrated to artificially inflate company valuations ahead of a planned 2026 stock listing. Musk publicly insulted the prosecutors in response.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Grok') to generate deepfake sexual videos, which are being investigated for their role in artificially inflating company value and potentially involving illegal content related to children. This involves direct use of AI leading to violations of law and human rights, fitting the definition of an AI Incident. The investigation and legal scrutiny confirm that harm has occurred or is ongoing, rather than just a potential risk, thus it is not merely a hazard or complementary information.[AI generated]

Critical AI System Vulnerabilities in OpenClaw and Langflow Lead to Security Risks and Exploitation
360 Security discovered and reported a zero-day vulnerability in OpenClaw's intelligent agent gateway, confirmed by its founder, allowing attackers to bypass authentication and potentially crash systems. Separately, Langflow's API flaw enabled remote code execution, actively exploited within 20 hours of disclosure, causing unauthorized access and data theft. Both incidents highlight urgent AI security challenges.[AI generated]
Why's our monitor labelling this an incident or hazard?
The OpenClaw Gateway is an AI-related system (an intelligent agent gateway) whose security vulnerability allows attackers to bypass authentication and gain control, potentially causing resource exhaustion or system crashes. This constitutes a direct risk of harm to property and system integrity. Since the vulnerability is confirmed and exploitable, and the article reports the discovery and confirmation of this high-risk flaw, the event qualifies as an AI Incident due to the realized security risk and potential harm stemming from the AI system's malfunction or misuse.[AI generated]

Protests and Legislative Action in Germany Over AI-Generated Deepfake Sexual Abuse
Around 10,000 people protested in Berlin against digital sexual violence, following allegations that AI tools were used to create pornographic deepfakes without consent. The German government is preparing urgent legislation to address legal gaps exposed by the incident involving actress Collien Fernandes and her ex-husband.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images with sexual content without consent, which constitutes a violation of rights and harm to the individual involved. The harm has already occurred, as evidenced by the public outcry and legal complaints. The government's legislative response aims to address these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and digital sexual abuse).[AI generated]

AI-Driven Internet Fraud Surges in Germany, Exploiting Language Barriers
Criminals in Germany are increasingly using AI to create convincing fake online shops and phishing attacks, overcoming language barriers and targeting new victim groups. The Bundeskriminalamt reports a rise in both the quality and quantity of internet fraud, resulting in significant financial losses for victims.[AI generated]
Why's our monitor labelling this an incident or hazard?
The involvement of AI in generating realistic fake content and phishing messages directly contributes to harm by enabling fraudsters to deceive victims and cause financial losses. This constitutes harm to individuals and communities, fitting the definition of an AI Incident where the use of AI systems has directly led to harm. The article explicitly states that these harms are occurring and increasing due to AI use, not just potential or future risks.[AI generated]

TikTok and Instagram Ban Accounts for Unlabeled, Exploitative AI-Generated Black Female Avatars
TikTok banned around 20 accounts after a BBC and Riddance investigation revealed the use of AI-generated, highly sexualized Black female avatars to promote explicit content without disclosure. The avatars, often racially stereotyped and exploitative, also appeared on Instagram, prompting Meta to investigate. The incident highlights AI misuse and community harm.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating digital avatars and videos, which are used in harmful ways including sexual exploitation, racial stereotyping, and identity theft. The AI-generated content is misleading and not properly labeled, violating platform policies and causing harm to individuals and communities. TikTok's banning of accounts confirms the recognition of harm. The harms are realized, not just potential, including violation of rights and harm to communities. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]

AI-Generated Film Poster Sparks Outrage Among Sikh Community in Mumbai
An AI-generated poster for the film Dhurandhar: The Revenge, depicting Ranveer Singh in Sikh attire holding a cigarette, sparked outrage among the Sikh community in Mumbai. Complaints were filed with police, alleging the poster and film scenes disrespect Sikh religious sentiments, leading to community backlash and formal investigations.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate the controversial poster, which directly led to harm in the form of violation of religious sentiments and cultural disrespect, a form of harm to communities and a breach of rights. The complaint and public outrage indicate that the harm is realized, not just potential. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

Suspected Terrorist Attack Targets Czech AI Drone Factory Supplying Ukraine
A fire broke out at LPP Holding's factory in Pardubice, Czechia, which produces AI-powered attack drones for Ukraine. Authorities are investigating the incident as a possible terrorist attack, following claims of responsibility by an anti-arms group. The fire disrupted production of critical AI military systems.[AI generated]
Why's our monitor labelling this an incident or hazard?
The factory produces AI-powered autonomous attack drones, which are AI systems by definition. The fire, under investigation as a possible terrorist attack, disrupted production of these systems, which are used in active conflict zones. This disruption constitutes harm to critical infrastructure and potentially to communities affected by the conflict. Because the development and use of the AI systems are directly implicated, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

Man Arrested for Posting AI-Generated Defamatory Images of Indian PM
Delhi Police arrested Siddhnath Kumar from Bihar for creating and sharing AI-generated objectionable images of Prime Minister Narendra Modi and female leaders on social media. The images, intended to mislead and disrupt public order, led to charges of forgery, defamation, and criminal intimidation. Investigations into the dissemination network are ongoing.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create images that were disseminated with the intent to mislead and disturb public order, a form of harm to communities. The AI system's use directly led to legal consequences and police action, indicating realized rather than merely potential harm. This qualifies the event as an AI Incident rather than a hazard or complementary information.[AI generated]

AI Disrupts Chinese Film Industry: Job Losses, Fake Content, and Public Backlash
AI systems in China’s film industry have led to significant job losses, economic insecurity, and reputational harm through AI-generated actors, scriptwriting, and fake videos. Public backlash and legal concerns over image and voice likeness violations have prompted regulatory responses, highlighting ongoing harm and ethical challenges.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for deepfake generation (AI face-swapping, voice cloning) that have directly led to harms such as unauthorized use of likeness, misinformation, reputational damage, and potential financial loss due to scams. These harms fall under violations of human rights (portrait and reputation rights) and harm to communities (misleading information). Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm. The legal responses and restrictions on AI-generated content are complementary information but do not change the primary classification.[AI generated]

AI-Driven 3D Modeling Reduces Risks in Cerebral Aneurysm Treatment at San Camillo Hospital
San Camillo Forlanini Hospital in Rome integrated an AI-powered 3D simulation system for planning endovascular treatments of cerebral aneurysms. Over 120 cases were treated in a year, with the AI system significantly reducing patient risk and unnecessary device use, improving safety and lowering costs.[AI generated]
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly used in developing and applying personalized treatment plans for cerebral aneurysms, and its use has directly contributed to improved patient safety and fewer complications by enabling better preoperative planning and device selection. Although the article focuses on positive outcomes, the monitor classifies the event as an AI Incident because the involvement of AI in medical treatment bears directly on patient health, which falls within the incident scope of injury or harm to health; here the AI system reduces such harm.[AI generated]

Restaurant Service Robots Cause Disruption and Safety Concerns in Texas and California
AI-powered service robots at restaurants in Houston, Texas, and Cupertino, California malfunctioned or were improperly operated, causing disruption, discomfort, and property damage. Incidents included erratic movements, broken dishware, and staff intervention, highlighting risks and challenges in deploying autonomous robots in customer-facing environments.[AI generated]
Why's our monitor labelling this an incident or hazard?
The humanoid robot is an AI system acting autonomously or semi-autonomously during a restaurant performance. Its erratic, uncontrolled behavior directly created physical risks to the people present (broken dishware, potential burns, blunt-force injuries), and staff had to physically restrain it, indicating a malfunction or loss of control. Because the event involved direct harm or risk of harm to people, it meets the criteria for an AI Incident; the restaurant's statement about the robot's operating environment does not negate the harm caused.[AI generated]

German Digital Minister Warns of Major Job Losses Due to AI
German Digital Minister Karsten Wildberger warns that artificial intelligence will lead to dramatic job losses in Germany, urging employers, unions, and society to prepare for significant labor market changes. He emphasizes the need for collective action to adapt and benefit from AI, rather than resisting its adoption.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems but rather warns about potential future job losses and societal challenges due to AI adoption. It involves AI systems implicitly as the cause of future economic disruption but does not report any direct or indirect harm that has already occurred. Therefore, it fits the definition of an AI Hazard, as it plausibly leads to future harm related to employment and social stability.[AI generated]

Chacao Deploys AI-Powered Robotic Dogs for Public Security Patrols
The municipality of Chacao, Venezuela, has become the first in Latin America to deploy autonomous AI-powered robotic dogs, "Voltio" and "Turbo," for public security patrols. Equipped with cameras and sensors, these robots monitor public spaces, detect crimes, and communicate incidents to police, raising potential privacy and safety concerns.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered autonomous robotic dogs used for patrolling and monitoring public spaces, which qualifies as AI systems. However, there is no indication that these systems have caused any injury, rights violations, property damage, or other harms. Since no harm has occurred yet, but the deployment of such AI systems in public security could plausibly lead to incidents in the future, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the introduction of potentially impactful AI systems, not on responses or updates to prior events.[AI generated]

ChatGPT Flags Republican Fundraising Links as Unsafe, Raising Bias Concerns
OpenAI's ChatGPT erroneously flagged links to the Republican fundraising platform WinRed as potentially unsafe, while similar Democratic links to ActBlue were not flagged. OpenAI attributed this to a technical glitch, but the incident raised concerns about AI bias and its potential impact on political participation in the U.S.[AI generated]
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) explicitly caused differential treatment of political fundraising links, flagging Republican links as unsafe while not doing so for Democratic links. This is a malfunction of the AI system's content filtering or safety warning mechanism. The harm is realized as it affects political actors and potentially voters by unfairly flagging one side's fundraising platform, which can disrupt political processes and violate rights to fair political participation. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction leading to political bias and potential election interference.[AI generated]
