aim-logo

AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. AI incidents seem to be getting more media attention lately, but they've actually gone down as a share of all AI news (see chart below!).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Advanced Search Options

As percentage of total AI events
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
Show summary statistics of AI incidents & hazards
Results: About 8595 incidents & hazards
Thumbnail Image

AI-Driven Surge in European Data Center Electricity Demand Raises Emission Concerns

2025-02-11

Two reports from Beyond Fossil Fuels warn that AI growth could boost European data center power needs by 160% in five years, reaching 287 TWh annually. If powered mainly by fossil fuels, this surge could multiply emissions, stressing the need for strategic, sustainable energy planning.[AI generated]

AI principles:
SustainabilityHuman wellbeing
Industries:
IT infrastructure and hostingEnergy, raw materials, and utilities
Affected stakeholders:
General public
Harm types:
EnvironmentalPublic interest
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

No actual harm has yet occurred, but the coalition’s analysis shows that continued AI‐led expansion of data centers could plausibly lead to large‐scale environmental damage (increased greenhouse gas emissions, strain on renewable supplies). Therefore, this is an AI hazard scenario, not a realized incident or merely complementary context.[AI generated]

Thumbnail Image

AI-Powered Traffic Lights Slash Congestion and Accidents in Chengdu

2025-02-11
China

Chengdu traffic police deployed an AI-driven signal control system on the 3.1 km Machao Road corridor. By dynamically adjusting green-light durations based on real-time vehicle flow, queue lengths fell by about 200 m, travel speeds rose up to 22 %, and accident rates dropped 33.8 %, cutting overall congestion.[AI generated]

Industries:
Government, security, and defenceMobility and autonomous vehicles
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detectionGoal-driven organisation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system controlling traffic lights in real time to optimize traffic flow and reduce congestion and accidents. The system's deployment has resulted in a 33.8% reduction in traffic accident rates and improved traffic speeds, indicating a direct link between AI use and harm reduction (less injury and improved safety). Since the AI system's use has directly led to a measurable decrease in harm (accidents), this event fits the definition of an AI Incident. The harm here is positive (reduction of harm), but the framework includes injury or harm to health, so the AI system's role in reducing such harm is relevant. This is not a hazard or complementary information, as the AI system is actively in use and causing real-world effects.[AI generated]

Thumbnail Image

Musk push risks ending Tesla Autopilot safety probes

2025-02-11
United States

Elon Musk's close ties to the Trump administration risk quashing federal investigations into Tesla's AI-driven Autopilot, including NHTSA crash probes and a DOJ criminal inquiry over overstated self-driving claims, as well as crash data reporting mandates. Safety experts warn rollback of oversight endangers drivers after incidents and fatalities.[AI generated]

AI principles:
AccountabilitySafety
Industries:
Mobility and autonomous vehiclesGovernment, security, and defence
Affected stakeholders:
ConsumersGeneral public
Harm types:
Physical (death)Physical (injury)Reputational
Severity:
AI incident
Business function:
Manufacturing
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detectionForecasting/prediction
Why's our monitor labelling this an incident or hazard?

Tesla's Autopilot is an AI system enabling partially automated driving. The article reports multiple crashes involving this technology, including a fatal accident, which are under federal investigation. These investigations and recalls are safety measures addressing harms caused by the AI system's malfunction or limitations. The article also highlights the risk that political influence could weaken these safety measures, increasing future harm. Since actual harm has occurred due to the AI system's use, this qualifies as an AI Incident. The discussion of potential weakening of oversight is relevant but secondary to the realized harms.[AI generated]

Thumbnail Image

AI Bots and Deepfakes Deceive Dating App Users

2025-02-11
Costa Rica

Surveys in Costa Rica, Colombia, and Guatemala reveal that 62–76% of dating app users suspect AI chatbots, fake profiles, and deepfakes posing as matches. These malicious AI-driven profiles have led to phishing attempts, romance scams, and emotional harm, prompting demands for identity verification measures to assure genuine human interactions.[AI generated]

AI principles:
AccountabilityTransparency & explainability
Industries:
Media, social platforms, and marketingDigital security
Affected stakeholders:
Consumers
Harm types:
Economic/PropertyPsychologicalReputational
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbotsContent generation
Why's our monitor labelling this an incident or hazard?

The event involves AI systems (conversational bots, deepfakes) used in dating apps that have directly led to harms including emotional manipulation, deception, and phishing attacks on users. These constitute harm to people and communities, fulfilling the criteria for an AI Incident. The article describes realized harms, not just potential risks, and the AI's role is pivotal in causing these harms. Therefore, this is classified as an AI Incident.[AI generated]

Thumbnail Image

Airbnb warns holidaymakers of AI-generated rental scams

2025-02-11
United Kingdom

Airbnb research with Get Safe Online reveals nearly two-thirds of travelers can't distinguish AI-generated holiday rental images, enabling scammers to post fake listings that cost victims an average £1,937. Exploiting AI's realism on social media, fraudsters lure consumers into non-existent bookings. Airbnb advises vigilance: verify listings, report scams, and avoid suspicious deals.[AI generated]

AI principles:
Transparency & explainabilityRobustness & digital security
Industries:
Travel, leisure, and hospitalityDigital security
Affected stakeholders:
Consumers
Harm types:
Economic/PropertyReputationalPsychological
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated images being used in scams that have caused customers to lose significant amounts of money. This constitutes direct harm to individuals' property (financial loss) caused by the use of an AI system. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.[AI generated]

Thumbnail Image

Hackers Exploit Prompt Injection to Corrupt Google Gemini's Memory

2025-02-11
United States

Researchers, including Johann Rehberger, demonstrated a new prompt injection method that permanently corrupts Google Gemini's long-term memory. The hack uses indirect and delayed tool invocation techniques to implant false data, raising concerns about security and potential harm from persistent inaccurate AI behavior across sessions.[AI generated]

AI principles:
AccountabilityRobustness & digital security
Industries:
Digital securityIT infrastructure and hosting
Affected stakeholders:
Consumers
Harm types:
ReputationalEconomic/PropertyPublic interest
Severity:
AI hazard
Business function:
ICT management and information security
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Interaction support/chatbotsContent generation
Why's our monitor labelling this an incident or hazard?

The event centers on a new hack—indirect and delayed prompt injections—that can override Gemini’s defenses and permanently implant malicious instructions or false user data in the chatbot’s memory. Although demonstrated only in controlled research, it highlights a clear, plausible pathway to user misinformation, persistent behavioral manipulation, and data theft. Because this exploit poses a potential future harm rather than reporting actual widespread damage, it qualifies as an AI Hazard.[AI generated]

Thumbnail Image

Mistral AI Faces Data Privacy Controversy Over Chatbot ‘Le Chat’

2025-02-11
France

Mistral AI has been accused of exploiting users' personal data without proper consent mechanisms. A complaint was filed with France’s CNIL over the absence of opt-out options in the free version of its AI chatbot, 'Le Chat', raising serious concerns about its adherence to data protection laws and user privacy.[AI generated]

AI principles:
Privacy & data governanceRespect of human rights
Industries:
Consumer servicesMedia, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Human or fundamental rightsReputational
Severity:
AI incident
Business function:
Citizen/customer service
Autonomy level:
No-action autonomy (human support)
AI system task:
Interaction support/chatbotsContent generation
Why's our monitor labelling this an incident or hazard?

The report describes a real complaint filed to the French data protection authority (CNIL) alleging that an AI system (Le Chat) actively exploited personal data without proper consent, constituting a violation of users’ fundamental privacy rights under applicable law. This is a concrete AI‐driven harm (GDPR violation), so it qualifies as an AI Incident.[AI generated]

Thumbnail Image

Colombian President Warns of AI-Driven Human Debacle at Dubai Summit

2025-02-11
Colombia

At AI Forum 2025 in Dubai, Colombian President Gustavo Petro warned that uncontrolled AI productivity risks mass unemployment, wealth concentration, social and political unrest, and environmental damage by increasing fossil fuel use. He urged creation of a global, multilateral regulatory body to ensure AI development balances productivity with humanity’s survival.[AI generated]

AI principles:
AccountabilityFairness
Industries:
Energy, raw materials, and utilitiesEnvironmental services
Affected stakeholders:
WorkersGeneral public
Harm types:
Economic/PropertyEnvironmentalPublic interest
Severity:
AI hazard
Why's our monitor labelling this an incident or hazard?

The article involves AI in the context of its potential societal and economic impacts, highlighting plausible future harms such as mass unemployment, social conflict, and environmental degradation linked to AI-driven productivity increases. However, it does not report any actual incident or harm caused by an AI system at present, nor does it describe a specific AI system malfunction or misuse. The focus is on the need for global regulation to prevent these potential harms, making this an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]

Thumbnail Image

AI-Powered Vehicle-Mounted Counter-Drone System Unveiled by Adani & DRDO

2025-02-11
India

At Aero India 2025, Adani Defence & Aerospace and DRDO unveiled India’s first public-private Vehicle-Mounted Counter-Drone System. Mounted on a 4×4 platform, the AI-enabled system automatically detects, classifies, and neutralizes hostile drones with precision. The collaboration marks a step toward indigenous defense innovation and enhanced aerial threat preparedness.[AI generated]

AI principles:
SafetyRobustness & digital security
Industries:
Government, security, and defenceDigital security
Harm types:
Physical (death)Physical (injury)Economic/Property
Severity:
AI hazard
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detectionEvent/anomaly detection
Why's our monitor labelling this an incident or hazard?

The Vehicle-Mounted Counter-Drone System clearly involves AI systems, as it uses automatic detection, classification, and neutralization of drones through advanced sensors and integrated technologies. The event focuses on the system's unveiling and its capabilities, with no indication of any harm or incident resulting from its use or malfunction. Since the system is designed to counter drone threats, it could plausibly lead to harm if it malfunctions or is misused (e.g., wrongful targeting or escalation), but no such incident is described. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it involves AI-enabled defense technology with potential implications for harm.[AI generated]

Thumbnail Image

Tesla Cybertruck FSD Malfunction Leads to Crash

2025-02-11
United States
Open in  AIID Logo

Florida-based software developer Jonathan Challinger experienced a crash when his Tesla Cybertruck's Full Self-Driving system (v13 and v13.2.4) failed to merge or turn, resulting in collisions with a curb and a pole. He shared the incident on social media and warned users to remain vigilant while using advanced driving features.[AI generated]

AI principles:
SafetyRobustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers
Harm types:
Physical (injury)Economic/PropertyReputational
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detectionReasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

This is a direct harm incident caused by the AI system’s malfunction. Tesla’s FSD, an autonomous driving system, failed to detect or respond to a lane ending, leading to property damage and potential risk to pedestrians. Under the framework, a realized harm stemming from an AI system’s malfunction qualifies as an AI Incident.[AI generated]

Thumbnail Image

Inaccurate AI News Summaries Spark Misinformation Concerns

2025-02-11
United Kingdom

The BBC found that leading AI assistants, including ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity, produced inaccurate news summaries. Over half of the answers had significant errors such as factual distortions, misquoted statistics, and altered content from BBC reports, raising concerns about misinformation and public trust.[AI generated]

AI principles:
Transparency & explainabilityRobustness & digital security
Industries:
Media, social platforms, and marketing
Affected stakeholders:
ConsumersGeneral public
Harm types:
ReputationalPublic interestPsychological
Severity:
AI incident
Business function:
Citizen/customer service
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generationInteraction support/chatbots
Why's our monitor labelling this an incident or hazard?

AI‐powered summarisation systems were directly used to generate news summaries and were found to produce incorrect facts, altered quotations, and blurred fact/opinion distinctions. Even though this is framed as a broad study rather than a single isolated event, it documents realized harms—misinformation and distortion of trusted sources—caused by deployed AI systems. Therefore, it qualifies as an AI Incident.[AI generated]

Thumbnail Image

Families sue TikTok over AI-recommended blackout challenge data deletion

2025-02-11
United Kingdom

Four British families have sued TikTok and ByteDance in the US, alleging the platform’s AI-driven recommendation system promoted a dangerous “blackout challenge” leading to their children’s deaths. They seek account data to investigate, but TikTok’s senior government relations manager says some data may have been deleted and is unavailable.[AI generated]

AI principles:
AccountabilityTransparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children
Harm types:
Physical (death)Psychological
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Organisation/recommenders
Why's our monitor labelling this an incident or hazard?

TikTok's content recommendation algorithm is an AI system that influences what content users see. The lawsuit alleges that this AI system deliberately targeted children with harmful content, leading to their deaths. This constitutes direct harm to persons caused by the use of an AI system, fulfilling the criteria for an AI Incident. The harm is realized and severe (death), and the AI system's role is pivotal as per the allegations. Therefore, this event is classified as an AI Incident.[AI generated]

Thumbnail Image

Georgia Harrison Confronts AI-Driven Revenge Porn

2025-02-10
United Kingdom

Georgia Harrison, whose ex Stephen Bear was jailed for disclosing her private sex tape without consent, continues to struggle with its widespread online availability. In her ITV documentary Porn, Power, Profit, she investigates AI-enabled deepfake porn and image-based sexual abuse, tracking down postings and urging government and tech platforms to strengthen protections.[AI generated]

AI principles:
Privacy & data governanceRespect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public
Harm types:
PsychologicalReputationalHuman or fundamental rights
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article details a real harm caused by the illegal sharing of private sexual content, which is a violation of rights and personal privacy. The mention of the deep fake porn industry implies the involvement of AI systems in creating or distributing manipulated sexual content. The ongoing widespread distribution of the video and the investigation into the sources and advertising around it indicate direct harm linked to AI-enabled technologies. Hence, this is an AI Incident as the AI system's use or misuse has directly led to harm to an individual and potentially others in similar situations.[AI generated]

Thumbnail Image

Ukrainian Military Deploys AI-Driven Robotic Forces in Combat

2025-02-10
Ukraine

Ukrainian forces, including the National Guard’s 13th Brigade, deployed AI-controlled ground robots armed with machine guns in coordinated attacks near Kharkiv against Russian positions. Directed via joystick and supported by drone surveillance, these operations demonstrate a new era of combat technology designed to reduce troop casualties.[AI generated]

AI principles:
AccountabilitySafety
Industries:
Government, security, and defenceRobots, sensors, and IT hardware
Affected stakeholders:
Government
Harm types:
Physical (death)Physical (injury)Human or fundamental rights
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Recognition/object detectionGoal-driven organisation
Why's our monitor labelling this an incident or hazard?

The described robots use AI-powered targeting, autonomous navigation, and grenade‐avoidance capabilities, and they have directly caused lethal harm to Russian soldiers. This is a materialized harm (injury and death) caused by the use of AI systems in warfare, fitting the definition of an AI Incident.[AI generated]

Thumbnail Image

Dubbing Actors Strike Over AI Voice Cloning Threat

2025-02-10
Spain

Voice actors in Spain—including delegations in Catalunya and the Balearic Islands—and in France are protesting the unauthorized use of AI to clone their voices, striking and demanding contractual clauses barring their recordings from training generative models. Industry groups like AADPC, PASAVE, CADIB and artists such as Bruno Méyère warn AI could erase their livelihoods.[AI generated]

AI principles:
Privacy & data governanceAccountability
Industries:
Arts, entertainment, and recreationMedia, social platforms, and marketing
Affected stakeholders:
Workers
Harm types:
Economic/PropertyHuman or fundamental rightsReputational
Severity:
AI incident
Business function:
Research and development
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions generative AI systems cloning voices of dubbing actors without their consent, which is a direct use of AI technology. This unauthorized use leads to harm by violating actors' rights and reducing their employment opportunities, fulfilling the criteria for an AI Incident under violations of human rights and labor rights. The harm is realized, not just potential, as actors report reduced work and ethical concerns. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]

Thumbnail Image

Friend Betrayal: Deepfake Pornography Incident in Sydney

2025-02-10
Australia

Hannah Grundy in Sydney was victimized when a trusted friend, Andrew Hayler, used AI deepfake technology to produce and circulate non-consensual explicit imagery and violent content. Together with her partner, Hannah uncovered the misuse of private social media images affecting dozens of women, constituting a serious human rights violation.[AI generated]

AI principles:
Respect of human rightsPrivacy & data governance
Industries:
Media, social platforms, and marketingDigital security
Affected stakeholders:
General public
Harm types:
Human or fundamental rightsPsychologicalReputational
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generationRecognition/object detection
Why's our monitor labelling this an incident or hazard?

This is a direct misuse of an AI system (deepfake generation) to harass, threaten, and inflict emotional and reputational harm on Hannah and other women. The harm has materialized (non-consensual sexual content, threats, doxxing), constituting violations of personal and human rights. Therefore, it meets the criteria for an AI Incident.[AI generated]

Thumbnail Image

Criminal Organization Uses AI Voice Cloning for Fraud

2025-02-10
Bolivia
Open in  AIID Logo

In Bolivia, a criminal group used AI to clone Education Minister Omar Véliz’s voice to scam 19 victims by advertising fraudulent job offers and selling fake items on social media. The scheme, which resulted in over Bs 5 million in losses, involved seven individuals, with evidence gathered from sequestered cellphone numbers.[AI generated]

AI principles:
AccountabilityPrivacy & data governance
Industries:
Media, social platforms, and marketingDigital security
Affected stakeholders:
ConsumersGovernment
Harm types:
Economic/PropertyReputationalPublic interest
Severity:
AI incident
Business function:
Marketing and advertisement
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why's our monitor labelling this an incident or hazard?

The article describes actual harm (financial fraud) directly enabled by an AI system (voice-cloning technology). The AI’s misuse led to property harm, meeting the criteria for an AI Incident.[AI generated]

Thumbnail Image

L3Harris unveils AMORPHOUS autonomous swarm-control platform

2025-02-10
United States

L3Harris introduced AMORPHOUS, an open-architecture AI platform enabling U.S. and allied forces to command thousands of heterogeneous unmanned assets via decentralized decision-making across multiple domains. Designed for complex military missions, the swarm-control software has undergone prototype testing with the U.S. Army and Defense Innovation Unit, raising potential hazard concerns despite no reported incidents.[AI generated]

AI principles:
AccountabilitySafety
Industries:
Government, security, and defenceRobots, sensors, and IT hardware
Harm types:
Physical (death)Physical (injury)Human or fundamental rights
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisationReasoning with knowledge structures/planning
Why's our monitor labelling this an incident or hazard?

The article clearly involves an AI system, as Amorphous is an autonomy software platform coordinating large numbers of autonomous systems. The event concerns the development and use of this AI system for military swarm control. Although the software is not reported to have caused any harm yet, its deployment in military contexts with autonomous capabilities could plausibly lead to harms such as injury, disruption, or violations of rights. Therefore, this event fits the definition of an AI Hazard, as it describes a credible potential for future harm stemming from the AI system's use, but no realized harm or incident is described.[AI generated]

Thumbnail Image

Helsing and Mistral partner to develop AI-driven defence systems

2025-02-10
Germany

The German defence tech firm Helsing and French AI startup Mistral have formed a partnership to develop vision-language-action LLMs and autonomous drones for European defence, aiming to enhance battlefield perception, communication and decision-making, raising concerns over future risks of AI-powered weapons.[AI generated]

AI principles:
AccountabilityRobustness & digital security
Industries:
Government, security, and defenceRobots, sensors, and IT hardware
Harm types:
Physical (death)Physical (injury)Human or fundamental rights
Severity:
AI hazard
Business function:
Research and development
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detectionInteraction support/chatbots
Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (computer vision) being developed for defense purposes, which inherently carry risks of harm due to their application in warfare. Although no incident has occurred yet, the partnership's focus on military AI systems plausibly leads to potential harms such as injury or disruption, qualifying this as an AI Hazard rather than an Incident or Complementary Information.[AI generated]