
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking requires evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to give policymakers, AI practitioners and other stakeholders worldwide insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have in fact declined as a share of total AI news coverage (see the chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
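The chart's "percentage of total AI events" metric can be sketched as a simple ratio of monthly counts. The figures below are hypothetical, purely for illustration — they are not AIM's real data — but they show how incident coverage can rise in absolute terms while falling as a share of all AI news:

```python
# Illustrative only: hypothetical monthly counts, not AIM's real data.
ai_events_total = {"2025-01": 1200, "2025-02": 1500}      # all AI news events
incidents_hazards = {"2025-01": 96, "2025-02": 105}        # incidents & hazards

# Share of incidents & hazards among all AI events, per month.
share = {
    month: round(100 * incidents_hazards[month] / total, 1)
    for month, total in ai_events_total.items()
}
print(share)  # → {'2025-01': 8.0, '2025-02': 7.0}
```

Note how the absolute count grows (96 → 105) while the share declines (8.0% → 7.0%), matching the trend described in the introduction.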
Results: about 8,659 incidents & hazards

Court Rules Ross Intelligence's AI Training Infringed Reuters’ Copyright

2025-02-17

A U.S. judge granted Thomson Reuters partial summary judgment against Ross Intelligence, finding that Ross infringed Reuters’ copyrights by using Westlaw headnotes to train its AI legal research platform. The ruling sets a pivotal precedent on intellectual property rights in AI training data.[AI generated]

AI principles: Accountability; Privacy & data governance
Industries: Media, social platforms, and marketing; Real estate
Affected stakeholders: Business
Harm types: Economic/Property
Severity: AI incident
Business function: Compliance and justice
Autonomy level: No-action autonomy (human support)
AI system task: Organisation/recommenders; Content generation
Why is our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Ross Intelligence's AI legal research platform) whose use of copyrighted material for training was ruled to infringe intellectual property rights. Because the ruling establishes realized harm rather than merely potential harm, and carries direct implications for ongoing litigation and AI copyright law, the event meets the criteria for an AI Incident.[AI generated]
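The decision rule applied throughout these rationales — AI system involved, then realized harm versus plausible harm — can be sketched as a minimal classifier. This is purely illustrative; the field and label names are assumptions, not the monitor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    involves_ai_system: bool   # an AI system is central to the event
    harm_realized: bool        # harm has actually occurred
    harm_plausible: bool       # harm could plausibly occur in the future

def classify(event: Event) -> str:
    """Illustrative sketch of the incident/hazard distinction used in the rationales."""
    if not event.involves_ai_system:
        return "unrelated"
    if event.harm_realized:
        return "AI incident"          # realized harm from an AI system's use
    if event.harm_plausible:
        return "AI hazard"            # credible but unmaterialized risk
    return "complementary information"

# The Ross Intelligence ruling: realized economic harm from an AI system's use.
print(classify(Event(True, True, True)))   # → AI incident
```

Under this sketch, the rental price-fixing investigation below (credible but unconfirmed harm) would route to "AI hazard" instead.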


South Korea blocks AI chatbot DeepSeek over data privacy concerns

2025-02-17

South Korea’s Personal Information Protection Commission ordered a temporary block on downloads of Chinese AI chatbot DeepSeek, citing privacy concerns after researchers uncovered user data being sent to servers in China. The move aligns with Italy and France’s restrictions and will remain until DeepSeek implements legally required data-handling reforms.[AI generated]

AI principles: Privacy & data governance; Respect of human rights
Industries: Consumer services; Digital security
Affected stakeholders: Consumers
Harm types: Human or fundamental rights; Reputational
Severity: AI incident
Autonomy level: No-action autonomy (human support)
AI system task: Interaction support/chatbots; Content generation
Why is our monitor labelling this an incident or hazard?

The event involves an AI system (the DeepSeek app, with generative AI capabilities) whose use has directly led to significant privacy and security risks, including unauthorized data collection and transmission to foreign servers, exploitable vulnerabilities, and potential government access to personal data. These factors constitute violations of privacy rights and risks to security, fitting harm category (c) of the AI Incident definition: violations of human rights or breaches of obligations protecting fundamental rights. The app's removal and the regulatory actions confirm that the harm is recognized and has materialized, not merely potential, so the event is classified as an AI Incident.[AI generated]


NetEase Apologizes for AI-Generated Valentine's Promo Featuring Underage Character

2025-02-17
China

NetEase Super Membership used AI-generated promotional text featuring an underage NPC from ‘Yan Yun 16 Sheng’ for Valentine’s Day, sparking player backlash. NetEase apologized, took down the content, and committed to tighter AI content review and collaboration with the game team to prevent future missteps.[AI generated]

AI principles: Accountability; Safety
Industries: Media, social platforms, and marketing; Arts, entertainment, and recreation
Affected stakeholders: Consumers; Business
Harm types: Reputational; Psychological; Public interest
Severity: AI incident
Business function: Marketing and advertisement
Autonomy level: Low-action autonomy (human-in-the-loop)
AI system task: Content generation
Why is our monitor labelling this an incident or hazard?

The event involves the use of AI to generate promotional content that inappropriately featured an underage character, causing a negative player experience and reputational damage. Because the AI-generated text was the direct cause of the inappropriate content, the harm qualifies the event as an AI Incident.[AI generated]


Platforms Leverage AI to Combat AI-Fueled Online Harassment

2025-02-17
China

Chinese social media platforms are deploying AI-based detection, filtering, and human-machine review—like Xiaohongshu’s ‘Shield’ and TikTok’s initiative—after severe cyberbullying cases, including a self-media blogger’s ordeal. Under new Network Violence Governance Regulations, platforms bear legal duties to curb AI-enhanced abuse that inflicts psychological harm and privacy breaches.[AI generated]

AI principles: Privacy & data governance; Human wellbeing
Industries: Media, social platforms, and marketing
Affected stakeholders: Consumers
Harm types: Psychological; Human or fundamental rights
Severity: AI incident
Autonomy level: Low-action autonomy (human-in-the-loop)
AI system task: Content generation
Why is our monitor labelling this an incident or hazard?

The article clearly involves AI systems as it describes platforms employing AI-based content filtering, risk assessment, and intervention mechanisms to manage and reduce online violence. The harms described include psychological injury to individuals, privacy violations, and social harm, which have directly resulted from the use or misuse of AI-enabled platforms' recommendation and moderation systems. Since the harms are occurring and the AI systems' role in both causing and mitigating these harms is central, this qualifies as an AI Incident. The article does not merely discuss potential future harm or general AI developments but focuses on realized harms linked to AI system use in social media platforms.[AI generated]


Chinese AI Chatbot DeepSeek Blocked Over Privacy Violations

2025-02-17
Korea

Italy, Australia and South Korea have restricted or banned Chinese AI chatbot DeepSeek over data privacy and security concerns. South Korea’s privacy regulator found the app transmitted local users’ personal data to ByteDance without explicit consent, prompting its removal from official app stores and warnings until compliance.[AI generated]

AI principles: Privacy & data governance; Transparency & explainability
Industries: Media, social platforms, and marketing; Digital security
Affected stakeholders: Consumers
Harm types: Human or fundamental rights; Reputational; Public interest
Severity: AI incident
Autonomy level: No-action autonomy (human support)
AI system task: Interaction support/chatbots; Content generation
Why is our monitor labelling this an incident or hazard?

DeepSeek is an AI chatbot system, and the event describes its use leading to potential unauthorized transfer of personal data to ByteDance, raising privacy and data protection concerns. The South Korean regulator's findings and warnings indicate that the AI system's use has directly or indirectly led to a violation of data privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. Since the harm (privacy violation) is occurring or has occurred, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]


Investigation Launched into AI-Driven Rental Price Fixing

2025-02-17

The Competition Bureau in Canada is investigating claims that real estate companies are using AI-driven pricing tools to track competitors and collude in inflating rents. The inquiry follows allegations in an American antitrust lawsuit and calls from political figures over potential breaches of fair market practices and price fixing.[AI generated]

AI principles: Accountability; Fairness
Industries: Real estate
Affected stakeholders: Consumers
Harm types: Economic/Property; Public interest; Reputational
Severity: AI hazard
Business function: Planning and budgeting
AI system task: Organisation/recommenders; Forecasting/prediction
Why is our monitor labelling this an incident or hazard?

The event involves the use of AI systems (algorithmic pricing software) in setting rents, which is under investigation for potentially causing harm by enabling collusion and artificially inflating rents. Although no confirmed harm has been reported yet, the investigation indicates a credible risk of violation of legal obligations and harm to tenants (a group of people) through economic harm. Since the harm is not yet confirmed but plausible, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response update but concerns a credible potential harm from AI use.[AI generated]


Autonomous Transporters to Be Tested in Braunschweig

2025-02-17

Several autonomous electric transporters (U-Shift) from the DLR’s IMoGer project will be tested in Braunschweig’s Schwarzer Berg district with €35 million federal funding. The unpiloted vehicles, monitored for safety, aim to support last-mile logistics around the clock and gather data for similar urban and rural deployments.[AI generated]

AI principles: Safety; Robustness & digital security
Industries: Mobility and autonomous vehicles; Logistics, wholesale, and retail
Harm types: Physical (injury); Economic/Property; Human or fundamental rights
Severity: AI hazard
Business function: Logistics
Autonomy level: Medium-action autonomy (human-on-the-loop)
AI system task: Recognition/object detection; Event/anomaly detection
Why is our monitor labelling this an incident or hazard?

The event involves AI systems (autonomous transporters) being introduced and tested, which fits the definition of AI systems. Since the vehicles are not yet deployed and no harm or malfunction has been reported, there is no AI Incident. However, the deployment of autonomous vehicles inherently carries plausible risks of harm (e.g., accidents, disruptions), making this an AI Hazard. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems.[AI generated]


AI-Powered Media Warfare Against Algeria

2025-02-17

Multiple reports allege that dark rooms are using AI technologies such as deepfake and algorithm manipulation in a coordinated media warfare campaign against Algeria and its institutions. Algeria is countering with advanced local applications while facing a campaign allegedly backed by international funding aimed at disrupting digital platforms.[AI generated]

AI principles: Transparency & explainability; Accountability
Industries: Media, social platforms, and marketing; Digital security
Affected stakeholders: Government; General public
Harm types: Reputational; Public interest; Human or fundamental rights
Severity: AI incident
Business function: ICT management and information security
AI system task: Content generation; Organisation/recommenders
Why is our monitor labelling this an incident or hazard?

This is an active misuse of AI systems—deepfake generation and algorithmic manipulation—resulting in realized harm (spread of false narratives, attack on public discourse and institutions). It goes beyond potential risk, describing concrete, unfolding AI-driven harm, so it qualifies as an AI Incident.[AI generated]


South Korea Suspends DeepSeek AI Downloads Over Privacy Concerns

2025-02-17

South Korea’s Personal Information Protection Commission has halted new downloads of Chinese AI app DeepSeek, following bans on internal use by several ministries due to risky data-collection practices. The government requires compliance with local privacy laws and app improvements before resumption. Existing users retain access while DeepSeek appoints a local representative to remedy shortcomings.[AI generated]

AI principles: Privacy & data governance; Respect of human rights
Industries: Government, security, and defence; Digital security
Affected stakeholders: Consumers
Harm types: Human or fundamental rights; Reputational; Economic/Property
Severity: AI hazard
AI system task: Interaction support/chatbots
Why is our monitor labelling this an incident or hazard?

The article explicitly involves an AI system, DeepSeek, whose use has been suspended or banned by several countries over concerns about data privacy and potential misuse of collected data. The harms described are potential violations of personal data protection laws and risks to national security, which could plausibly lead to an AI Incident if the data were misused or leaked. Since the article focuses on potential risks and preventive bans rather than realized harm, the situation fits the definition of an AI Hazard.[AI generated]


US states push AI-driven social media digital ID laws

2025-02-17
United States

Connecticut, Nebraska and Utah propose bills to curb algorithmic content recommendations, limit minors’ screen time, and mandate AI-enabled age verification via digital IDs. While aimed at protecting children, critics warn these measures could erode online anonymity, privacy and personal freedom by ushering in pervasive identity checks and monitoring.[AI generated]

AI principles: Privacy & data governance; Respect of human rights
Industries: Media, social platforms, and marketing; Digital security
Affected stakeholders: Children; General public
Harm types: Human or fundamental rights; Public interest
Severity: AI hazard
Business function: Marketing and advertisement
AI system task: Organisation/recommenders; Recognition/object detection
Why is our monitor labelling this an incident or hazard?

The event involves AI systems in the form of algorithmic content recommendations and AI-enabled age verification methods. The legislation targets the use and regulation of these AI systems, which could plausibly lead to harms such as privacy violations and loss of anonymity. However, no direct or indirect harm has yet occurred as these are proposed laws and anticipated consequences. Therefore, this situation constitutes an AI Hazard because it plausibly could lead to significant harms related to privacy and freedom online if implemented and expanded. It is not an AI Incident since no harm has materialized, nor is it merely Complementary Information or Unrelated, as the focus is on potential AI-driven harms from these policies.[AI generated]


Deepfake Image Lawsuit Sparks Political Concerns in Taiwan

2025-02-17
Chinese Taipei

Former UMC founder Cao Xingcheng has filed a lawsuit against media personality Xie Hanbing, alleging that AI-generated deepfake photos wrongly suggesting an extramarital affair have harmed his reputation. He is seeking NT$100 million in damages, which he plans to donate to a political recall effort, amid broader worries of AI misuse.[AI generated]

AI principles: Accountability; Privacy & data governance
Industries: Media, social platforms, and marketing; Government, security, and defence
Affected stakeholders: Other
Harm types: Reputational; Human or fundamental rights; Psychological
Severity: AI incident
Autonomy level: No-action autonomy (human support)
AI system task: Content generation
Why is our monitor labelling this an incident or hazard?

The event involves the misuse of an AI system (deepfake generation) to create non-consensual, defamatory content. The circulation of these AI-generated images has directly harmed the individual’s reputation and privacy, constituting a violation of rights. Therefore, it qualifies as an AI incident.[AI generated]


Google’s AI-Driven Fingerprinting Raises Privacy Alarm

2025-02-17
United States

Google has rolled out an AI-driven “fingerprinting” technique that uniquely identifies users by aggregating device and browser parameters, replacing cookies to enable persistent, virtually unchallengeable tracking. Privacy advocates and regulators warn this probabilistic profiling breaches user privacy rights and undermines online anonymity, with few options to opt out.[AI generated]
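The core mechanism described here — aggregating device and browser parameters into a persistent identifier — can be illustrated with a deterministic toy sketch. Real-world fingerprinting is reportedly probabilistic and AI-driven, so this is a simplification; the attribute names are illustrative assumptions, not any vendor's actual signal set:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine relatively stable device/browser attributes into one identifier.

    Toy sketch: canonicalise key=value pairs in sorted order and hash them,
    so the same device yields the same identifier across visits.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Illustrative attribute set; real trackers draw on many more signals.
fp = fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "screen": "2560x1440",
    "timezone": "Europe/Paris",
    "languages": "en-US,fr",
})
```

Unlike a cookie, nothing is stored on the user's device, which is why such tracking is hard to inspect, delete, or opt out of.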

AI principles: Privacy & data governance; Respect of human rights
Industries: Media, social platforms, and marketing
Affected stakeholders: Consumers
Harm types: Human or fundamental rights; Public interest
Severity: AI incident
Business function: Marketing and advertisement
Autonomy level: High-action autonomy (human-out-of-the-loop)
AI system task: Organisation/recommenders; Other
Why is our monitor labelling this an incident or hazard?

Fingerprinting is a technique that involves sophisticated data processing to uniquely identify and track users, which fits the definition of an AI system's use in data-driven profiling and tracking. The event involves the use of this AI-enabled tracking method leading to violations of user privacy and potentially breaches of data protection rights, which are human rights under applicable law. Since the tracking is actively occurring and has led to concerns about loss of user control and privacy violations, this constitutes an AI Incident under the framework, specifically a violation of human rights and privacy protections. The article reports on the realized use and its direct impact, not just potential future harm or complementary information.[AI generated]


Gartner Warns of 40% Data Breaches from Cross-Border GenAI Misuse

2025-02-17
United States

Gartner forecasts that over 40% of AI-related data breaches by 2027 could stem from the cross-border misuse of generative AI. The rapid adoption of GenAI tools has outpaced data governance and security measures, leading to potential unintended data transfers and increased exposure of sensitive information.[AI generated]

AI principles: Privacy & data governance; Robustness & digital security
Industries: Digital security; IT infrastructure and hosting
Affected stakeholders: Consumers; Business
Harm types: Human or fundamental rights; Economic/Property; Reputational
Severity: AI hazard
Business function: ICT management and information security
Autonomy level: No-action autonomy (human support)
AI system task: Content generation; Interaction support/chatbots
Why is our monitor labelling this an incident or hazard?

No specific data breach incident is described—rather, the article presents forward-looking analysis of risks from misuse of generative AI. It outlines potential harms (data breaches, compliance failures) and offers mitigation strategies, fitting the definition of an AI Hazard.[AI generated]


Former OpenAI Engineer Warns of Impending AI Catastrophe

2025-02-17
United States

William Saunders, a former member of OpenAI’s super-alignment team, warned that unchecked AI development could lead to catastrophic outcomes, likening the potential disaster to the sinking of the Titanic. He predicts that without proper controls, a significant AI incident may occur within the next three years.[AI generated]

AI principles: Accountability; Robustness & digital security
Industries: IT infrastructure and hosting; Digital security
Harm types: Physical (death); Public interest
Severity: AI hazard
Why is our monitor labelling this an incident or hazard?

The article centers on expert warnings about potential future harms from AI systems, including manipulation and loss of control, but does not report any actual harm or incident caused by AI. The concerns relate to the development and use of AI systems that could plausibly lead to significant harm if unmitigated. Therefore, this qualifies as an AI Hazard, as it describes credible risks and potential future incidents stemming from AI, but no direct or indirect harm has yet occurred according to the article.[AI generated]


Intelligent Warfare Theory and AI-Driven Information Warfare

2025-02-17
Ukraine

The articles explore future trends in AI-enabled warfare, highlighting shifts toward algorithm-driven combat and preemptive tactical design. They also discuss the threat of AI-powered disinformation, including deepfakes, used in information warfare by authoritarian states, urging democratic nations to develop effective counterstrategies.[AI generated]

AI principles: Democracy & human autonomy; Respect of human rights
Industries: Government, security, and defence; Digital security
Affected stakeholders: General public; Government
Harm types: Physical (death); Physical (injury); Public interest
Severity: AI incident
Business function: ICT management and information security
AI system task: Content generation; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The article explicitly mentions the use of unmanned drones for reconnaissance and target guidance in a military strike that caused destruction of military vehicles and casualties among soldiers. The drones' autonomous or semi-autonomous functions in identifying and confirming targets and guiding missile strikes fit the definition of AI systems influencing physical environments. The resulting harm to personnel and property meets the criteria for an AI Incident under the OECD framework, as the AI system's use directly led to injury and harm in a conflict setting.[AI generated]


Geoffrey Hinton Warns AI Could Outsmart, Manipulate Humanity and Worsen Inequality

2025-02-16

Geoffrey Hinton, known as the “godfather of AI,” cautions that current deep-learning advances could yield systems more intelligent than humans, capable of manipulating society. He warns unchecked AI may concentrate wealth, deepen social inequality and fuel political extremism, urging urgent measures to control its development before risks materialize.[AI generated]

AI principles: Accountability; Fairness
Industries: General or personal use; Media, social platforms, and marketing
Affected stakeholders: General public
Harm types: Economic/Property; Public interest; Psychological
Severity: AI hazard
Why is our monitor labelling this an incident or hazard?

The article discusses expert opinions and warnings about potential future risks from AI surpassing human intelligence, but it does not describe any actual harm or incident caused by AI at present. It focuses on plausible future harm and risks associated with AI development, which fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. There is no mention of a specific AI system malfunction or use causing harm, nor is it a governance or societal response update. Therefore, the event is best classified as an AI Hazard.[AI generated]


OpenAI loosens ChatGPT content rules with adult mode, reduces censorship

2025-02-16
Portugal

OpenAI has introduced an “adult mode” for ChatGPT and relaxed its censorship policies, allowing the AI to generate explicit sexual and violent content when context is provided. Users have already shared erotic scenes on social media. The company also plans to adjust its training to further limit topic restrictions, sparking concerns about abuse and misinformation.[AI generated]

AI principles: Safety; Human wellbeing
Industries: Media, social platforms, and marketing; Consumer services
Affected stakeholders: Consumers; General public
Harm types: Psychological; Public interest
Severity: AI hazard
Autonomy level: No-action autonomy (human support)
AI system task: Content generation; Interaction support/chatbots
Why is our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (ChatGPT) whose use policies have been changed to allow more explicit content generation. The changes could plausibly lead to harms such as dissemination of abusive sexual content, revenge porn, or privacy violations, as warned by human rights groups. However, no specific incident of harm is reported as having occurred due to these changes. The event thus fits the definition of an AI Hazard, where the development or use of the AI system could plausibly lead to an AI Incident in the future. It is not Complementary Information because the article focuses on the policy change and its implications rather than updates on a past incident. It is not Unrelated because the AI system and its use are central to the event.[AI generated]


US Army solicits AI robots to build bridges under fire

2025-02-16
United States

The US Army issued an SBIR seeking defense contractors to develop autonomous, AI-controlled robotic rafts capable of self-assembling floating bridges in contested areas. Aimed at reducing engineer casualties and logistics footprint, the untested systems could face GPS jamming, cyberattacks, or failures in combat.[AI generated]

AI principles: Robustness & digital security; Safety
Industries: Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders: Workers
Harm types: Physical (injury); Physical (death); Economic/Property
Severity: AI hazard
Business function: Research and development
Autonomy level: High-action autonomy (human-out-of-the-loop)
AI system task: Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The article explicitly discusses the development and intended use of AI systems for autonomous bridge-building robots in military combat situations. Although no incident of harm has occurred yet, the nature of the AI system's application in warfare and the potential for these systems to be used under fire plausibly leads to significant harms, including injury or death to personnel and disruption of military operations. The AI system's role is pivotal in enabling autonomous operation in dangerous environments, which carries credible risks. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


Deepfake Videos Defame South Korean President and First Lady

2025-02-16
Korea

South Korea's regulatory agencies are acting swiftly against deepfake videos defaming President Yoon Suk-yeol and First Lady Kim Gun-hee. The manipulated videos, shown at pro-impeachment protests, have sparked legal investigations and led YouTube to remove related content due to serious defamation and human rights violations.[AI generated]

AI principles: Respect of human rights; Transparency & explainability
Industries: Media, social platforms, and marketing; Government, security, and defence
Affected stakeholders: Government; Women
Harm types: Reputational; Human or fundamental rights; Public interest
Severity: AI incident
Autonomy level: Low-action autonomy (human-in-the-loop)
AI system task: Content generation
Why is our monitor labelling this an incident or hazard?

Deepfake generation is a malicious use of AI to fabricate realistic but false video content. The distribution of these deepfakes has already occurred and prompted defamation charges and regulatory blocking due to the real risk of public harm and confusion. This constitutes a realized incident of AI-driven disinformation.[AI generated]


AI-Driven Identity Fraud via Deepfake and Synthetic Identities

2025-02-16

Cybersecurity experts warn that AI-driven deepfake technology and synthetic identities complicate detection and prevention of identity theft. Fraudsters leverage AI to create falsified images and videos that bypass financial verification, raising concerns about human rights violations and breaches of legal protections.[AI generated]

AI principles: Privacy & data governance; Robustness & digital security
Industries: Financial and insurance services; Digital security
Affected stakeholders: Consumers; Business
Harm types: Economic/Property; Human or fundamental rights; Reputational
Severity: AI incident
Business function: ICT management and information security
Autonomy level: No-action autonomy (human support)
AI system task: Content generation
Why is our monitor labelling this an incident or hazard?

The article details actual AI-enabled fraud techniques actively used in the financial sector, quantifies their prevalence and success rates, and highlights real harms (identity theft, financial loss). These meet the definition of an AI Incident, as development and malicious use of AI systems has directly led to harm.[AI generated]