AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to give policymakers, AI practitioners and other stakeholders worldwide insight into the risks and harms of AI systems. Over time, AIM will reveal risk patterns and build a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents are receiving growing media attention, they have declined as a share of all AI news (see chart below).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
[Chart: AI incidents and hazards as a percentage of total AI events]

Harvard Students Use Meta Ray-Ban Glasses for AI-Powered Doxing
Two Harvard students demonstrated the potential privacy risks of Meta's Ray-Ban smart glasses by using AI-powered facial recognition to identify strangers and access their personal information without consent. This raises significant privacy concerns as the glasses can generate AI-created profiles, potentially leading to unauthorized surveillance and data collection.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
This qualifies as an AI Incident because the students’ custom system uses AI facial recognition and language models to directly violate individuals’ privacy rights and extract confidential information, constituting a materialized harm under human rights and privacy categories.[AI generated]

AI-Driven Foreign Influence Campaigns Manipulate Social Media Ahead of 2024 US Election
Foreign actors, including Russia, China, Iran, and Israel, have used generative AI and social bots to conduct coordinated influence campaigns on social media. These AI-powered operations spread disinformation, manipulate public opinion, and flood platforms with fake content, causing harm to communities and distorting public discourse.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems, including generative AI and AI-generated social bots, to conduct coordinated inauthentic behavior on social media platforms. These AI systems are used to spread disinformation, scams, and manipulate public opinion, which are clear harms to communities and potentially human rights. The involvement of AI in the use phase (operation of bots and content generation) has directly led to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI systems' use has directly caused significant harm to communities through manipulation and misinformation.[AI generated]

AI Chatbot Mimics Murdered Teen, Family Outraged
A family is outraged after discovering an AI chatbot on Character.ai mimicked their murdered daughter, Jennifer Ann, using her name and image without consent. This incident raises ethical concerns about privacy and the misuse of AI technology, highlighting the need for stricter regulations on personal identity use in AI.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The chatbot’s unauthorized use of the teen’s identity and likeness constitutes misuse of an AI system that directly harmed the family’s emotional well-being and violated the deceased’s personal rights.[AI generated]

Privacy Concerns Over Meta's AI-Enabled Ray-Ban Glasses
Meta's AI-integrated Ray-Ban glasses can perform tasks like video recording and answering questions by processing data through AI, raising privacy concerns. These glasses passively record their surroundings and send the data to Meta's AI, potentially violating privacy rights. This feature is not available in the EU due to privacy regulations.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Meta's AI analyzing images and audio from smart glasses) and describes its use of personal data to train AI models. While no direct harm or incident is reported, the extensive data collection and insufficient user awareness create a plausible risk of privacy violations and misuse of personal information. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harm related to user privacy and rights. There is no indication of a realized incident or a response update, so it is not an AI Incident or Complementary Information. It is not unrelated because the AI system and its data use are central to the concerns raised.[AI generated]

Organized Crime in Asia Exploits AI for Cybercrime
The UNODC reports that organized crime in Asia is leveraging AI, including generative AI and deepfake technology, to commit cyber fraud and create illicit content. These groups are integrating new technologies into their operations, establishing underground markets, and using cryptocurrency for money laundering.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The UNODC report describes ongoing crimes (fraud, money laundering, forced labour, creation of deepfake content) explicitly enabled by generative AI systems. These activities have already caused significant financial and human harms. Because AI played a direct and pivotal role in these realized harms, this constitutes an AI Incident.[AI generated]

AI-Driven eCommerce Fraud Predicted to Surge by 2029
A Juniper Research study forecasts eCommerce fraud to rise from $44 billion in 2024 to $107 billion by 2029, driven by AI advancements. Fraudsters are using AI to create deepfakes and synthetic identities, bypassing verification systems and increasing 'friendly fraud,' posing significant threats to merchant profitability.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly links AI to the rise and execution of large-scale eCommerce fraud, which causes significant financial harm to merchants and customers. The use of AI-generated deepfakes and synthetic identities to bypass security measures and commit fraud is a direct misuse of AI leading to harm. The harm is materialized and ongoing, not just potential. Although the article also discusses AI-driven fraud detection as a response, the primary focus is on the harm caused by AI-enabled fraud. Hence, this qualifies as an AI Incident under the framework, as AI misuse has directly led to significant harm (financial losses and exploitation).[AI generated]

Northrop Grumman Deploys AI-Driven Air Defense System for Countering Drone Swarms
Northrop Grumman has integrated advanced AI capabilities into its Forward Area Air Defense (FAAD) system, enabling rapid, automated weapon-target pairing to counter drone swarms. Successfully tested in real-world scenarios, the system streamlines combat decisions, but its autonomous targeting role presents credible risks of harm if malfunctions occur.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a critical military defense context, which inherently carries risks of harm if the system malfunctions or is misused. However, the article only reports on successful trials and deployment without any reported injury, violation, or damage. Therefore, it does not meet the criteria for an AI Incident. Instead, it represents a credible potential for harm due to the AI's role in weapon targeting and engagement, qualifying it as an AI Hazard.[AI generated]

AI Data Centers Drive Environmental and Community Harm Through Massive Resource Consumption
The rapid expansion of AI-driven data centers by major tech companies is causing significant environmental and community harm due to their enormous energy and water consumption. This ongoing impact highlights the direct consequences of AI system growth on local resources and infrastructure, raising concerns about sustainability and regulatory oversight.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article focuses on the rapid expansion of AI data centers and the strain it places on energy grids, water supplies and local infrastructure. While it describes plausible risks and the need for regulatory oversight, it does not report a specific, realized harm, malfunction or violation caused by an AI system. Therefore, it fits the definition of an AI Hazard: continued growth of AI infrastructure could plausibly lead to environmental and community harms, but no discrete incident has yet occurred.[AI generated]

AI-Driven Cyberattacks Expose Security Vulnerabilities
AI-driven cyberattacks have led to significant breaches, such as the one at Star Health, exposing sensitive health data. Cybercriminals are using AI to automate and enhance the sophistication of their attacks, bypassing traditional security measures. This highlights the urgent need for improved cybersecurity to protect against AI-enabled threats.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article describes actual, ongoing AI-driven cyberattacks that have led to data breaches and pose significant threats. The AI systems are being misused to execute these attacks, directly causing harm (violation of privacy, data loss). Thus, it meets the definition of an AI Incident.[AI generated]

Hong Kong's AI Surveillance Expansion Raises Human Rights Concerns
Hong Kong authorities plan to install thousands of AI-powered surveillance cameras, including facial recognition technology, to combat crime. Critics warn this expansion could erode privacy and civil liberties, drawing comparisons to China's authoritarian surveillance practices and raising concerns about potential human rights violations.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and planned expansion of AI systems such as facial recognition integrated into surveillance cameras. Although the current system contributes to public safety, the main concern is the plausible future erosion of civil liberties and privacy violations due to intrusive AI surveillance. No direct or indirect harm has been reported as having occurred yet, but the credible risk of such harm is clearly articulated. Hence, the event fits the definition of an AI Hazard, reflecting a credible potential for harm stemming from AI system use in surveillance.[AI generated]

Study Identifies Professions Most at Risk from AI Automation
A Nokia Bell study highlights that AI is expected to significantly impact various professions, including highly skilled roles such as cardiology technologists, sound engineers, and nuclear medicine technologists. The research introduces an 'AI Impact Score' to assess how closely job tasks align with recent AI innovations, indicating potential future job transformation or displacement.[AI generated]
AI principles:
Industries:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article describes a research study that assesses the potential impact of AI on different professions, introducing a metric to measure AI's influence on job tasks. It discusses plausible future effects of AI on employment but does not describe any realized harm, malfunction, or misuse of AI systems. Therefore, it fits the definition of an AI Hazard, as it concerns plausible future harm from AI's influence on jobs, without any current incident or realized harm.[AI generated]

Zeekr Executive Warns Against Unsafe Use of Autonomous Driving After Viral Video
A viral video showed Zeekr car owners lying down and watching TV while the vehicle operated in autonomous mode, bypassing hand detection with a bottle. Zeekr's CMO, Guan Haitao, publicly discouraged such unsafe and potentially illegal behavior, emphasizing the need for users to follow safety regulations when using AI driving features.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system (automated driving) is involved, and the misuse of the system (driver lying down and not monitoring) could plausibly lead to harm such as accidents or injury. However, no actual harm is reported in the event, only a warning against the behavior. Therefore, this qualifies as an AI Hazard due to the plausible risk of harm from misuse of the AI system.[AI generated]

Ecovacs Accused of Privacy Violations with AI-Enabled Vacuums
Ecovacs, a Chinese robotics company, faces controversy for its AI-enabled robotic vacuums allegedly collecting personal data, including images and audio, without clear user consent. Marketed under the guise of product improvement, these practices raise significant privacy concerns, as users are not adequately informed about the data collection scope.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
Ecovacs’ devices (which incorporate AI for navigation and mapping) are harvesting intimate home data and using it to train AI without properly informing or securing user consent. This unauthorized data collection and use constitutes a direct harm (privacy and human rights violation) caused by the deployed AI system, fitting the definition of an AI Incident.[AI generated]

Swedish Police Chief Advocates Real-Time AI Facial Recognition
Swedish police chief Petra Lundh supports implementing AI-powered real-time facial recognition to combat serious crime, pending new legislation aligned with EU rules. While intended to help identify suspects, the proposed use raises concerns about potential violations of personal privacy and human rights.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article discusses the intended use and legislative preparation for real-time facial recognition by police, which involves AI systems. However, no actual harm or incident has occurred yet; the event concerns potential future use that could plausibly lead to harms such as privacy violations or rights infringements. Therefore, it qualifies as an AI Hazard rather than an Incident or Complementary Information.[AI generated]

Russian AI-Controlled Stealth Drone Malfunctions and is Destroyed Over Ukraine
A Russian S-70 Okhotnik stealth drone malfunctioned during a test flight, losing contact with ground control and entering Ukrainian airspace. To prevent the drone from falling into Ukrainian hands, a Russian pilot shot it down. The incident allowed Ukraine to gain insights into the drone's technology.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The S-70 is an AI system (autonomous combat UAV) whose malfunction (loss of control/communication) directly led to its destruction. This constitutes an AI Incident since the AI system’s failure caused harm (destruction of property) and operational loss.[AI generated]

Tesla Announces Upcoming Robotaxi Launch Amid AI Safety Concerns
Tesla is set to unveil its fully autonomous Robotaxi, an AI-driven vehicle without a steering wheel or pedals, designed for driverless urban transport. While no incidents have occurred, the technology's deployment raises credible future risks related to autonomous vehicle safety and regulatory challenges.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article centers on the announcement and upcoming reveal of Tesla's Robotaxi, an AI-enabled autonomous vehicle system. Although the system has not yet been deployed and no harm has been reported, the nature of the technology implies a credible risk of future harm related to autonomous vehicle operation. Therefore, this event qualifies as an AI Hazard due to the plausible future risks associated with the deployment of autonomous driving AI systems. There is no indication of realized harm or incident, nor is the article primarily about responses or updates to past incidents, so it is not an AI Incident or Complementary Information.[AI generated]

Concerns Over Police Scotland's Facial Recognition Plans
Police Scotland's consideration of live facial recognition technology has raised concerns about potential bias and human rights violations. Chief Constable Jo Farrell advocates for its use, while experts and Scottish Liberal Democrats warn it could harm public relations and is not yet fit for deployment.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
Live facial recognition is an AI system under consideration, and although no deployment or harms have yet occurred in Scotland, its use would plausibly lead to violations of privacy, potential civil rights breaches, and misidentification harms (false positives/negatives). The discussion is about preventing future risks rather than reporting a realized incident, making this an AI Hazard.[AI generated]

Serial Production of AI-Enabled KIZILELMA Combat Drone Begins in Turkey
Turkey has started serial production of KIZILELMA, an AI-powered unmanned combat aircraft. The system, highlighted by Selçuk Bayraktar, marks a shift from manned to autonomous military aviation, raising concerns about future risks and potential harm associated with the deployment of autonomous weapon platforms.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the development and serial production of KIZILELMA, an unmanned combat aircraft that relies on AI and autonomous technologies for operation. Although no harm or incident is reported, the nature of the system—a weaponized autonomous drone—implies credible potential for harm in the future, such as injury, disruption, or rights violations. The event is not a realized incident but a credible hazard due to the plausible risks associated with AI-enabled autonomous weapons. It is not complementary information because the main focus is on the production start of a potentially harmful AI system, not on responses or updates to past incidents. It is not unrelated because the AI system and its implications are central to the report.[AI generated]

GM Develops Level 3 'Eyes-Off, Hands-Off' Autonomous Driving System
General Motors is developing an AI-powered Level 3 autonomous driving system that will allow drivers to take their hands off the wheel and eyes off the road under certain conditions. While no harm has occurred yet, the increased autonomy raises potential safety risks if the system fails or is misused.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The Super Cruise system is an AI system involved in autonomous driving. The article mentions future plans to upgrade to Level 3 autonomy, which allows drivers to take their eyes off the road, increasing the risk of harm if the system fails or is misused. Since no actual harm or incident is reported yet, but plausible future harm exists due to increased autonomy, this qualifies as an AI Hazard.[AI generated]