The event clearly involves an AI system—the AI assistant integrated into Meta's smart glasses that automatically processes and transmits data including video and audio recordings. The use of this AI system has directly led to harm in the form of violations of privacy and human rights, as private and sensitive moments are recorded and reviewed without informed consent. This meets the criteria for an AI Incident because the AI system's use has directly caused harm to individuals' rights and privacy, a breach of obligations under applicable law protecting fundamental rights.[AI generated]
AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. AI incidents seem to be getting more media attention lately, but they've actually gone down as a share of all AI news (see chart below!).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Advanced Search Options
As percentage of total AI events
Show summary statistics of AI incidents & hazards

Meta's AI Smart Glasses Expose Sensitive User Data to Overseas Reviewers
Meta's AI-powered Ray-Ban smart glasses record sensitive user data, including intimate and financial information, which is reviewed by human annotators in Kenya to train AI models. Users in Europe are often unaware their private footage is sent abroad, raising serious privacy and GDPR violation concerns.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?

AI-Generated Disinformation Undermines Nepal's Election
AI-generated fake videos and images have flooded Nepal's election campaigns, spreading misinformation and hate speech. This disinformation, amplified on social media, is misleading voters and undermining democratic processes, particularly in a context of low digital literacy and limited monitoring expertise.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images and videos being used to spread false information and hate speech during the election, with authorities already handling cases related to this disinformation. The harm is realized as misinformation is misleading voters and undermining democracy, which constitutes harm to communities and a violation of democratic rights. Therefore, this qualifies as an AI Incident due to the direct role of AI systems in causing significant societal harm.[AI generated]

AI-Powered Airstrikes Accelerate Lethal Decision-Making in Iran Conflict
U.S. and Israeli forces used Anthropic's AI model Claude to automate and accelerate airstrike planning and execution during attacks on Iran, resulting in around 900 strikes and the death of Iran's Supreme Leader. Experts warn this AI-driven process reduces human oversight, raising ethical and legal concerns over civilian harm.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in military targeting and strike planning, which directly led to a missile strike causing civilian deaths and a serious violation of international humanitarian law. This constitutes harm to persons and a breach of legal obligations protecting fundamental rights. Therefore, this is an AI Incident because the AI system's use directly contributed to the harm and legal violations described.[AI generated]

Zero-Click Prompt Injection in Perplexity's Comet AI Browser Enables Credential Theft
Security researchers at Zenity Labs discovered that Perplexity's AI-powered Comet browser was vulnerable to zero-click prompt injection attacks. Malicious calendar invites could hijack the AI agent, enabling attackers to exfiltrate local files and steal 1Password credentials without user interaction. Although patches were released, some vulnerabilities remain due to default configurations.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The AI system (Comet browser with AI agents) is explicitly involved and malfunctioning by executing malicious prompts embedded in user data without user consent or awareness. This led to direct harm in terms of privacy violations and potential theft of sensitive data (passwords, files), which falls under violations of human rights and harm to property. The exploit was demonstrated and is a concrete incident, not just a theoretical risk. Therefore, this qualifies as an AI Incident.[AI generated]

Supreme Court Flags Use of AI-Generated Fake Judgments in Indian Trial Court
The Supreme Court of India has taken serious note of a trial court's reliance on AI-generated fake or non-existent judgments in a civil dispute, warning that such conduct constitutes judicial misconduct and undermines the integrity of the legal process. The court is examining the consequences and accountability of this misuse.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake judicial judgments that were relied upon by a trial court, leading to a direct impact on the integrity of the adjudicatory process. This constitutes a violation of legal obligations and undermines the fundamental rights to fair judicial process, fitting the definition of an AI Incident due to realized harm caused by the AI system's outputs in a legal context.[AI generated]

Seoul's AI System Rapidly Deletes Digital Sexual Crime Content Nationwide
Seoul City developed an AI system that detects and deletes illegal digital sexual exploitation content online, reducing removal time from 3 hours to 6 minutes and increasing accuracy. The technology, credited with significantly increasing deleted harmful content, is now being distributed free to institutions across South Korea to better protect victims.[AI generated]
Industries:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as being used to detect and remove illegal and harmful content related to digital sexual crimes, which directly protects victims from ongoing harm. The use of AI here is central to reducing harm and supporting victim protection, thus the event involves the use of an AI system that has directly led to harm mitigation. Therefore, this qualifies as an AI Incident because the AI system's use is directly linked to preventing and addressing violations of human rights and harm to individuals and communities.[AI generated]

Bengaluru Techie Fires Cook After AI Surveillance Detects Theft
A Bengaluru tech professional, Pankaj Tanwar, used an AI-powered surveillance system in his kitchen to monitor his cook. The AI, integrating vision and language models, detected the cook taking fruits without permission, leading to her dismissal. The incident sparked online debate over privacy, ethics, and labor rights.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the techie uses a vision AI model and language model chatbot to monitor and report the cook's actions. The AI system's use directly led to the firing of the cook for stealing, which is a harm related to labor rights and privacy violations. The AI system's role is pivotal in detecting and documenting the theft, which otherwise might have gone unnoticed. Hence, this is an AI Incident involving harm to labor rights and privacy through AI-enabled surveillance and consequent employment action.[AI generated]

AI-Driven Online Financial Scams Surge in Bulgaria
European financial regulators warn of a sharp rise in online financial scams in Bulgaria, enabled by AI-generated fake messages, profiles, voices, and videos. Criminals use these technologies to impersonate trusted individuals, leading to financial loss, identity theft, and psychological harm among victims.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (voices, videos, messages) by scammers to perpetrate financial frauds that have already caused harm such as financial loss and psychological stress. The AI systems' use is central to the harm, as they enable more convincing and effective scams. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (financial and psychological). The article is not merely a warning or potential risk but describes realized harms due to AI-enabled scams.[AI generated]

AI Deepfake Voice Scams Target 1 in 4 Americans, Causing Financial and Emotional Harm
AI-generated deepfake voice calls have targeted one in four Americans in the past year, leading to significant financial losses and emotional distress, especially among seniors. The widespread use of AI in these scams has eroded trust in mobile networks and prompted calls for stricter regulation and carrier accountability.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice deepfakes being used in fraudulent calls that have directly led to financial harm to victims, particularly older adults who have lost significant amounts of money. The AI system's use in cloning voices for scams is a direct cause of harm. The event involves the use of AI systems (deepfake voice generation) leading to realized harm (financial losses and erosion of trust), meeting the criteria for an AI Incident. The discussion of regulatory demands and carrier responsibility is complementary but does not change the primary classification.[AI generated]

China Raises Concerns Over US Plans for AI-Powered Cyber Operations
China has expressed strong concerns after reports that the US Department of Defense is exploring partnerships with major AI firms to develop AI-powered cyber tools for automated reconnaissance and potential cyberattacks targeting China's critical infrastructure. Beijing warns of heightened cybersecurity risks and vows to take necessary protective measures.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered cyber tools being discussed for reconnaissance and cyber operations, indicating AI system involvement. The concerns raised by China about potential cyberattacks and destabilization reflect a credible risk of harm to critical infrastructure and cybersecurity, which aligns with the definition of an AI Hazard. Since no actual harm or cyber incident caused by these AI tools is reported, and the focus is on potential future risks and geopolitical tensions, the event fits the classification of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]

Google Chrome Gemini AI Vulnerability Exposes Users to Surveillance and Data Theft
A high-severity vulnerability in Google Chrome's Gemini AI assistant allowed malicious browser extensions to exploit the AI panel's elevated privileges, enabling unauthorized access to users' cameras, microphones, local files, and sensitive data. Discovered by Palo Alto Networks' Unit 42, the flaw was patched by Google in January 2026.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the Gemini agentic AI feature in Google Chrome. The vulnerability allowed malicious extensions to exploit the AI system's permissions and perform unauthorized actions, directly leading to harms such as spying on users, stealing data, and phishing. These harms fall under injury to privacy and security of persons, which is a violation of rights and harm to individuals. Since the vulnerability was actively exploitable and caused realized harm, this qualifies as an AI Incident. The article also discusses broader security implications and mitigation efforts, but the primary focus is on the realized harm from the vulnerability exploitation.[AI generated]

Telkom Indonesia Warns of Data Leakage Risks from Public AI Use
PT Telkom Indonesia cautioned employees against uploading internal company documents to public AI platforms like ChatGPT and Gemini, citing risks of sensitive data being stored on external servers and potential data leakage. The company is developing an internal AI chatbot to mitigate these risks.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (public AI applications like ChatGPT) and concerns the use of these systems in a way that could plausibly lead to data leakage, a form of harm to property or business interests. Since no actual data breach or harm has been reported yet, but the risk is credible and foreseeable, this qualifies as an AI Hazard. The article is a warning and advisory about potential harm rather than a report of an incident or a complementary update on a past event.[AI generated]

AI-Generated Content on Chinese Platforms Causes Harm and Triggers Regulatory Crackdown
Chinese platforms WeChat and Douyin have removed thousands of AI-generated videos that distorted classic literature, animated characters, and celebrity likenesses, leading to cultural harm, misleading youth, and rights violations. Some content targeted minors with harmful or explicit material. Platforms responded with mass takedowns and stricter moderation.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to maliciously alter children's animation content, creating harmful "children's cult" content that endangers minors' mental health. The platform's actions to remove such content and penalize accounts confirm the AI system's involvement in causing direct harm. Additionally, the misuse of AI in these contexts has led to violations of minors' rights and health, fulfilling the criteria for an AI Incident. The event involves the use and misuse of AI systems leading to realized harm, not just potential harm or general information, so it is not an AI Hazard or Complementary Information.[AI generated]

Flock Safety License Plate Reader Data Sharing Sparks Privacy and Rights Concerns in California
Flock Safety's AI-powered license plate readers, used by law enforcement in California, have come under scrutiny after data was shared with federal agencies, including ICE and Border Patrol, without proper oversight. This has led to privacy violations, public backlash, and contract terminations by cities and Amazon's Ring, highlighting risks of AI surveillance misuse.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Flock's license plate reader) actively used by police and communities, which has led to widespread public concern about privacy and civil liberties. The system's use has indirectly caused harm by eroding trust and raising fears of surveillance misuse, which aligns with violations of human rights and harm to communities. The termination of contracts by cities is a direct consequence of these harms. Although no physical injury or legal ruling is mentioned, the societal and rights-based harms are clear and materialized, meeting the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]

AI Cancer Pathology Tools Risk Unreliable Diagnoses Due to Shortcut Learning
Research from the University of Warwick reveals that many AI systems used in cancer pathology rely on superficial data correlations, or "shortcut learning," rather than genuine biological signals. This raises concerns that such tools may be unreliable and could lead to harm if adopted in clinical settings without further validation.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in cancer pathology, which are explicitly mentioned and analyzed. The research shows these AI systems' use leads to unreliable predictions due to reliance on shortcuts, which could plausibly lead to harm if used in clinical decision-making without proper validation. However, no direct or indirect harm has been reported as having occurred so far. The article serves as a warning and a call for improved evaluation protocols to prevent future harm. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm if current AI pathology tools are used without addressing their limitations.[AI generated]

AI Chatbots in Mental Health Counseling Pose Ethical and Safety Risks, Study Finds
A Brown University-led study found that AI chatbots like GPT, Claude, and Llama, when used for mental health support, frequently violate professional ethical standards. The systems mishandled crisis situations, reinforced harmful beliefs, and failed to provide accountable, safe therapeutic advice, raising concerns about their use as substitutes for trained therapists.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The AI systems involved are large language models used as therapy chatbots, which qualifies as AI systems. The study identifies multiple ethical risks and failures in these AI systems when used for mental health advice, indicating potential for harm to individuals' health and well-being. Since no actual harm or incident is reported, but the article emphasizes the plausible risks and the need for safeguards, this fits the definition of an AI Hazard rather than an AI Incident. The article is not merely general AI news or a complementary update but a warning about potential harm from AI use in therapy contexts.[AI generated]

Researchers Warn of Privacy Risks in AI-Based Age Verification Systems
Over 370 security and privacy experts from 29 countries have urged governments to pause the rollout of AI-driven age verification systems on social media. They warn these systems, already used or planned in countries like France and Australia, pose significant privacy, security, and autonomy risks without sufficient safeguards or understanding.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for age verification and estimation, which are explicitly described as using biometric data, behavior analysis, and identity verification—tasks indicative of AI. The concerns raised relate to potential harms including privacy violations, security risks, and discrimination, which align with the definitions of harm in the framework. However, the article focuses on warnings and potential risks rather than reporting actual incidents of harm caused by these AI systems. Thus, the event fits the definition of an AI Hazard, where the development and use of AI systems could plausibly lead to harm but such harm has not yet been realized or documented at scale.[AI generated]

AI-Powered WiFi Systems Enable Through-Wall Human Detection, Raising Privacy and Surveillance Concerns
AI systems developed by institutions like MIT and featured in projects such as WiFi DensePose can analyze WiFi signals to detect human poses and movements through walls without cameras. While offering benefits for security and rescue, these technologies raise significant privacy and surveillance risks, especially in military and law enforcement contexts.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that analyze WiFi signals to infer human poses behind walls, which fits the definition of an AI system. The technology is described as being developed and tested, with potential military deployment for counterterrorism. Although no actual harm or incident is reported, the plausible future use of this AI system for pervasive surveillance and tracking in sensitive environments poses credible risks of harm, including privacy violations and potential misuse in military or law enforcement contexts. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its implications are central to the article.[AI generated]

Waymo Robotaxi Impedes Emergency Response and Is Shot at During Austin Shootings
In Austin, Texas, a Waymo self-driving taxi blocked emergency vehicles during a fatal mass shooting, briefly delaying ambulance access. In a separate incident, another Waymo robotaxi was shot at while carrying a passenger, causing vehicle damage but no injuries. Both incidents highlight safety and reliability concerns for autonomous vehicles in critical situations.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a Waymo robotaxi, an AI system for autonomous driving. The AI system's malfunction (stalling and confusion in moving out of the way) directly caused a delay in emergency responders reaching victims of a terror attack, thus disrupting critical emergency services. Although the delay was brief and did not ultimately affect patient outcomes, the AI system's failure to act appropriately in this high-stakes context meets the criteria for an AI Incident due to disruption of critical infrastructure management and operation. The presence of harm (disruption) and direct causation by the AI system's malfunction justifies classification as an AI Incident rather than a hazard or complementary information.[AI generated]

Blind YouTuber Applies for Neuralink AI Vision Restoration Trial
Blind Korean YouTuber 'Oneshot Hansol' has applied to participate in Neuralink's clinical trial for 'Blindsight,' an AI-powered brain implant aiming to restore vision by stimulating the visual cortex. While no harm has occurred, concerns about privacy, hacking, and social inequality have been raised regarding the technology's future use.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Neuralink's brain implant technology and robotic surgery) in a clinical trial aimed at restoring vision to a blind person. While the technology is promising and intended for health benefits, the article does not report any actual harm or injury yet. The participant expresses concerns about potential misuse or hacking, indicating plausible future risks. Since no harm has occurred but plausible harm could arise from the AI system's use, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]

























