The article explicitly mentions AI in the context of military applications and the Pope's concern about its role in escalating conflicts and causing a 'spiral of annihilation.' Although no actual incident of harm caused by AI is described, the Pope's speech serves as a warning about the plausible future harms of AI-directed warfare. This fits the definition of an AI Hazard, where the development and use of AI systems could plausibly lead to significant harm. There is no indication of a realized AI Incident or complementary information about responses or updates, nor is the article unrelated to AI.[AI generated]
AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help to show risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. AI incidents seem to be getting more media attention lately, but they've actually gone down as a share of all AI news (see chart below!).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Advanced Search Options
As percentage of total AI events
Show summary statistics of AI incidents & hazards
Pope Leo XIV Warns Against AI-Directed Warfare and Calls for Ethical Oversight
Pope Leo XIV, during a speech at Rome's La Sapienza University, warned that investments in AI-driven weaponry risk plunging humanity into a "spiral of annihilation." He urged vigilance and ethical oversight of AI in warfare, emphasizing the need for peace and responsible technology use amid ongoing global conflicts.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?

Anthropic's Mythos AI Uncovers Critical macOS Security Vulnerabilities
Security researchers at Calif used Anthropic's Mythos AI model to discover two previously unknown vulnerabilities in Apple's macOS, enabling a privilege escalation exploit that could bypass memory integrity enforcement and allow unauthorized system access. Apple is reviewing the findings and preparing patches to address the risk.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The AI system (Anthropic Mythos) was used in the development of a working exploit that bypasses Apple's M5 security protections, directly leading to a security breach capability. This constitutes a direct link between AI use and a harm scenario involving disruption of critical infrastructure (Apple's hardware security). Although the exploit was responsibly disclosed to Apple, the fact that the AI system enabled the rapid creation of such a powerful exploit represents an AI Incident due to the realized harm potential and actual compromise of security protections. The event is not merely a potential hazard or complementary information, but a concrete case where AI was instrumental in creating a harmful exploit.[AI generated]

Italian Woman Uses AI-Generated Images to Commit Funeral Fraud
In Northern Italy, a woman used AI-generated images to fabricate the death of her pregnant daughter, deceiving a former colleague and obtaining money under false pretenses. The AI-created funeral photos made the story more convincing, leading to financial harm before the fraud was uncovered by relatives.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fabricated images (photos of a funeral) to perpetrate a scam, which directly caused financial harm to the victim. The AI's role in creating convincing fake content was pivotal in enabling the deception and resulting harm. Therefore, this qualifies as an AI Incident due to realized harm (financial fraud) caused by AI-generated content.[AI generated]

Japanese Newspapers Sue Perplexity AI for Unauthorized Article Use
Asahi Shimbun and Nikkei sued US AI firm Perplexity in Tokyo District Court, alleging its generative AI service repeatedly used and reproduced their articles without permission, violating copyright and damaging their reputation. The newspapers seek damages and an injunction, while Perplexity disputes the claims. Similar lawsuits are ongoing.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a generative AI search service) that has been used to reproduce and summarize copyrighted articles without permission, leading to alleged copyright infringement and reputational harm. These constitute violations of intellectual property rights and harm to business interests, which are recognized harms under the AI Incident definition. The involvement of the AI system in generating summaries and accessing content without authorization directly links it to the harm. The ongoing lawsuits and claims of damages further confirm that harm has materialized. Thus, this is an AI Incident rather than a hazard or complementary information.[AI generated]

US and China Discuss AI Controls to Prevent Cyberattack Risks
US Treasury Secretary Scott Bisent announced that the US and China are negotiating protocols to regulate AI use, aiming to prevent its misuse in cyberattacks. Both countries share concerns about non-governmental actors accessing advanced AI models, but emphasize not stifling innovation. Talks occurred during President Trump's visit to China.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Why's our monitor labelling this an incident or hazard?
The event involves AI systems and their potential misuse (e.g., facilitating cyberattacks), but no actual harm or incident has occurred. The article discusses international cooperation to set safeguards and protocols to prevent misuse, which is a governance and risk mitigation effort. Therefore, this is an AI Hazard as it concerns plausible future harm from AI systems and efforts to prevent it, rather than an AI Incident or Complementary Information about a past event.[AI generated]

OpenAI Faces Lawsuit Over ChatGPT Data Sharing With Meta and Google
OpenAI is facing a class-action lawsuit in California alleging it embedded Meta's Facebook Pixel and Google Analytics in ChatGPT, resulting in users' sensitive queries and personal data being shared with Meta and Google without consent. The suit claims this violates U.S. and California privacy laws.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (OpenAI's chatbot) that processes personal user data. The lawsuit alleges that the AI system's use has directly led to violations of privacy laws and unauthorized sharing of intimate personal information, constituting harm to users' rights. This meets the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law protecting fundamental rights (privacy). The harm is realized, not just potential, as the lawsuit is filed based on actual data sharing practices. Hence, the classification is AI Incident.[AI generated]

Italian Parents Sue Meta and TikTok After AI Algorithms Linked to Child Suicide
In Italy, the parents of a 12-year-old girl who died by suicide in February 2024, supported by other families and advocacy groups, have filed a civil lawsuit against Meta and TikTok. They allege that AI-driven recommendation algorithms repeatedly exposed minors to harmful content, contributing to mental health deterioration and suicide, and demand urgent action on age verification.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of recommendation algorithms that maximize user engagement by tailoring content, which in this case has been linked to psychological harm and a fatality of a minor. The class action alleges that these AI-driven systems have caused harm to health (mental health and suicide) indirectly by promoting harmful content to vulnerable users. The involvement of AI in causing harm is clear and direct enough to classify this as an AI Incident. The legal action and demands for suspension and reform further confirm the recognition of harm caused by AI systems in use.[AI generated]

Bucharest Approves AI-Powered Smart Traffic Light System
Bucharest's city council has approved the implementation of an AI-driven smart traffic light system, involving 305 cameras and 1,500 sensors across 92 intersections. The system aims to autonomously manage traffic flow and reduce congestion. While no harm has occurred, future risks exist if the AI system malfunctions.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
While the system involves an AI system that will make autonomous decisions affecting traffic management, the article does not report any realized harm or incidents resulting from its deployment or malfunction. The description focuses on the planned deployment and expected benefits, without mentioning any direct or indirect harm or risks that have materialized. Therefore, this event represents a potential future impact scenario where AI could plausibly lead to harm (e.g., if the system malfunctions or causes traffic disruptions), but no harm has yet occurred or been reported. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
Singapore Businessman Scammed via Deepfake Impersonation of Government Officials
A Singapore businessman lost at least S$4.9 million after scammers used deepfake AI technology to impersonate senior government officials, including Prime Minister Lawrence Wong, in a Zoom call. The AI-generated impersonations convinced the victim to transfer funds, highlighting the risks of AI-enabled fraud.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create realistic impersonations of government officials, which directly led to significant financial harm to the victim. The AI system's use in the scam is central to the incident, fulfilling the criteria for an AI Incident as it caused harm to a person (financial loss) through malicious use of AI-generated content.[AI generated]
AI-Driven Gig Platforms Cause Global Labor Rights Violations
Human Rights Watch reports that gig workers in nine countries face labor rights abuses, unsafe conditions, and economic harm due to AI-driven algorithmic management by platform companies. These systems control pay, task assignments, and account status, leading to exploitation and lack of protections for workers.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how platform companies use algorithmic systems to control gig workers' pay, task assignments, and account status, which leads to labor rights violations, unsafe conditions, and economic harm. These harms fall under violations of human rights and labor rights, as well as harm to communities (workers). The AI systems' use is central to these harms, making this an AI Incident. The article does not merely warn of potential harm but documents ongoing harm experienced by workers due to AI-driven platform management.[AI generated]

AI-Driven Cyberattacks Surge in Argentina
Argentina has seen a 15% rise in cyberattacks compared to 2025, driven by increased use of generative AI tools and automation by malicious actors. These AI-enabled attacks, including ransomware, have led to greater exposure of sensitive information and operational disruptions for organizations across sectors.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and AI-driven automation by malicious actors in cyberattacks, including ransomware, which have caused realized harm such as data breaches and operational disruptions. The involvement of AI systems in the development and use of these attacks directly leads to harm to organizations and communities. Hence, this qualifies as an AI Incident under the framework, as the AI systems' use has directly led to significant harms.[AI generated]

AI Systems Accelerate Cybersecurity Risks and Real-World Incidents
AI models such as Microsoft's MDASH, Anthropic's Mythos, and OpenAI's GPT-5.5 are rapidly advancing in autonomously finding and exploiting software vulnerabilities, leading to both the discovery of new security flaws and increased risks of AI-enabled cyberattacks. Authorities and experts warn of urgent threats to critical infrastructure, especially in Europe.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Mythos) and discusses its development and use in cybersecurity tasks. However, it does not report any direct or indirect harm resulting from the AI's deployment or malfunction. The focus is on capability improvements and potential implications, which aligns with a plausible future risk rather than an actual incident. Therefore, this event fits the definition of an AI Hazard, as the AI system's rapid advancement in cybersecurity tasks could plausibly lead to incidents in the future, but no harm has yet occurred or been reported.[AI generated]

AI-Generated Fake Content Used to Blackmail Turkish Celebrity
Turkish entertainer Mehmet Ali Erbil was targeted by unidentified individuals who used AI-generated manipulated images to blackmail him for money. After refusing their demands, Erbil faced reputational attacks and has initiated legal action. The incident highlights the misuse of AI for extortion and reputational harm in Turkey.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-generated manipulated images for blackmail, which is a direct misuse of an AI system leading to harm (reputational damage and extortion attempts). The involvement of AI in creating fake content that causes harm to a person fits the definition of an AI Incident under violations of rights and harm to communities or individuals. The harm is realized (blackmail attempt and reputational damage), not just potential, so it is not merely a hazard or complementary information.[AI generated]

Tech Giants Sued for Using Voiceprints to Train AI Without Consent
Award-winning journalists, podcasters, and audiobook narrators sued Nvidia, Google, Microsoft, Amazon, Apple, and Meta, alleging their voices were used without consent to train AI voice models. The lawsuits, filed in Illinois, claim violations of the Biometric Information Privacy Act, citing unauthorized collection and commercial exploitation of biometric voiceprints.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (voice AI models like Google Assistant and Gemini Live) trained on voice recordings allegedly obtained without consent. The plaintiffs claim violations of biometric data privacy and publicity rights, which are breaches of legal protections. The misuse of voice data for AI training is a direct cause of harm to the plaintiffs' rights. Hence, this is an AI Incident involving harm (violation of rights) caused by the development and use of AI systems.[AI generated]

ChatGPT Use Drives Grade Inflation in Texas University Courses
A University of California, Berkeley study found that after ChatGPT's late-2022 release, courses at a large Texas university with writing and coding assignments saw a 30% surge in A grades. The AI-assisted grade inflation undermines academic integrity and raises concerns for employers relying on GPAs.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by students to complete assignments, leading to grade inflation. This is an indirect effect of AI use impacting the reliability of academic grading and employer evaluation processes. While this does not cause direct physical harm or legal violations, it represents a significant, clearly articulated harm to the integrity of educational assessment and labor market evaluation. Therefore, it qualifies as an AI Incident due to the realized harm to societal trust and fairness in academic and employment contexts.[AI generated]

South Korea Invests in AI-Driven Autonomous Shipyards and Vessels
The South Korean government announced a major investment—over 1 trillion KRW by 2030—in developing AI-powered autonomous shipyards and vessels. The initiative aims to automate shipbuilding processes and advance autonomous navigation, raising potential future AI-related risks but reporting no current incidents or harm.[AI generated]
AI principles:
Industries:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the autonomous AI-operated shipyard) and its planned use to automate shipbuilding processes. While the AI system is central to the project, the article does not report any actual harm, malfunction, or incident caused by the AI system. Instead, it describes a large-scale investment and strategic plan for future AI deployment. According to the definitions, this fits the category of an AI Hazard, as the AI system's development and use could plausibly lead to incidents or harms in the future, but no current harm is reported. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated since AI is central to the project.[AI generated]

Lawyers Fined for Attempting to Manipulate Judicial AI System in Pará
Two lawyers in Pará, Brazil, were fined for using prompt injection—hidden instructions in legal documents—to manipulate the Galileu AI system used by the labor court. The concealed commands aimed to influence judicial decisions, undermining the integrity of the legal process. The court detected and penalized the misconduct.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The AI system of the Tribunal Regional do Trabalho was intentionally misled by the insertion of a hidden command designed to manipulate its output, which is a misuse of the AI system in a legal context. This manipulation led to a judicial response including fines and official condemnation, indicating that harm to the legal process and rights has occurred. Therefore, this qualifies as an AI Incident because the AI system's use was directly involved in causing harm related to legal rights and the justice system's integrity.[AI generated]

Anduril's $5B Funding Fuels Expansion of AI-Driven Autonomous Weapons
US defense tech firm Anduril Industries raised $5 billion, doubling its valuation to $61 billion. The funding will expand production of AI-powered autonomous weapons, drones, and battlefield management systems, heightening concerns over the potential risks and hazards of deploying advanced AI in military applications.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-backed autonomous weapons and systems developed and deployed by Anduril, indicating the presence of AI systems. Although no direct harm or incident is reported, the nature of these AI systems—autonomous military weapons—carries a credible risk of causing injury, disruption, or other harms if used in conflict or malfunctioning. The event focuses on the company's funding and expansion, which increases the scale and potential impact of these AI systems. Hence, it fits the definition of an AI Hazard, as the development and proliferation of AI-enabled autonomous weapons plausibly could lead to AI Incidents in the future.[AI generated]

Meta's AI Smart Glasses Spark Privacy Violations and Legal Action
Meta's AI-powered Ray-Ban smart glasses have led to widespread privacy violations, with users secretly recording individuals—often women—without consent and sharing videos online. Some videos are used for AI training, exposing workers to graphic content. Lawsuits have been filed over unauthorized data sharing and privacy breaches.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems embedded in smart glasses with cameras and AI features. The use of these glasses has directly led to violations of privacy rights and harms to individuals, including secret recordings and sharing of videos without consent, which are breaches of fundamental rights. The lawsuits and public backlash confirm that harm has materialized. The AI system's development and use are central to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]

Japanese Megabanks to Access Anthropic's Mythos AI, Raising Cybersecurity Concerns
Japan's three largest banks—MUFG, Mizuho, and Sumitomo Mitsui—are set to gain access to Anthropic's advanced Mythos AI system for cybersecurity. While intended to enhance cyber defense, experts and regulators warn that Mythos's powerful vulnerability detection could accelerate cyber threats if misused, highlighting potential future risks.[AI generated]
AI principles:
Industries:
Affected stakeholders:
Harm types:
Severity:
Business function:
Autonomy level:
AI system task:
Why's our monitor labelling this an incident or hazard?
The Mythos AI system is explicitly mentioned and is used for cybersecurity analysis, which involves AI system use. The article does not report any realized harm but emphasizes fears that the AI could accelerate cyber threats if misused. This constitutes a plausible future risk of harm to critical infrastructure (financial institutions) and potentially to communities or property through cyberattacks. Therefore, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as the main focus is on potential harm rather than realized harm or responses to past incidents.[AI generated]

























