AIM: AI Incidents and Hazards Monitor
Automated monitor of incidents and hazards from public sources (Beta).
AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and stakeholders worldwide gain insight into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and establish a collective understanding of AI incidents and hazards in their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents are attracting more media attention, they have declined as a share of total AI news coverage (see chart below).
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
[Chart: AI incidents and hazards as a percentage of total AI events]

Global Financial Leaders Warn of AI Risks to Financial Systems from Anthropic's Mythos
Bank of Canada Governor Tiff Macklem and international financial officials have raised concerns about the potential risks posed by Anthropic's upcoming AI model, Mythos, which can rapidly detect cybersecurity vulnerabilities. Discussions among regulators and banks highlight fears that such AI advances could disrupt global financial systems if not properly managed.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Mythos and its capabilities, highlighting concerns about its disruptive potential and risks to financial systems. However, it does not report any realized harm or incident caused by the AI system; the focus is on potential risks and the need for stakeholders to understand and manage them. The event therefore fits the definition of an AI Hazard: it plausibly could lead to harm, but no harm has yet occurred.[AI generated]

Zoom Partners with World to Combat Deepfake Fraud in Video Meetings
Zoom has partnered with World, Sam Altman's biometric identity company, to verify meeting participants are human and not AI-generated deepfakes. This move follows major financial losses, including a $25 million fraud at Arup in Hong Kong, caused by deepfake-enabled video call scams targeting businesses globally.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (World's Deep Face biometric verification technology) used to counteract AI-generated deepfake fraud, which has already caused substantial financial harm to companies. The AI system's deployment is a direct response to these harms, indicating the AI system's involvement in the use phase to prevent further incidents. The harms described (financial losses due to deepfake fraud) are materialized and significant, fulfilling the criteria for an AI Incident. Although the article also discusses regulatory and privacy issues, these are complementary concerns and do not overshadow the primary fact that the AI system is involved in addressing an ongoing AI-related harm. Hence, the event is best classified as an AI Incident.[AI generated]

Cal.com Closes Source Code Due to AI-Driven Security Threats
Cal.com, a major open-source scheduling platform, has closed its source code and switched to a proprietary license, citing the growing threat of AI systems like Claude Mythos that can rapidly identify and exploit software vulnerabilities. This move highlights rising security concerns about AI's impact on open-source software.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems that scan open source code to identify vulnerabilities and generate exploits rapidly, which is a clear AI system involvement. The event stems from the use of AI in security analysis, leading to a strategic decision to close source code to mitigate risks. While no actual harm has yet occurred, the concern is that AI's capabilities could plausibly lead to incidents of exploitation and security breaches, which would harm property and organizations. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct harm is reported yet. The event is not Complementary Information because it is not an update or response to a past incident but a new development highlighting potential risks. It is not an AI Incident because no realized harm is described. It is not Unrelated because AI systems are central to the issue.[AI generated]
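The rationales throughout this listing apply the same four-way taxonomy (AI Incident, AI Hazard, Complementary Information, Unrelated). A minimal sketch of that decision logic, using hypothetical field names rather than the monitor's actual implementation, might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class Label(Enum):
    AI_INCIDENT = "AI Incident"                   # realized harm linked to an AI system
    AI_HAZARD = "AI Hazard"                       # plausible but not-yet-realized harm
    COMPLEMENTARY = "Complementary Information"   # update/response to a past incident
    UNRELATED = "Unrelated"                       # no meaningful AI system involvement

@dataclass
class Event:
    ai_system_involved: bool      # is an AI system central to the event?
    harm_occurred: bool           # has harm already materialized?
    harm_plausible: bool          # could the event plausibly lead to harm?
    updates_past_incident: bool   # is it an update/response to a prior incident?

def classify(e: Event) -> Label:
    if not e.ai_system_involved:
        return Label.UNRELATED
    if e.harm_occurred:
        return Label.AI_INCIDENT
    if e.updates_past_incident:
        return Label.COMPLEMENTARY
    if e.harm_plausible:
        return Label.AI_HAZARD
    return Label.UNRELATED

# The Cal.com entry above: AI central, no realized harm, plausible harm, not a follow-up.
print(classify(Event(True, False, True, False)))  # Label.AI_HAZARD
```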

Google Negotiates Pentagon Deal for Gemini AI with Safeguards
Google is in advanced talks with the U.S. Department of Defense to deploy its Gemini AI models in classified military settings. The company is pushing for contract terms to prevent misuse, specifically banning domestic mass surveillance and fully autonomous weapons without human oversight. No actual deployment or harm has occurred yet.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems (Google's Gemini models) in sensitive and potentially high-risk applications (defense and surveillance). However, the article describes negotiations and proposed safeguards rather than any realized harm or malfunction. Therefore, it represents a plausible future risk scenario (AI Hazard) rather than an incident or complementary information. The potential for misuse in military or surveillance contexts aligns with the definition of an AI Hazard due to credible risks of harm if controls fail or are circumvented.[AI generated]

AI Chatbots Defy Brazil Election Rules, Spread Misinformation
Despite Brazil's electoral court banning AI chatbots from offering voting advice, leading chatbots like ChatGPT, Grok, and Gemini continue to provide candidate rankings and opinions. This defiance risks spreading biased and inaccurate political information, potentially contaminating the upcoming presidential election and undermining democratic integrity.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots) whose use has directly led to the spread of biased and incorrect political information during an election, which is a harm to communities and democratic processes. The chatbots' outputs influence voter perceptions and decisions, fulfilling the criteria for harm under the AI Incident definition. The electoral court's ban and concerns about enforcement highlight the misuse of AI in this context. Therefore, this is classified as an AI Incident rather than a hazard or complementary information, as harm is occurring through misinformation dissemination by AI chatbots.[AI generated]

AI-Generated Disinformation Threatens Democracies, Study Finds
A study by Agência Lupa, analyzing 1,294 professional fact-checks in over ten languages, found that 81.2% of AI-driven disinformation cases emerged in the past two years. AI-generated deepfakes and misinformation, especially on elections and conflicts, are rapidly spreading, undermining public trust and threatening democratic processes globally.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate and disseminate false information, including deepfakes and AI-generated images and texts. The harm is realized and ongoing, as the disinformation affects political processes and public trust, which constitutes harm to communities and a violation of rights to accurate information. The article provides concrete data on the increase in AI-generated fake news and its strategic use in political manipulation, fulfilling the criteria for an AI Incident. It is not merely a potential risk or a complementary update but a documented case of AI-driven harm.[AI generated]

Punjab Government Partners with IIT Ropar to Deploy AI for Crime Control
The Punjab government has partnered with IIT Ropar to develop and deploy AI-driven systems for crime prevention and targeting organized crime. The initiative includes creating structured criminal databases, real-time tracking, and intelligence-led policing, aiming to dismantle gangster networks and enhance public safety in Punjab.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the project centers on AI-powered software for crime prevention. The use of AI for real-time tracking and predictive modeling of criminal activity directly relates to the use of AI systems. While the article does not report any realized harm or incidents caused by the AI system, the deployment of such a system in policing could plausibly lead to harms such as violations of human rights (e.g., privacy infringements, potential misuse or bias in policing). Therefore, this event represents a plausible risk of harm stemming from the AI system's use, qualifying it as an AI Hazard rather than an Incident or Complementary Information.[AI generated]

Bank of England Stress-Tests AI Risks to UK Financial Stability
The Bank of England, responding to parliamentary concerns, is conducting scenario analyses and stress tests to assess potential risks from AI in financial markets, such as herding behavior and cybersecurity threats. No harm has occurred yet, but regulators are proactively addressing plausible future AI-related financial system risks in the UK.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article describes ongoing efforts by the Bank of England to understand and test AI-related risks to the financial system, including potential systemic risks from AI-driven trading behaviors and cybersecurity threats. While no direct harm or incident has occurred, the focus is on plausible future harms that AI could cause, such as market disruptions or exploitation of vulnerabilities. Therefore, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems and their risks are central to the discussion.[AI generated]

US Labor Leaders Warn of AI's Potential Threat to Jobs and Society
US Senator Bernie Sanders, UAW President Shawn Fain, and other labor leaders publicly warned that artificial intelligence could threaten American jobs, worker safety, and economic stability. They called for regulatory safeguards and a moratorium on AI data centers, highlighting concerns about job loss and societal impact if AI is not properly managed.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of their potential economic and social impact, specifically the plausible risk of widespread job displacement. No actual harm or incident caused by AI is reported; rather, the article centers on warnings and advocacy for safeguards. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm (job losses and economic disruption) but no incident has yet occurred. It is not Complementary Information since it is not updating or responding to a past incident, nor is it unrelated as it directly concerns AI's societal risks.[AI generated]

AI-Generated Deepfake Video Falsely Portrays Indian Finance Minister Endorsing Fraudulent Scheme
An AI-generated deepfake video falsely depicting Indian Finance Minister Nirmala Sitharaman endorsing a high-return investment scheme circulated online, misleading the public and risking financial harm. The Indian government's fact-checking unit debunked the video, warning citizens against falling for such AI-driven misinformation.[AI generated]
Why's our monitor labelling this an incident or hazard?
The viral video is explicitly described as AI-generated, indicating the involvement of an AI system in creating misleading content. The misinformation falsely claims a high-return investment scheme, which can lead to financial harm to individuals who might be deceived. The fact that the government had to intervene to debunk the video shows that harm is occurring or is imminent. Therefore, the event meets the criteria for an AI Incident due to the AI system's role in generating harmful misinformation that affects the public.[AI generated]

South Korea Launches AI-Based Space Situational Awareness System Development
South Korea's Aerospace Administration has initiated the development of the K-SSA, a national space situational awareness system using AI and machine learning to predict and monitor space object collisions. The project aims to enhance space safety and asset protection, with two surveillance satellites planned for launch by 2029.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI/ML-based algorithms for space object orbit determination and risk analysis, which qualifies as an AI system. The event concerns the development and planned deployment of these systems to enhance space situational awareness and safety. No current harm or violation is reported; the AI is intended to predict and prevent collisions involving space objects. However, because the system will operate in a safety-critical domain, a malfunction or inaccurate prediction could plausibly lead to harm to national space assets or public safety, which fits the definition of an AI Hazard. It is not Complementary Information because the article covers the initiation of the project and its potential impact, not an update or response to a past incident.[AI generated]
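The article does not describe K-SSA's algorithms. Purely as a rough illustration of the kind of screening computation a collision-monitoring system performs, here is a closest-approach check under the simplifying assumption of straight-line relative motion over a short encounter window; all names, values and thresholds are illustrative:

```python
import numpy as np

def closest_approach(r1, v1, r2, v2):
    """Time (s) and distance (km) of closest approach for two objects,
    assuming constant velocities over the encounter window (a common
    short-window screening simplification). Positions km, velocities km/s."""
    dr = np.asarray(r2, float) - np.asarray(r1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    denom = np.dot(dv, dv)
    if denom == 0.0:                       # co-moving objects: separation constant
        return 0.0, float(np.linalg.norm(dr))
    t = max(-np.dot(dr, dv) / denom, 0.0)  # minimize |dr + dv*t|, future only
    return t, float(np.linalg.norm(dr + dv * t))

# Illustrative screening rule: flag encounters closer than 5 km.
t, d = closest_approach([7000, 0, 0], [0, 7.5, 0], [7003, -30, 0], [0, 7.6, 0])
if d < 5.0:
    print(f"conjunction alert: miss distance {d:.2f} km in {t:.1f} s")
```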

Smart Locks' Facial Recognition Vulnerabilities Exposed in China
Consumer associations in Beijing, Tianjin, and Hebei tested 30 smart lock models and found that three facial recognition locks could be easily unlocked with photos, revealing serious AI anti-spoofing flaws. Additional risks include unencrypted data transmission and easily copied IC cards, posing threats to property and privacy.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of facial recognition technology in smart locks. The malfunction or inadequacy of the AI system's liveness detection and anti-spoofing features has directly led to security vulnerabilities that allow unauthorized access (harm to property and privacy). The article describes actual security incidents (successful unlocking with photos) and risks of data interception, constituting realized harms. Therefore, this qualifies as an AI Incident due to the direct link between AI system malfunction and harm.[AI generated]

European Banking Authority Warns of AI-Driven Cybersecurity Risks to Banks
François-Louis Michaud, the new president of the European Banking Authority, warned that while European banks are currently resilient, they must prepare for emerging cybersecurity threats posed by artificial intelligence. Regulators are prioritizing stress tests and risk assessments to address potential AI-driven cyberattacks on the banking sector.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Mythos by Anthropic) that could launch complex cyberattacks against the banking sector, which is a credible potential threat. However, no actual AI-driven cyberattack or harm has occurred yet. The focus is on regulatory awareness, risk assessment, and preparedness, which fits the definition of an AI Hazard rather than an AI Incident. It is not merely general AI news or product announcement, as it concerns cybersecurity risks from AI with potential significant impact on critical infrastructure (banks). It is not Complementary Information because it does not update or respond to a past AI Incident but rather highlights a new potential risk. Hence, the classification is AI Hazard.[AI generated]

Anthropic Limits AI Cybersecurity Capabilities Amid U.S. Government Concerns
Anthropic's advanced AI model Mythos raised cybersecurity concerns due to its ability to find critical software bugs. In response, the U.S. government is considering protective measures for its use, and Anthropic released Opus 4.7 with intentionally reduced cybersecurity features to mitigate misuse risks.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Mythos) with advanced capabilities in cybersecurity, including finding critical software bugs that could be exploited maliciously. The U.S. government's cautious approach and protective measures indicate awareness of potential risks. No actual harm or incident has been reported yet, but the potential for misuse leading to harm to critical infrastructure or data security is credible and significant. Hence, this is an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving disruption or harm, but such harm has not yet materialized.[AI generated]

Google Uses Gemini AI to Block Billions of Malicious Ads
Google deployed its Gemini AI system to block approximately 8.2 billion online ads in 2023 that violated company policies, including those generated by malicious actors using generative AI. The system intercepted over 99% of harmful ads before reaching users, significantly reducing exposure to deceptive and dangerous content.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Gemini) to detect and block malicious ads, including ads generated with AI for deceptive purposes. The large-scale creation of deceptive, AI-generated ads by malicious actors constitutes realized, ongoing harm to individuals and communities, and the AI system's role is pivotal in intercepting that content before it reaches users. Because the event involves materialized AI-enabled harm and its real-world mitigation, rather than a merely potential risk, it qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]

Android Facial Recognition Flaw Allows Unauthorized Access via Photos
Consumer group Which? found that 64% of Android smartphones tested since 2022 can be unlocked using a printed photo, exposing a major security flaw in AI-based facial recognition systems. This vulnerability affects flagship models and risks user privacy and data security in the UK.[AI generated]
Why's our monitor labelling this an incident or hazard?
The facial recognition systems are AI systems that infer from biometric input to authenticate users. The article documents that 21 phone models' facial recognition can be spoofed by simple printed photos, which directly compromises user security and privacy. This is a realized harm scenario, not just a potential hazard, as it enables unauthorized access to personal data and accounts. The lack of adequate warnings exacerbates the harm. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction in security authentication.[AI generated]

Anthropic's Mythos AI Raises Security Concerns for US Financial Database
The American Securities Association warned that Anthropic's new AI model, Mythos, could enable bad actors to exploit the SEC's Consolidated Audit Trail database, risking mass identity theft, exposure of trading portfolios, and insider threats. The group urged regulators to suspend and destroy sensitive data to prevent potential AI-driven harm.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Mythos) designed for cybersecurity vulnerability detection. The trade group's letter and regulatory attention indicate that misuse of this AI tool could plausibly lead to serious harms including identity theft and financial system destabilization, which fall under harm to communities and critical infrastructure. Since these harms have not yet materialized but are credible and imminent risks, the event qualifies as an AI Hazard rather than an AI Incident. The focus is on potential harm and risk mitigation rather than realized harm.[AI generated]

AI-Driven Disinformation Fuels Harm Against Migrants
AI technologies are increasingly used to create and spread sophisticated disinformation targeting migrants, leading to real-world harms such as discrimination and violence. Organizations like the International Organization for Migration and EFE are responding with training to help journalists and the public detect and counteract these AI-enabled false narratives.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is used to enhance the spread of disinformation against migrants, which has led to real-world harms such as discrimination and violence. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to communities and violations of rights. The seminar and organizational responses are complementary information, not the primary event. Therefore, the classification is AI Incident.[AI generated]

AI Deepfake Scams Target Investors on Meta Platforms
Scammers are using AI-generated deepfake technology to create fraudulent ads on Meta platforms (Facebook, Instagram, WhatsApp), impersonating well-known figures to lure victims into investment scams such as pump-and-dump and cryptocurrency fraud, resulting in significant financial losses. Authorities in North Carolina and Hawaiʻi have issued public warnings.[AI generated]
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media, and their use in creating fake investment ads constitutes the use of AI systems leading directly to harm (financial loss) to individuals. The attorney general's warning highlights ongoing harm caused by these AI-enabled scams. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in scams.[AI generated]

Startup Develops AI Cap to Convert Thoughts into Text, Raising Future Privacy Concerns
California-based startup Sabi is developing a wearable AI-powered cap that uses EEG sensors to convert brain signals into text, offering a non-invasive alternative to Neuralink. While no harm has occurred, the technology raises plausible future risks regarding privacy and misuse of sensitive neural data.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (brain-computer interface with AI models interpreting neural data) under development, with no current harm reported. The article focuses on the technology's potential and upcoming launch, without any indication of injury, rights violations, or other harms. Thus, it fits the definition of an AI Hazard, as the system could plausibly lead to harm in the future once deployed, but no incident has occurred yet.[AI generated]
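The article gives no detail on Sabi's decoding pipeline. Purely as a sketch of how EEG-to-text systems are typically structured (filter the raw signal, extract features per time window, feed a trained classifier), one generic front-end stage might look like the following; every name here is assumed for illustration, not taken from the product:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo_hz, hi_hz, fs, order=4):
    """Keep only the EEG band of interest (e.g. 1-40 Hz)."""
    b, a = butter(order, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def window_features(eeg, fs, win_s=0.5):
    """Split a (channels, samples) recording into fixed windows and compute
    simple log-power features per channel: one feature row per window."""
    n = int(win_s * fs)
    wins = [eeg[:, i:i + n] for i in range(0, eeg.shape[1] - n + 1, n)]
    return np.array([np.log(np.mean(w ** 2, axis=1) + 1e-12) for w in wins])

# Hypothetical usage: a trained classifier maps feature rows to text tokens.
fs = 256
eeg = np.random.randn(8, fs * 4)        # 8 channels, 4 s of synthetic data
feats = window_features(bandpass(eeg, 1.0, 40.0, fs), fs)
# tokens = model.predict(feats)         # 'model' is assumed, not a real API
```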