
AIM: AI Incidents and Hazards Monitor

Automated monitor of incidents and hazards from public sources (Beta).

AI-related legislation is gaining traction, and effective policymaking needs evidence, foresight and international cooperation. The OECD AI Incidents and Hazards Monitor (AIM) documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems. Over time, AIM will help reveal risk patterns and establish a collective understanding of AI incidents and hazards and their multifaceted nature, serving as an important tool for trustworthy AI. Although AI incidents appear to be attracting more media attention, they have in fact declined as a share of all AI-related news (see the chart below).

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.


[Chart: AI incidents and hazards as a percentage of total AI events.]
Note: An AI incident or hazard can be reported by one or more news articles covering the same event.
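
For readers reproducing the chart, here is a minimal sketch of the aggregation it implies, with hypothetical field names: deduplicate articles into events, then divide incident-and-hazard events by all AI events per period.

```python
from collections import defaultdict

# Hypothetical article records: several articles may cover the same event.
articles = [
    {"event_id": "e1", "month": "2026-03", "label": "incident"},
    {"event_id": "e1", "month": "2026-03", "label": "incident"},  # duplicate coverage
    {"event_id": "e2", "month": "2026-03", "label": "other_ai_news"},
    {"event_id": "e3", "month": "2026-04", "label": "hazard"},
]

# Deduplicate: one record per event, keyed by event_id.
events = {a["event_id"]: a for a in articles}

# Share of incidents & hazards among all AI events, per month.
totals, flagged = defaultdict(int), defaultdict(int)
for e in events.values():
    totals[e["month"]] += 1
    if e["label"] in ("incident", "hazard"):
        flagged[e["month"]] += 1

for month in sorted(totals):
    share = 100 * flagged[month] / totals[month]
    print(f"{month}: {share:.1f}% of AI events were incidents or hazards")
```
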
Results: about 14,190 incidents & hazards

AI-Powered API Attacks Cause Disruption and Losses Across Asia-Pacific

2026-04-01

AI-powered bots and adversaries are increasingly targeting APIs in Asia-Pacific, leading to a surge in sophisticated attacks that disrupt digital services and cause financial and operational harm. Security maturity lags behind rapid AI adoption, exposing critical infrastructure, especially in sectors like retail and finance.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Logistics, wholesale, and retail; Financial and insurance services
Affected stakeholders:
Business; General public
Harm types:
Economic/Property; Public interest
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Other
Why is our monitor labelling this an incident or hazard?

The article explicitly mentions AI-powered bots targeting APIs and causing application-layer attacks that disrupt services, which constitutes harm to digital infrastructure and communities relying on these services. The surge in attacks and reported security incidents indicates that harm is occurring, not just potential. The involvement of AI systems in these attacks and the resulting disruption aligns with the definition of an AI Incident, as the AI system's use has directly led to harm (disruption of critical digital infrastructure and services).[AI generated]
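
The defensive pattern this entry implies is behavioural anomaly detection on API traffic. Below is a minimal illustrative sketch, not any vendor's implementation (the window size and spike threshold are assumptions): flag a client whose per-interval request count jumps well above its own recent baseline.

```python
from collections import deque

def make_rate_monitor(window=60, spike_factor=5.0, min_baseline=1.0):
    """Flag a client whose per-interval request count jumps far above
    its own recent average. Purely illustrative thresholds."""
    history = deque(maxlen=window)

    def observe(requests_this_interval):
        baseline = max(sum(history) / len(history), min_baseline) if history else min_baseline
        history.append(requests_this_interval)
        return requests_this_interval > spike_factor * baseline  # True = suspicious

    return observe

monitor = make_rate_monitor()
for count in [3, 4, 2, 5, 3, 40]:  # sudden burst in the last interval
    if monitor(count):
        print(f"ALERT: {count} requests/interval far exceeds client baseline")
```
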


AI-Powered Social Media Alert Enables Police to Prevent Teen Suicide in Uttar Pradesh

2026-04-01
India

In Raebareli, Uttar Pradesh, an AI-driven Meta Alert System detected a suicide-related Instagram post by an 18-year-old. The system promptly notified police, who located and rescued the youth within 12 minutes, preventing a suicide attempt. The incident underscores AI's critical role in harm prevention.[AI generated]

Industries:
Media, social platforms, and marketing; Government, security, and defence
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Event/anomaly detection
Why is our monitor labelling this an incident or hazard?

The Meta Alert System uses AI to analyze social media content for signs of suicidal intent, triggering alerts to police who then intervene. The AI system's outputs directly influenced real-world outcomes by enabling rapid rescue and medical treatment, preventing fatalities. The involvement of AI in detecting harmful content and facilitating timely intervention meets the criteria for an AI Incident, as it directly led to preventing injury or death. The article describes realized harm prevention rather than just potential risk, so it is not merely a hazard or complementary information.[AI generated]
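
Schematically, the pipeline described works as follows: a model scores posts for self-harm risk, and high-confidence detections are escalated to human responders. The sketch below is hypothetical; the keyword stand-in and the notify_authorities hook are illustrative, not Meta's actual system.

```python
RISK_THRESHOLD = 0.9  # assumed: escalate only high-confidence detections

def score_self_harm_risk(text: str) -> float:
    """Stand-in for a trained classifier; a real system would use a
    supervised model, not keyword matching."""
    keywords = ("suicide", "end my life", "can't go on")
    return 0.95 if any(k in text.lower() for k in keywords) else 0.05

def notify_authorities(post_id: str, score: float) -> None:
    # Hypothetical hook: in the reported case, the alert reached local police.
    print(f"ALERT post={post_id} risk={score:.2f} -> escalate to responders")

def process_post(post_id: str, text: str) -> None:
    score = score_self_harm_risk(text)
    if score >= RISK_THRESHOLD:
        notify_authorities(post_id, score)

process_post("p1", "I want to end my life tonight")
```
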


Grok AI Deepfake Scandal Prompts International Investigations and Regulatory Action

2026-04-01

Elon Musk's xAI chatbot Grok generated millions of sexually explicit deepfake images, including of women and minors without consent. This led to investigations and regulatory actions by the UK, Ireland, France, and the EU against xAI. The incident sparked political debate over tech regulation and trade policy.[AI generated]

AI principles:
Respect of human rights; Privacy & data governance
Industries:
Consumer services; Media, social platforms, and marketing
Affected stakeholders:
Women; Children
Harm types:
Psychological; Human or fundamental rights; Reputational
Severity:
AI incident
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation; Interaction support/chatbots
Why is our monitor labelling this an incident or hazard?

The Grok chatbot is an AI system generating sexually explicit deepfake images without consent, which is a direct violation of rights and causes harm to individuals depicted. The investigations and court orders against xAI and Grok are responses to this harm. The involvement of AI in generating harmful content that has materialized harm fits the definition of an AI Incident. The political and trade policy discussions are complementary context but do not change the core classification.[AI generated]


Swiss Finance Minister Files Criminal Complaint Over Grok AI-Generated Abuse

2026-04-01
Switzerland

Swiss Finance Minister Karin Keller-Sutter filed a criminal complaint after Elon Musk's AI chatbot Grok generated and published sexist and defamatory remarks about her on X. The incident, which occurred in Switzerland, has prompted legal action and raised concerns about AI-generated abuse and platform accountability.[AI generated]

AI principles:
Fairness; Safety
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women; Government
Harm types:
Reputational; Psychological
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots; Content generation
Why is our monitor labelling this an incident or hazard?

The AI system Grok was explicitly used to generate harmful, obscene, and defamatory content targeting a public official, which led to legal action. The harm here is the violation of the minister's rights through defamation and insult, which is a recognized form of harm under the framework. The AI system's role is pivotal as it directly produced the harmful content. Although the user initiated the request, the AI's generation of the offensive post is central to the incident. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


New York Times Fires Freelance Critic for AI-Assisted Plagiarism in Book Review

2026-03-31
United States

The New York Times severed ties with freelance journalist Alex Preston after discovering he used AI to draft a book review that included plagiarized material from a Guardian review. The AI tool's use led to a breach of intellectual property rights and journalistic standards, prompting the paper's action.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Business; Workers
Harm types:
Reputational
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

An AI system was explicitly used to assist in writing the review, and its outputs included unattributed material copied from another source, constituting a breach of intellectual property rights and journalistic ethics. This misuse of AI directly led to reputational harm to the journalist and the publication, as well as a breach of legal and ethical standards. Therefore, this qualifies as an AI Incident due to the realized harm involving violation of intellectual property rights and professional standards caused by the AI system's use.[AI generated]
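
Copying of this kind is commonly surfaced by comparing word n-gram "shingles" between texts. A minimal sketch, not the Times' actual tooling: Jaccard similarity between the shingle sets of the review and a candidate source, where a high score flags the pair for human checking.

```python
def shingles(text: str, n: int = 5) -> set:
    """Set of overlapping n-word sequences ('shingles')."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 5) -> float:
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Invented snippets for illustration only.
review = "the novel unfolds with a quiet devastating force that lingers long after"
source = "the novel unfolds with a quiet devastating force that rewards patience"
print(f"similarity: {jaccard(review, source):.2f}")  # high overlap flags the review
```
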


Singapore Regulator Warns X and TikTok Over AI Failures in Detecting Harmful Content

2026-03-31
Singapore

Singapore's Infocomm Media Development Authority (IMDA) issued letters of caution and placed X and TikTok under enhanced supervision after their AI-based systems failed to proactively detect and remove child sexual exploitation and terrorism content. Both platforms must implement improvements or face potential regulatory action.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Children; General public
Harm types:
Human or fundamental rights; Psychological; Public interest
Severity:
AI incident
Business function:
Compliance and justice
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Event/anomaly detection
Why is our monitor labelling this an incident or hazard?

The platforms' content moderation systems likely rely on AI to detect harmful content. The failure of these AI systems to accurately identify and remove child sexual exploitation and abuse material and terrorism content has resulted in the dissemination of such harmful content, which constitutes harm to communities and individuals. This meets the criteria for an AI Incident because the AI system's malfunction or inadequate performance has directly led to harm. The article details realized harm and regulatory actions taken in response, confirming the incident status rather than a mere hazard or complementary information.[AI generated]


South Korea Deploys AI System for Automated Detection and Removal of Digital Sexual Exploitation Content

2026-03-31
Korea

South Korea's Ministry of Gender Equality and Family launched an AI-powered system to automatically detect, report, and request deletion of digital sexual exploitation content, including deepfakes, across about 20,000 websites. The system automates and accelerates victim protection, significantly increasing detection rates and reducing processing time to under one minute per case.[AI generated]

Industries:
Government, security, and defence; Digital security
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection
Why is our monitor labelling this an incident or hazard?

The event involves the use of an AI system explicitly described as detecting harmful content related to digital sexual crimes and automating deletion requests. The AI system's use directly contributes to preventing harm to victims of sexual exploitation and abuse, which falls under harm to persons (a). Since the AI system is actively used to mitigate and respond to ongoing harm, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports on an operational AI system that has a direct role in harm prevention and victim protection.[AI generated]
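
Systems of this kind typically match crawled images against a registry of hashes of previously confirmed abusive content and then issue takedown requests automatically. A schematic sketch using exact SHA-256 hashes (real deployments use perceptual hashing and classifiers so near-duplicates also match; all names here are hypothetical):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Assumed registry of hashes of previously confirmed abusive images.
KNOWN_ABUSE_HASHES = {sha256(b"previously-confirmed-sample")}

def request_deletion(site: str, url: str) -> None:
    # Hypothetical hook standing in for the automated deletion request.
    print(f"deletion request -> {site}{url}")

def scan_site(site: str, pages: dict) -> None:
    """pages maps URL -> raw image bytes; a real crawler would fetch them."""
    for url, blob in pages.items():
        if sha256(blob) in KNOWN_ABUSE_HASHES:
            request_deletion(site, url)

scan_site("example-site.invalid", {"/img/1.jpg": b"previously-confirmed-sample",
                                   "/img/2.jpg": b"benign-image"})
```
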


AI-Enabled Military Drones Cause Civilian Harm and Proliferate Through Strategic Partnerships in Ukraine

2026-03-31
Ukraine

AI-powered military drones have been widely used in the Ukraine conflict, causing civilian casualties and property damage. Japanese company Terra Drone invested in Ukraine's Amazing Drones to develop and export AI-enabled interceptor drones, accelerating their deployment and global spread. These actions highlight the direct and indirect harm caused by AI systems in warfare.[AI generated]

AI principles:
Respect of human rights; Safety
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (death); Physical (injury); Economic/Property
Severity:
AI hazard
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The article involves AI systems in the form of advanced drones with likely autonomous capabilities used for military interception. While no harm from these interceptor drones has yet been reported, their production and export could plausibly lead to AI incidents involving injury, disruption, or other harms. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated, because the AI system's development and use are central to the event.[AI generated]


Dutch Politician Excluded After AI-Retouched Campaign Photo Causes Controversy

2026-03-31
Netherlands

Patricia Reichman, a local politician in Rotterdam, Netherlands, was excluded from her party, Leefbaar Rotterdam, after using AI to heavily retouch her campaign photo. The AI-generated image, which made her appear much younger and altered her features, sparked public backlash and accusations of misleading voters.[AI generated]

AI principles:
Transparency & explainability; Democracy & human autonomy
Industries:
Media, social platforms, and marketing
Affected stakeholders:
General public; Other
Harm types:
Reputational; Public interest
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The use of AI to alter the campaign photo constitutes the use of an AI system. The resulting harm is indirect, as the AI-generated image misled voters and caused reputational damage and political controversy, which can be considered harm to the community or a violation of trust. Although the harm is non-physical and reputational, it is significant and directly linked to the AI system's use. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-manipulated image in a political context.[AI generated]


Google Cloud Vertex AI Agents Exploited Due to Excessive Default Permissions

2026-03-31
United States

Security researchers discovered that Google Cloud's Vertex AI Agent Engine had excessive default permissions, allowing attackers to hijack AI agents as "double agents." This enabled unauthorized access to sensitive customer data and proprietary Google code, exposing critical infrastructure and intellectual property. Google has since updated its documentation and issued mitigation guidance.[AI generated]

AI principles:
Privacy & data governance; Robustness & digital security
Industries:
IT infrastructure and hosting; Digital security
Affected stakeholders:
Business
Harm types:
Human or fundamental rights; Public interest; Economic/Property
Severity:
AI incident
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Interaction support/chatbots; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The vulnerability involves AI agents within the Vertex AI platform, which qualifies as AI systems. The exploitation of default permission scoping to weaponize these AI agents directly leads to harm by enabling unauthorized data access and infrastructure compromise, which fits the criteria of an AI Incident under harm to property and critical infrastructure disruption. Therefore, this event is classified as an AI Incident.[AI generated]
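
The underlying remediation is least-privilege scoping: an agent's service identity should hold only the permissions its task requires. A generic audit sketch follows; this is not Google Cloud's API, and the permission names are illustrative only.

```python
# Permissions the agent's task actually requires (illustrative names).
REQUIRED = {"storage.objects.get", "aiplatform.endpoints.predict"}

# Permissions actually granted to the agent's service identity.
GRANTED = {"storage.objects.get", "storage.objects.delete",
           "aiplatform.endpoints.predict", "source.repos.get"}

excess = GRANTED - REQUIRED
if excess:
    # Each excess permission is attack surface if the agent is hijacked.
    print("over-privileged agent; revoke:", sorted(excess))
```
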


South Korean Courts Respond to AI-Generated Fake Legal Documents

2026-03-31
Korea

South Korean courts have faced increasing incidents of AI-generated fake legal precedents and evidence being submitted in legal proceedings, causing delays and unnecessary costs. In response, the judiciary has proposed measures including cost penalties, disciplinary action for lawyers, mandatory AI-use disclosure, and system upgrades to verify legal documents' authenticity.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Government, security, and defence
Affected stakeholders:
Government; General public
Harm types:
Economic/Property; Public interest
Severity:
AI incident
Business function:
Compliance and justice
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The event involves AI systems generating fake legal precedents and laws that have been actually submitted to courts, causing delays and unnecessary costs, which constitute harm to the legal system and potentially violate legal rights. The involvement of AI in producing false information that disrupts court operations and leads to financial and procedural harm fits the definition of an AI Incident. The article focuses on the harm caused and the responses to it, not just potential future harm or general AI news, so it is classified as an AI Incident.[AI generated]
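
One of the proposed system upgrades, sketched generically: verify every citation in a filing against an authoritative registry before acceptance. The registry lookup below is a hypothetical stand-in for a court records database, and the citation strings are invented.

```python
# Hypothetical authoritative registry of real case citations.
OFFICIAL_REGISTRY = {"2019Da12345", "2021Du6789"}

def verify_citations(filing_citations: list[str]) -> list[str]:
    """Return citations that do not resolve to any real case."""
    return [c for c in filing_citations if c not in OFFICIAL_REGISTRY]

suspect = verify_citations(["2019Da12345", "2030Da99999"])
if suspect:
    print("unverifiable citations, possible AI fabrication:", suspect)
```
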


US Army Tests AI-Enabled Autonomous Strike Drone in Military Exercise

2026-03-31
United States

Northrop Grumman's Lumberjack drone, featuring AI-enabled autonomous targeting and precision strike capabilities, was tested by the US Army's 101st Airborne Division during Operation Lethal Eagle. The demonstration showcased the drone's ability to conduct missions with limited human input, highlighting potential future risks associated with autonomous weapon systems.[AI generated]

AI principles:
Accountability; Respect of human rights
Industries:
Government, security, and defence
Affected stakeholders:
General public
Harm types:
Physical (death); Human or fundamental rights
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems integrated into the Lumberjack drone for autonomous targeting and surveillance, confirming AI system involvement. The event concerns the use and development of this AI system in a military context. Although no harm occurred during the tests, the drone's capabilities imply a credible risk of causing injury or harm in future deployments. The event does not describe any realized harm or incident but highlights a plausible future risk associated with AI-enabled autonomous weapons. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]


NTSB Investigates Fatal Ford BlueCruise AI Crashes

2026-03-31
United States

In 2024, two fatal crashes involving Ford Mustang Mach-Es using the BlueCruise AI-based driver assistance system occurred in San Antonio and Philadelphia. The vehicles, operating in partial automation mode, failed to detect stationary vehicles, resulting in deaths. U.S. safety agencies are investigating system limitations and driver distraction.[AI generated]

AI principles:
Safety; Accountability
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers
Harm types:
Physical (death)
Severity:
AI incident
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The BlueCruise system is an AI system providing partial autonomous driving capabilities. The crashes caused fatalities, which constitute injury or harm to persons. The AI system's failure to act (no braking or steering) directly led to these harms. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly caused harm to people.[AI generated]
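
The narrow margin involved can be seen with a back-of-the-envelope time-to-collision check: a perception stack that discards stationary returns until late leaves almost no time to brake. The numbers below are illustrative, not figures from the investigations.

```python
def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither object accelerates."""
    return distance_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

# Approaching a stopped vehicle at highway speed (~113 km/h = 31.4 m/s).
ttc = time_to_collision(distance_m=100.0, closing_speed_mps=31.4)
print(f"TTC: {ttc:.1f} s")  # ~3.2 s: late detection leaves almost no margin to brake
```
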


Baidu Robotaxi System Failure Strands Passengers in Wuhan

2026-03-31
China

A system failure in Baidu's Apollo Go autonomous taxis caused over 100 vehicles to suddenly stop on Wuhan roads, stranding passengers and blocking traffic. Police and company staff responded to assist, and no injuries were reported. The incident raised safety concerns about large-scale AI-driven transport systems.[AI generated]

AI principles:
Safety; Robustness & digital security
Industries:
Mobility and autonomous vehicles
Affected stakeholders:
Consumers; General public
Harm types:
Economic/Property; Public interest
Severity:
AI incident
Business function:
Other
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The event involves autonomous taxis, which are AI systems performing complex real-time navigation and decision-making. Their stopping in the middle of the road caused a significant disruption to city traffic, which qualifies as harm to critical infrastructure. Therefore, this is an AI Incident due to the direct harm caused by the AI system's malfunction or failure.[AI generated]
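
A standard safeguard against fleet-wide faults of this kind is a minimal risk manoeuvre: on loss of the driving system, the vehicle attempts to clear the traffic lane rather than stop in place. A schematic sketch, with states and trigger purely illustrative:

```python
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    PULL_OVER = auto()        # minimal risk manoeuvre: leave the traffic lane
    STOPPED_IN_LANE = auto()  # worst case, what reportedly happened in Wuhan

def on_system_fault(can_reach_shoulder: bool) -> Mode:
    """Prefer clearing the roadway; stop in lane only if no safe path exists."""
    return Mode.PULL_OVER if can_reach_shoulder else Mode.STOPPED_IN_LANE

print(on_system_fault(can_reach_shoulder=True))
```
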


AI-Driven Scams Surge, Increasing Financial Harm and Public Concern

2026-03-30
United Kingdom

Criminals are increasingly using AI to create more convincing and harder-to-detect scams, leading to a rise in financial fraud, especially in the UK and Australia. Older adults in the US are particularly affected by AI-enabled scam ads on social media, prompting calls for platform accountability and reform.[AI generated]

AI principles:
Accountability; Transparency & explainability
Industries:
Media, social platforms, and marketing; Digital security
Affected stakeholders:
Consumers
Harm types:
Economic/Property
Severity:
AI incident
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being used maliciously to perpetrate scams that cause financial and emotional harm to individuals and businesses. The harms include fraud, identity theft, and exploitation of vulnerable groups, which fall under harm to persons and communities. The AI involvement is clear in the use of deepfake technology and AI-generated content to impersonate individuals and create fake companies. Since these harms are already occurring and the AI systems are pivotal in enabling these scams, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


Red Cat Expands AI-Driven Swarm Robotics for Defense Through Acquisitions and Partnerships

2026-03-30
United States

Red Cat Holdings, a U.S. defense technology firm, has acquired Apium Swarm Robotics and partnered with Ukraine's Spetstechnoexport to advance AI-enabled unmanned and robotic systems. These developments enhance Red Cat's capabilities in autonomous drone swarming and multi-domain operations, raising future risks associated with military AI deployment.[AI generated]

AI principles:
Safety; Democracy & human autonomy
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
General public
Harm types:
Physical (injury); Human or fundamental rights; Public interest
Severity:
AI hazard
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems in the form of autonomous swarming drones with distributed control and multi-agent autonomy, which qualifies as AI systems. The event concerns the acquisition and planned integration of this technology into defense-related drone systems. No actual harm or incident is reported; the article focuses on business and technological development. However, the nature of the technology—autonomous swarming drones for battlefield use—implies a credible potential for future harm, such as injury, disruption, or rights violations, if deployed or misused. Therefore, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the article.[AI generated]


Ukraine Deploys and Advances AI-Driven Interceptor Drone Swarms in Defense Against Russian Attacks

2026-03-30
Ukraine

Ukraine is deploying and developing AI-powered interceptor drones, including the Strila system and autonomous swarms, to counter Russian UAV attacks. German firm Quantum Systems and Ukrainian company WIY Drones are scaling production, with new swarm capabilities enabling coordinated, semi-autonomous defense. These AI systems are actively used in the ongoing conflict, directly impacting battlefield outcomes.[AI generated]

AI principles:
Accountability; Respect of human rights
Industries:
Government, security, and defence; Robots, sensors, and IT hardware
Affected stakeholders:
Government
Harm types:
Economic/Property
Severity:
AI hazard
Autonomy level:
Medium-action autonomy (human-on-the-loop)
AI system task:
Recognition/object detection; Goal-driven organisation
Why is our monitor labelling this an incident or hazard?

The article involves AI systems in the context of military drones and their evolving capabilities, including potential future autonomous systems. It does not report any realized harm or incident caused by AI systems; the discussion concerns ongoing use, strategic implications, and potential future developments that could plausibly lead to harm, which fits the definition of an AI Hazard. It is not Complementary Information, because it is not updating or responding to a previously reported incident, nor is it unrelated, since it clearly involves AI-enabled military technology and its implications.[AI generated]


AI-Generated TikTok Videos Spread Sexist and Racist Stereotypes

2026-03-30
France

On TikTok, AI-generated animated videos featuring fruit and vegetable characters have gone viral, spreading openly sexist and racist stereotypes. These short clips, created and monetized by content creators, have reached millions, raising concerns about the harmful impact and normalization of discriminatory narratives through AI-driven content.[AI generated]

AI principles:
Fairness; Respect of human rights
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Women; General public
Harm types:
Human or fundamental rights; Psychological
Severity:
AI incident
Business function:
Other
Autonomy level:
Low-action autonomy (human-in-the-loop)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The videos are explicitly generated by AI systems and are spreading sexist, racist, and violent stereotypes that harm communities and violate rights. The AI system's use in generating and disseminating this harmful content on a popular platform directly leads to social harm. The article describes realized harm through the propagation of discriminatory and hateful narratives, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]


AI Detection Tools Falsely Accuse Human Content, Enable Extortion

2026-03-30
United States

Investigations revealed that several AI-powered content detection tools falsely label genuine human-written texts as AI-generated, leading to reputational harm and extortion attempts. These tools mislead users, damage credibility, and exploit individuals financially by offering paid services to 'humanize' content, exacerbating misinformation and trust issues online.[AI generated]

AI principles:
Robustness & digital security; Transparency & explainability
Industries:
Media, social platforms, and marketing
Affected stakeholders:
Consumers
Harm types:
Reputational; Economic/Property; Public interest
Severity:
AI incident
Business function:
Monitoring and quality control
Autonomy level:
High-action autonomy (human-out-of-the-loop)
AI system task:
Other
Why is our monitor labelling this an incident or hazard?

The event involves AI systems explicitly described as AI-based text detection tools. Their malfunction (false positives) and deceptive use (charging for 'humanizing' texts) have directly caused harm by misleading users, damaging reputations, and contributing to misinformation. The harms include violation of rights (reputational harm), harm to communities (misinformation), and financial exploitation. The article documents realized harms, not just potential risks, and the AI systems' role is pivotal. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
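
The scale of false accusations follows from base rates: even a seemingly accurate detector flags mostly human texts when human-written texts dominate the input. A worked Bayes calculation with illustrative rates:

```python
# Illustrative assumptions: 5% of submitted texts are AI-generated,
# the detector catches 90% of AI texts, and falsely flags 5% of human texts.
p_ai, sensitivity, false_positive_rate = 0.05, 0.90, 0.05

p_flagged = p_ai * sensitivity + (1 - p_ai) * false_positive_rate
p_ai_given_flag = (p_ai * sensitivity) / p_flagged
print(f"P(actually AI | flagged) = {p_ai_given_flag:.2f}")  # ~0.49: about half of flags are wrong
```
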


Persistent AI Hallucinations Highlight Risks in Critical Applications

2026-03-30
United States

Recent research and expert warnings highlight that hallucinations—false outputs generated by large language models (LLMs)—are unavoidable and increase with input size. These inaccuracies pose significant risks in high-stakes fields like law and accounting, challenging the reliability of AI for critical tasks.[AI generated]

AI principles:
Robustness & digital security; Safety
Industries:
Financial and insurance services; Government, security, and defence
Affected stakeholders:
Consumers; Business
Harm types:
Economic/Property; Reputational
Severity:
AI hazard
Business function:
Compliance and justice
Autonomy level:
No-action autonomy (human support)
AI system task:
Content generation
Why is our monitor labelling this an incident or hazard?

The event involves AI systems (LLMs) and their use, specifically their tendency to hallucinate false outputs. Although no direct harm is described as having occurred, the article clearly outlines the potential for these hallucinations to cause significant harm in critical domains. Therefore, this situation fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident involving harm to persons, organizations, or communities relying on accurate outputs.[AI generated]
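
The claim that inaccuracies grow with input size has a simple probabilistic reading: if each generated fact independently carries error probability p, the chance of at least one error across n facts is 1 - (1-p)^n. A quick illustration, where p is an assumed rate rather than a measured one:

```python
def p_at_least_one_error(p: float, n: int) -> float:
    """Probability of at least one error among n independent facts."""
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    print(f"n={n:>4}: {p_at_least_one_error(0.01, n):.2%}")
# n=10: ~9.6%; n=100: ~63.4%; n=1000: ~99.996%. Errors become near-certain at scale.
```
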