AI Incidents

Exploring how to ensure AI systems are accountable to the public.

Overview

As AI use grows, so do occurrences of AI incidents and hazards. To reinforce trustworthy and beneficial AI systems, all actors must manage and treat risks as consistently as possible. This means all parties should share an understanding of what AI incidents are. Incident reporting and monitoring must be consistent and interoperable globally so that AI system operators and policymakers can learn from risks and incidents reported worldwide. The OECD is working with international AI experts to achieve this by developing definitions and monitoring through a common reporting framework.

A shared definition of incidents and monitoring framework

The OECD has begun developing an incident reporting framework that includes definitions. In parallel, a complementary project to develop a global AI Incidents Monitor (AIM) entered its first phase in 2023 and now tracks actual AI incidents in real time, as reported in the press. While this is not the final product, it provides a “reality check” to ensure the reporting framework and definitions work in practice. AI incidents reported in international media are used as a starting point because many other incidents are not disclosed publicly.

While incidents reported in the media and academic publications are likely to be only a small subset of the total, the data that does exist shows an exponential increase in AI incidents. The need to monitor AI incidents in real time to inform policy and regulatory choices, for example by identifying high-risk AI systems and new risks as they emerge, is growing. This evidence is expected to inform AI system risk and impact assessments by documenting the characteristics of “risky” AI systems that have caused or contributed to actual incidents or near-incidents, past and present.

The OECD is engaging with policymakers, experts, and partners from all stakeholder groups to develop a common framework for AI incident reporting. This work advances the OECD’s mandate to help implement the OECD Principles for trustworthy AI.