AI Incidents
Exploring how to ensure AI systems are accountable to the public.
Experts
The Expert Group on AI Incidents comprises multidisciplinary, cross-sector AI and foresight experts from around the world.
A shared definition of incidents and monitoring framework
The OECD has begun developing an incident reporting framework that includes common definitions. In parallel, a complementary project to develop a global AI Incidents Monitor (AIM) entered its first phase in 2023 and now tracks actual AI incidents in real time, as reported in the press. While AIM is not the final product, it provides a “reality check” to ensure that the reporting framework and definitions work in practice. AI incidents reported in international media serve as a starting point because many other incidents are never disclosed publicly.
While incidents reported in the media and in academic publications are likely only a small subset of all incidents, the available data show an exponential increase in AI incidents. There is a growing need to monitor AI incidents in real time to inform policy and regulatory choices, for example by identifying high-risk AI systems and new risks as they emerge. This evidence is expected to inform AI system risk and impact assessments by documenting the characteristics of “risky” AI systems that have caused or contributed to actual incidents or near-incidents.
The OECD is engaging with policymakers, experts, and partners from all stakeholder groups to develop a common framework for AI incident reporting. This work advances the OECD’s mandate to help implement the OECD AI Principles for trustworthy AI.