OECD Working Party and Network of Experts on AI
Further to the announcement of an integrated partnership between the Global Partnership on AI (GPAI) and the OECD, the OECD Network of Experts on AI (ONE AI) and the GPAI Multistakeholder Group are forming a new expert community. Stay tuned for further updates!
Expert Group on AI Incidents
While AI provides tremendous benefits, it also poses risks. Some of these risks are already materialising into harms to people and society, such as bias and discrimination, the polarisation of opinions, privacy infringements, and security and safety issues. Such harms are broadly captured by the evolving term “AI incident”.
Key resources
- AI Incidents Monitor (AIM): Documents AI incidents to help policymakers, AI practitioners, and all stakeholders worldwide gain insight into the incidents and hazards that concretise AI risks. Over time, AIM will help reveal patterns, build a collective understanding of AI incidents and their multifaceted nature, and serve as an important tool for trustworthy AI.
- Stocktaking for the development of an AI incident definition: To effectively monitor and prevent AI incidents, stakeholders require an accurate yet flexible definition of an AI incident. This report presents research and findings on terminology and practices related to the definition of an incident, covering both AI-specific and cross-disciplinary contexts.
About the expert group
As AI continues to be deployed throughout economies and societies, an increase in AI incidents is inevitable. Developing trustworthy and beneficial AI systems requires treating these risks, which in turn requires a rigorous understanding of AI incidents. Monitoring AI incidents demands global consistency and interoperability in incident reporting, so that AI system operators and policymakers can learn from the risks and incidents of other actors internationally. Risks and incidents can then be linked to AI system characteristics and to tools that help developers, users and policymakers treat those risks.
A common framework for incident reporting would enable global consistency, interoperability and alignment of terminology across regulatory and self-regulatory AI incident reporting in different jurisdictions, ahead of the implementation of mandatory or voluntary reporting schemes such as those planned in the European Commission’s proposed AI Act.
Two elements are needed to advance this important area of work: definitions and monitoring through a common reporting framework. The OECD has begun working on a reporting framework, including definitions. In parallel, it has started a complementary project to develop a global AI Incidents Monitor (AIM), tracking actual AI incidents in real time to provide a “reality check” that the reporting framework and definition function in practice. AI incidents reported in international media serve as the starting point, since many other incidents are not disclosed publicly.
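To make the idea of a common reporting framework more concrete, the sketch below shows one hypothetical shape an interoperable incident record could take. All field names and values here are illustrative assumptions for the purpose of this example, not the OECD’s actual schema, which is still under development.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical incident record, for illustration only. The OECD's actual
# reporting framework and field definitions are still being developed.
@dataclass
class AIIncidentReport:
    incident_id: str                   # stable identifier for cross-jurisdiction referencing
    occurred_on: date                  # when the incident or near-incident took place
    jurisdiction: str                  # where the harm occurred, e.g. an ISO country code
    system_description: str            # the AI system involved and its intended purpose
    harm_types: List[str]              # e.g. ["bias/discrimination", "privacy", "safety"]
    severity: str                      # e.g. "near-incident", "harm", "serious harm"
    source: str                        # e.g. a media report URL or operator disclosure
    mitigations: Optional[str] = None  # remedial action taken, if known
    related_incidents: List[str] = field(default_factory=list)

# Example: a media-reported incident captured in this illustrative format.
report = AIIncidentReport(
    incident_id="2024-0042",           # placeholder identifier
    occurred_on=date(2024, 3, 15),     # placeholder date
    jurisdiction="NL",
    system_description="Automated benefits-eligibility scoring system",
    harm_types=["bias/discrimination"],
    severity="harm",
    source="https://example.org/news/ai-incident",  # placeholder URL
)
print(report.incident_id, report.harm_types)
```

Whatever fields the framework ultimately defines, a shared structure of this kind is what would allow incident data collected under different jurisdictions’ schemes to be compared and aggregated.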
While incidents covered in the media and academic publications are likely only a small subset of all incidents, the data that does exist shows an exponential increase in AI incidents. There is a growing need to inform policy and regulatory choices in real time by monitoring AI incidents, for example to identify high-risk AI systems and to spot new risks as they appear. This evidence is expected to inform the risk and impact assessment of AI systems by documenting the characteristics of “risky” AI systems that have caused or contributed to actual incidents or near-incidents, past or present.
The OECD is engaging with policymakers, experts and partners from all stakeholder groups to develop the common framework for AI incident reporting. This work advances the OECD’s mandate to help implement the OECD AI Principles for trustworthy AI.
Co-chairs
Marko Grobelnik, AI Researcher & Digital Champion – AI Lab of Slovenia’s Jozef Stefan Institute
Irina Orssich, Head of Sector AI Policy – European Commission
Elham Tabassi, Chief of Staff, Information Technology Laboratory – National Institute of Standards and Technology
Mark Latonero, Head of International Engagement, U.S. AI Safety Institute – National Institute of Standards and Technology (NIST)
The group meets virtually every 4 to 5 weeks.