OECD AI Incidents Monitor
Methodology and disclosures
Overview
The AI Incidents Monitor (AIM) was initiated and is being developed by the OECD.AI expert group on AI incidents with the support of the Patrick J. McGovern Foundation. In parallel, the expert group is working on an AI incident reporting framework. The goal of the AIM is to track actual AI incidents and hazards in real time and provide the evidence base to inform the AI incident reporting framework and related AI policy discussions.
The AIM is being informed by the work of the expert group on defining AI incidents and associated terminology, such as AI hazards and disasters. In parallel, the AIM seeks to provide a ‘reality check’ to ensure that the definition of an AI incident and the reporting framework work for real-world AI incidents and hazards.
As a starting point, AI incidents and hazards reported in reputable international media are identified and classified using machine learning models. Similar models classify incidents and hazards into categories from the OECD Framework for the Classification of AI Systems, including their severity, industry, related OECD AI Principles, types of harm and affected stakeholders.
The analysis is based on the title, abstract and first few paragraphs of each news article. News articles come from Event Registry, a news intelligence platform that monitors world news and can detect specific event types reported in news articles, processing over 150 000 English-language articles every day.
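The AIM's actual classifiers are machine learning models whose details are not described here. As a purely illustrative sketch of the task they perform, the toy function below assigns framework-style tags to an article's title and abstract by keyword matching; all tag names and keyword lists are hypothetical, not the AIM's real taxonomy.

```python
# Hypothetical tags and keywords for illustration only -- the AIM uses
# trained machine learning models, not keyword rules.
KEYWORDS = {
    "industry:transport": ["self-driving", "autonomous vehicle", "aviation"],
    "industry:healthcare": ["diagnosis", "hospital", "medical"],
    "harm:physical": ["injury", "crash", "death"],
    "harm:privacy": ["surveillance", "data breach", "facial recognition"],
}

def classify(title: str, abstract: str) -> list[str]:
    """Return the tags whose keywords appear in the article's title or abstract."""
    text = f"{title} {abstract}".lower()
    return sorted(tag for tag, words in KEYWORDS.items()
                  if any(word in text for word in words))

tags = classify("Self-driving car crash injures pedestrian",
                "An autonomous vehicle failed to brake, causing injury.")
```

The key point the sketch captures is that classification is multi-label (one article can map to an industry, a harm type and more at once) and operates only on the leading text of the article, as described above.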
While these publicly reported incidents and hazards likely represent only a subset of all AI incidents and hazards worldwide, they nonetheless provide a useful starting point for building the evidence base.
Incidents and hazards can be composed of one or more news articles covering the same event. To mitigate concerns related to editorial bias and disinformation, each report’s annotations and metadata are extracted from the most reputable news outlet reporting on the incident or hazard, based on the Alexa traffic rank. Additionally, incidents and hazards are sorted by the number of articles reporting on them and their relevance to the specific query, as determined by their semantic similarity. Lastly, links to all the articles reporting on a specific incident or hazard are provided for completeness.
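The grouping and ranking logic described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the AIM's implementation: the field names, the convention that a lower outlet rank means a more reputable outlet (as with Alexa-style traffic ranks), and the tie-breaking order are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Article:
    outlet_rank: int   # traffic-rank style score; lower = more reputable (assumption)
    relevance: float   # semantic similarity to the user's query, in [0, 1]
    title: str

def incident_summary(articles: list[Article]) -> dict:
    """Summarise one incident from all articles reporting on the same event."""
    # Metadata is taken from the most reputable outlet covering the event.
    primary = min(articles, key=lambda a: a.outlet_rank)
    return {
        "headline": primary.title,
        "n_articles": len(articles),
        "relevance": max(a.relevance for a in articles),
    }

def rank_incidents(incidents: list[dict]) -> list[dict]:
    # Sort by article count, then query relevance, both descending.
    return sorted(incidents,
                  key=lambda i: (i["n_articles"], i["relevance"]),
                  reverse=True)

summary = incident_summary([
    Article(outlet_rank=500, relevance=0.9, title="Tabloid take"),
    Article(outlet_rank=50, relevance=0.7, title="Quality paper report"),
])
```

Keeping every source article attached to the incident (here, simply the input list) is what allows the monitor to link to all coverage for completeness while surfacing a single reputable headline.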
The data collection and analysis for the AIM are conducted to ensure, to the greatest extent possible, the reliability, objectivity and quality of the information on AI incidents and hazards. A detailed methodological note is available here.
In the future, an open submission process may be enabled to complement the AI incidents and hazards information from news articles. To ensure consistency in reporting, the existing classification algorithm could be leveraged to process text submissions and provide a pre-selection of tags for a given incident or hazard report. Additionally, it is expected that incident and hazard information from news articles will be complemented by court judgements and decisions of public supervisory authorities wherever they exist.
Definitions
Thanks to the work of the OECD.AI expert group on AI incidents, an AI incident and related terminology have been defined. Published in May 2024, the paper Defining AI incidents and related terms defines an event where the development or use of an AI system results in actual harm as an “AI incident”, while an event where the development or use of an AI system is potentially harmful is termed an “AI hazard”.
- An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
(a) injury or harm to the health of a person or groups of people;
(b) disruption of the management and operation of critical infrastructure;
(c) violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
(d) harm to property, communities or the environment.
- An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e., any of the following harms:
(a) injury or harm to the health of a person or groups of people;
(b) disruption of the management and operation of critical infrastructure;
(c) violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
(d) harm to property, communities or the environment.
Information Transparency Disclosures
- Background: Your use of the OECD AI Incidents Monitor (“AIM”) is subject to the terms and conditions found at www.oecd.org/termsandconditions. The following disclosures do not modify or supersede those terms. Instead, these disclosures aim to provide greater transparency surrounding information included in the AIM.
- Third-Party Information: The AIM serves as an accessible starting point for understanding the landscape of AI-related challenges. As a result, please be aware that the AIM is populated with news articles from various third-party outlets and news aggregators with which the OECD has no affiliation.
- Views Expressed: Please note that any views or opinions expressed on the AIM are solely those of the third-party outlets that created them and do not represent the views or opinions of the OECD. Further, the inclusion of any news article or incident does not constitute an endorsement or recommendation by the OECD.
- Errors and Omissions: The OECD cannot guarantee and does not independently verify the accuracy, completeness, or validity of third-party information provided in the AIM. You should be aware that information included in the AIM may contain various errors and omissions.
- Intellectual Property: Any of the copyrights, trademarks, service marks, collective marks, design rights, or other intellectual property or proprietary rights that are mentioned, cited, or otherwise included in the AIM are the property of their respective owners. Their use or inclusion in the AIM does not imply that you may use them for any other purpose. The OECD is not endorsed by, does not endorse, and is not affiliated with any of the holders of such rights, and as such, the OECD cannot and does not grant any rights to use or otherwise exploit these protected materials included herein.