From deepfake scams to biased AI: How incident reporting can help us keep ahead of AI’s harms

This initiative, Towards a common reporting framework for AI incidents, was made possible with guidance from Marko Grobelnik, Mark Latonero, Irina Orssich, and Elham Tabassi, co-chairs of the OECD Expert Group on AI Incidents.
A costly illusion: the Brad Pitt scam
Whether they admit it or not, most people have a celebrity crush.
Yet in real life, most fans never get the chance to interact with their favourite celebrities. But what if the tables were turned? What if the celebrity reached out first? Would the fan be able to resist?
Last year, a woman found herself caught in a situation that seemed too good to be true—and it was. She was led to believe she had captured the attention of none other than Brad Pitt himself. Through deepfake videos and AI-generated images, she became convinced that she was interacting with the movie star.
“They” got to know each other by conversing online. Then came requests for money. At first, they were small. But gradually, the demands grew larger until she had handed over 830,000 euros. By the time she realised the truth, the damage had been done. Authorities are still working to recover her funds, but the financial and emotional toll remains.
Growing threats
AI is reshaping scams, making them more convincing, widespread, and harder to detect. Deepfake technology and AI-generated messages are now powerful tools for fraud. In 2023, France alone recorded over 130,000 online scams, an 8% increase from the previous year. And scams are just one piece of the puzzle: AI also plays a part in non-consensual AI-generated imagery, disinformation campaigns, and even incidents involving self-driving cars and biased algorithms.
But not all AI-related problems stem from malicious intent. Some arise from flawed design. A major tech company once scrapped its AI-powered hiring tool after it was found to discriminate against women. Another firm faced a six-figure fine after its AI screening system automatically rejected female applicants over 55 and male applicants over 60.
AI mistakes can have serious consequences. One man lost thousands of dollars and spent ten days in jail after a faulty facial recognition match wrongly linked him to a high-end handbag theft in Louisiana, even though he lived in Atlanta. He was only cleared when investigators noticed a small mole on the actual suspect’s face that he himself did not have.
Bias in AI has also appeared in healthcare. One system unintentionally prioritised White patients over Black patients because it used healthcare costs, rather than medical urgency, as a proxy for healthcare needs. Fortunately, AI developers and experts intervened, reducing the bias by over 80%. The case underscores how AI decisions can be skewed by hidden proxy variables.
As AI incidents grow in scale and complexity, so does the need to track and address them.
READ THE REPORT: Towards a common reporting framework for AI incidents
AI incident reporting is critical to risk prevention
Artificial intelligence is becoming an integral part of our daily lives, offering significant benefits such as improving healthcare efficiency. However, its growing presence also raises concerns, including misinformation, privacy breaches, and threats to safety and security. Identifying and addressing these risks is a global priority for policymakers.
As part of this endeavour, policymakers have the complex task of reacting to incidents and learning from past events to prevent recurrence. A shared understanding of AI incidents at a global level would help create coordinated and efficient responses across countries and regulatory bodies. If we can understand the origins of AI incidents, their nature and their consequences, we have a greater chance of devising effective responses.
What are AI incidents, and why should we monitor them?
In May 2024, the OECD defined an AI incident as any event, circumstance, or series of events in which an AI system’s development, use, or malfunction causes harm, either directly or indirectly.
The AI Incidents Monitor (AIM) is a tool that tracks and analyses AI-related incidents reported in the media. Its results show that incidents are far-reaching and evolving. As AI advances, new risks emerge, and policymakers must address them swiftly in their governance responses.

What the common reporting framework does
The OECD developed a standardised reporting framework to address this need. The framework offers a consistent yet flexible way to document AI incidents across industries and countries, helping policymakers align their responses while accommodating diverse legal and policy environments.
By adopting a common framework for AI incident reporting, countries and organisations can:
- Make better-informed decisions: Analysing and comparing incident data helps policymakers understand AI incidents and assess risks, their severity, and their potential consequences.
- Identify and mitigate high-risk AI systems faster: Early identification of AI systems that pose significant risks enables intervention before major harms occur.
- Improve global knowledge-sharing: Learning from AI-related incidents that have already occurred helps policymakers and businesses prevent similar events in the future and develop the appropriate responses to help affected stakeholders.
- Align international regulatory approaches: A standardised framework will foster global cooperation in managing AI risks before they escalate.
A framework built to be comprehensive and flexible
To create an effective and adaptable reporting structure, the OECD studied four key resources: the OECD Framework for AI System Classification, the AI Incidents Database (AIID), the OECD Global Portal on Product Recalls, and the AI Incidents Monitor (AIM).
In this extensive evaluation, the OECD identified 88 criteria for characterising an AI system or incident. A rigorous selection process based on relevance, frequency, and feedback from policymakers and AI experts then refined these criteria to 29.
The framework is concise and comprehensive. It includes seven mandatory criteria to capture essential details of each incident, such as:
- A description of the incident at hand;
- The type of harm caused;
- The severity of the impact;
- The connection between the AI system and the incident.
The 29 criteria are grouped into eight broad dimensions that gather information on the economic context, environmental and societal impacts, and technical details of the AI system, including its tasks, inputs, and outputs.
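To make this concrete, here is a minimal sketch in Python of what an incident record built around the mandatory criteria might look like. The class, field names, and enumerations are illustrative assumptions for this post, not the framework’s actual schema; only the idea of mandatory fields for the incident description, harm type, severity, and the AI system’s role comes from the report.

```python
from dataclasses import dataclass, field
from enum import Enum


class HarmType(Enum):
    """Illustrative harm categories; the framework's actual taxonomy may differ."""
    PHYSICAL = "physical"
    ECONOMIC = "economic"
    PSYCHOLOGICAL = "psychological"
    REPUTATIONAL = "reputational"
    FUNDAMENTAL_RIGHTS = "fundamental rights"


class Severity(Enum):
    """Hypothetical ordinal severity scale."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class AIIncidentReport:
    """A minimal incident record mirroring the mandatory criteria named
    above: description, type of harm, severity, and the link between
    the AI system and the incident."""
    description: str
    harm_types: list[HarmType]
    severity: Severity
    ai_system_role: str  # how the AI system contributed to the harm
    # Stand-in for the remaining criteria across the eight dimensions,
    # e.g. economic context or technical details of the system.
    metadata: dict[str, str] = field(default_factory=dict)


# Example: the deepfake romance scam described earlier in this post.
report = AIIncidentReport(
    description="Victim defrauded of 830,000 euros through deepfake videos "
                "and AI-generated images impersonating a celebrity.",
    harm_types=[HarmType.ECONOMIC, HarmType.PSYCHOLOGICAL],
    severity=Severity.HIGH,
    ai_system_role="Generative AI used to fabricate convincing video and imagery.",
    metadata={"source": "media report"},
)
print(report)
```

Structuring reports around shared fields like these is what makes incidents comparable across countries: two reporters describing very different events still answer the same questions.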
Incident reporting will strengthen AI governance
As AI adoption accelerates, policymakers view enhanced incident tracking as essential to addressing AI-related challenges. In the future, systematic reporting and analysis of AI incidents will yield critical insights that inform policy decisions, helping mitigate risks and supporting responsible AI development.
The OECD tracks AI incidents through AIM and will progressively expand its capabilities by incorporating incidents submitted by stakeholders from diverse backgrounds. An open submission model will create a dynamic, collaborative space for reporting, analysing, and learning from AI incidents.
A global approach to AI risk management
A global AI incident reporting framework will help policymakers identify high-risk AI systems, understand their consequences, and foster trust in AI technology. By providing a clearer picture of AI-related risks and how to address them, AIM will serve as a vital resource for all AI stakeholders, including individuals, businesses, and governments.
By adopting this framework, the global community has a better chance of staying ahead of AI risks, responding swiftly to incidents, and anticipating future challenges before they escalate.