AI-Driven Deepfake Scams Cause Widespread Financial Harm in the U.S.

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals in the U.S. are increasingly using AI-generated deepfakes to impersonate CEOs, managers, and family members, tricking victims into transferring money or leaking sensitive data. Over 105,000 such attacks were recorded in 2024, resulting in more than $200 million in financial losses and significant emotional harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being used to create deepfake audio and video to impersonate people, leading to scams that have caused significant financial losses and emotional harm. The AI's role is pivotal in enabling these scams by making the impersonations highly convincing and difficult to detect. The harms described include financial fraud (harm to property) and emotional manipulation (harm to communities). Since these harms are occurring and directly linked to the use of AI systems, the event qualifies as an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability; Privacy & data governance; Robustness & digital security; Safety; Respect of human rights

Industries
Financial and insurance services; Digital security

Affected stakeholders
Consumers; Business

Harm types
Economic/Property; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Scams and frauds: Here are the tactics criminals use on you in the age of AI and cryptocurrencies

2025-09-18
San Francisco Gate
AI, Crypto Scams: Tactics Criminals Use Today

2025-09-18
Mirage News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered scams and deepfakes used to impersonate people and trick victims into transferring money or leaking sensitive data. The harms are financial and emotional, affecting individuals and communities. The AI systems' use is central to the scams' effectiveness and the resulting harm. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm (financial loss and emotional harm). The article does not merely warn about potential harm or discuss responses; it reports on ongoing, realized harm caused by AI-enabled scams.
Tactics criminals use on you in the age of AI and crypto

2025-09-19
Macau Daily Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake attacks used by criminals to impersonate CEOs, managers, or family members, leading to victims transferring money or leaking sensitive data. The harm is realized, with over 105,000 such attacks recorded and significant financial losses reported. The AI systems' misuse directly leads to harm to individuals (financial and emotional), fitting the definition of an AI Incident. The involvement of AI in generating synthetic content that causes harm is central to the event described.
Scams And Frauds: Here Are The Tactics Criminals Use On You In The Age Of AI And Cryptocurrencies - Stuff South Africa

2025-09-20
Stuff
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake attacks that have caused over $200 million in losses in a recent period, indicating realized harm. The AI systems are used in the development and use phases by scammers to impersonate people and manipulate victims, directly leading to financial and emotional harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial and emotional harm) and communities (through fraud). The article is not merely about potential risks or general AI developments but about ongoing, realized harms caused by AI-enabled scams.