AI Deepfake Voice Scams Target 1 in 4 Americans, Causing Financial and Emotional Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake voice calls have targeted one in four Americans in the past year, leading to significant financial losses and emotional distress, especially among seniors. The widespread use of AI in these scams has eroded trust in mobile networks and prompted calls for stricter regulation and carrier accountability.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI voice deepfakes being used in fraudulent calls that have directly led to financial harm to victims, particularly older adults who have lost significant amounts of money. The AI system's use in cloning voices for scams is a direct cause of harm. The event involves the use of AI systems (deepfake voice generation) leading to realized harm (financial losses and erosion of trust), meeting the criteria for an AI Incident. The discussion of regulatory demands and carrier responsibility is complementary but does not change the primary classification.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Economic/Property; Psychological; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

One in four Americans receive deepfake voice calls - BetaNews

2026-03-02
BetaNews
State of the Call 2026: AI Deepfake Voice Calls Hit 1 in 4 Americans as Consumers Say Scammers Are Beating Mobile Network Operators 2-to-1

2026-03-02
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered deepfake voice calls being used by scammers to defraud consumers, with one in four Americans having received such calls and significant financial losses reported, especially among seniors. The harm includes financial loss, emotional distress, and erosion of trust in communication infrastructure, fitting the definition of harm to persons and communities. The AI system's use in generating deepfake voices is central to the incident, and the failure of mobile network operators to prevent these calls contributes indirectly to the harm. Thus, the event meets the criteria for an AI Incident.
State of the Call 2026: AI Deepfake Voice Calls Hit 1 in 4 Americans as Consumers Say Scammers Are Beating Mobile Network Operators 2-to-1

2026-03-02
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake voice calls being weaponized by scammers, directly causing harm to consumers through fraudulent calls. The harm is realized and widespread, affecting 1 in 4 Americans according to the report. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to people (scam victims) and communities (consumer trust and network integrity).
State of the Call 2026: AI Deepfake Voice Calls Hit 1 in 4

2026-03-02
MarTech Series
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered deepfake voice calls being used in scams that have caused financial losses and emotional harm to people, particularly seniors. This constitutes direct harm caused by the use of an AI system (deepfake voice generation) in fraudulent activity. The harms include financial loss (harm to persons) and emotional distress (harm to persons). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm. The discussion of potential regulatory responses and AI-based defenses is complementary but does not change the classification.
State of the Call 2026: AI Deepfake Voice Calls Hit 1 in 4 Americans as Consumers Say Scammers Are Beating Mobile Network Operators 2-to-1

2026-03-02
mykxlg.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake voice calls being used by scammers to deceive and financially harm consumers, with concrete data on the prevalence and impact of these scams. The harm includes financial losses and emotional distress, particularly among seniors. The AI system's role in generating convincing fake voices is central to the scam's success, directly causing harm. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm to people and communities. The discussion of regulatory and liability responses is complementary but does not change the primary classification.