AI-Generated Content Used in Scams Causes Financial and Emotional Harm in the U.S.

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers in the United States are using AI-generated photos, voice clones, and deepfake videos to create convincing scams, including romance, investment, and emergency schemes. These AI-enabled tactics have led to financial loss and emotional harm for victims, prompting warnings from the U.S. Postal Inspection Service.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-generated content in scams that have already caused harm to victims through financial loss and identity theft. The AI systems are involved in the malicious use of generating realistic fake content to deceive people, which directly leads to harm to individuals (harm to persons). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through scams.[AI generated]
AI principles
Privacy & data governance, Transparency & explainability

Industries
Digital security, Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

U.S. Postal Inspectors Warn Customers to Avoid Scams that Use Artificial Intelligence

2026-03-01
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of AI-generated content used by scammers, which could plausibly lead to harm such as financial loss or identity theft. However, the article does not report any actual harm occurring from these AI scams, only the potential risk and preventive measures. Therefore, this event fits the definition of Complementary Information, as it describes a societal/governance response to, and raises public awareness of, a broader AI-related risk without reporting a specific AI Incident or AI Hazard.
U.S. Postal Inspectors Warn Customers to Avoid Scams that Use Artificial Intelligence

2026-03-01
WFXG FOX54
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of AI-generated content used by scammers, which could plausibly lead to harm such as financial loss and identity theft. However, the article does not report a specific realized harm or incident; rather, it warns about potential scams and provides advice on avoiding them. Therefore, this is an AI Hazard: it describes a credible risk of harm from AI misuse, but no actual incident is detailed.
U.S. Postal Inspectors Warn Customers to Avoid Scams that Use Artificial Intelligence | Weekly Voice

2026-03-01
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content in scams that have already caused harm to victims through financial loss and identity theft. The AI systems are involved in the malicious use of generating realistic fake content to deceive people, which directly leads to harm to individuals (harm to persons). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through scams.
Can You Recognize An Artificial Intelligence (AI) Scam? - The Gonzales Inquirer

2026-03-01
Gonzales Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated photos and voice clones are used by scammers to deceive victims, directly leading to financial and emotional harm. This fits the definition of an AI Incident because the development and use of AI systems have directly led to harm to people (financial loss and emotional harm). The article focuses on realized harm caused by AI-enabled scams, not merely potential harm or general information, so it is classified as an AI Incident.
What you need to know about AI scams

2026-03-01
WAPT
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, as it references AI-generated content and deepfakes used in scams, indicating AI system involvement in the fraudulent schemes. However, it does not report a concrete incident of harm caused by AI systems, nor does it describe a specific event in which AI misuse or malfunction led to realized harm. Instead, it provides general information and guidance to consumers about potential AI-enabled fraud risks and how to avoid them. This aligns with the definition of Complementary Information, as it supports understanding of AI-related harms and responses without reporting a new incident or hazard.