AI-Driven Scams Surge, Increasing Financial Harm and Public Concern

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals are increasingly using AI to create more convincing and harder-to-detect scams, leading to a rise in financial fraud, especially in the UK and Australia. Older adults in the US are particularly affected by AI-enabled scam ads on social media, prompting calls for platform accountability and reform.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being used maliciously to perpetrate scams that cause financial and emotional harm to individuals and businesses. The harms include fraud, identity theft, and exploitation of vulnerable groups, which fall under harm to persons and communities. The AI involvement is clear in the use of deepfake technology and AI-generated content to impersonate individuals and create fake companies. Since these harms are already occurring and the AI systems are pivotal in enabling these scams, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Tech giants unite to fight online scams

2026-03-30
Fox News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems as tools used both by scammers and by companies to detect scams, indicating AI system involvement. However, it does not describe a concrete event where AI use has directly or indirectly caused harm (an AI Incident), nor does it describe a specific event where AI use could plausibly lead to harm (an AI Hazard). Instead, it reports on a new voluntary industry accord to share data and improve AI-based scam detection, which is a societal and governance response to AI-related harms. It therefore fits the definition of Complementary Information: it provides context and updates on responses to AI-related challenges without reporting a new incident or hazard.

Older adults are losing patience with scam ads on social media

2026-04-01
BetaNews
Why's our monitor labelling this an incident or hazard?
While social media advertising systems likely use AI for targeting and content delivery, the article does not specify any AI system malfunction, misuse, or direct involvement causing harm. The focus is on the broader issue of scam ads on social media and platform responsibility rather than a concrete AI-driven incident or a plausible future AI hazard. Therefore, this is best classified as Complementary Information, as it provides supporting context and societal response considerations related to AI-enabled advertising systems but does not report a new AI Incident or AI Hazard.

Westpac Warns Of AI Driven Scam Surge

2026-04-01
Mirage News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously to perpetrate scams that cause financial and emotional harm to individuals and businesses. The harms include fraud, identity theft, and exploitation of vulnerable groups, which fall under harm to persons and communities. The AI involvement is clear in the use of deepfake technology and AI-generated content to impersonate individuals and create fake companies. Since these harms are already occurring and the AI systems are pivotal in enabling these scams, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

NCOA Survey: Older Adults Hold Social Media Platforms Accountable for Scam Ads and Call for Reform

2026-03-31
The Norfolk Daily News
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred because social media platforms use AI-driven algorithms for ad targeting and content dissemination. The harms described — widespread scams causing financial and health damage to older adults — are direct consequences of these AI-enabled advertising systems allowing scam ads to proliferate. The article documents realized harm, not just potential harm, and the AI systems' role in enabling the scam ads is pivotal. Although the article also discusses public opinion and calls for reform, its primary focus is the harm caused by AI-enabled scam ads. Hence, the event is best classified as an AI Incident.

Seven in Ten Consumers Say AI Is Making Fraud and Scam Attempts More Convincing

2026-04-01
Financial IT
Why's our monitor labelling this an incident or hazard?
The article explicitly states that fraudsters are using AI to enhance the sophistication and scale of scams, which has directly led to increased financial harm to consumers. This constitutes harm to people and communities (harm categories (a) and (d)). The AI systems' role is pivotal in making scams more convincing and harder to detect, thus directly causing harm. This therefore qualifies as an AI Incident rather than a hazard or complementary information, as the harm is occurring and is linked to AI use.

Tech giants unite to fight online scams

2026-03-30
Fox Wilmington
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI both by scammers to create more convincing scams and by companies to detect and prevent scams. However, it does not describe a concrete event where an AI system directly or indirectly caused harm (an AI Incident), nor does it describe a specific event where AI use could plausibly lead to harm in the future (an AI Hazard). Instead, it reports on a new collaborative initiative and the general landscape of AI-related scam threats and defenses, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI-related harms.

72% say AI is making scams harder to detect

2026-04-01
FinTech Global
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by criminals to perpetrate scams, directly harming people through financial fraud. The AI's role in enhancing scam tactics is explicitly mentioned, and the resulting harm (financial loss and increased scam attempts) is already occurring. This qualifies as an AI Incident because fraudsters' development and use of AI systems have directly led to harm to individuals and communities.

Westpac warns of AI driven scam surge

2026-04-01
westpac.com.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by scammers to create personalized and sophisticated scams, including deepfakes, a form of AI-generated media. These scams actively target people and businesses, causing financial harm and deception, which qualifies as harm to persons and communities. The use of AI systems to generate and spread these scams directly leads to harm, fulfilling the criteria for an AI Incident. Westpac's warning and advice further confirm that these harms are occurring and are significant.