WPP Executives Targeted by Deepfake Scam Using AI Voice Cloning



Hackers attempted to defraud WPP executives by using deepfake video and AI voice-cloning technology to impersonate CEO Mark Read. They used a fake WhatsApp account and a Microsoft Teams call to solicit personal information and money. The scam was unsuccessful but highlighted the increasing sophistication of cyberattacks. [AI generated]

Why's our monitor labelling this an incident or hazard?

Fraudsters used an AI system (voice-cloning deepfake) to impersonate the WPP CEO and a senior executive in a virtual meeting to solicit money and personal details. This is a realized, AI-enabled scam—even though it was unsuccessful—constituting direct harm from the misuse of AI. [AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Transparency & explainability; Accountability; Safety; Respect of human rights

Industries
Media, social platforms, and marketing; Digital security; IT infrastructure and hosting

Affected stakeholders
Workers; Business

Harm types
Economic/Property; Reputational; Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard


CEO of world's biggest ad firm targeted by deepfake scam

2024-05-10
Yahoo
Why's our monitor labelling this an incident or hazard?
Fraudsters used an AI system (voice-cloning deepfake) to impersonate the WPP CEO and a senior executive in a virtual meeting to solicit money and personal details. This is a realized, AI-enabled scam—even though it was unsuccessful—constituting direct harm from the misuse of AI.

CEO of world's biggest ad agency falls victim to elaborate deepfake...

2024-05-10
New York Post
Why's our monitor labelling this an incident or hazard?
This event involves the malicious use of AI (voice cloning and deepfake generation) to facilitate fraud. Although the scam was detected and no financial or data loss occurred, it represents a near-miss scenario where AI made a harmful incident plausible. Therefore, it constitutes an AI Hazard rather than a realized incident.

Top WPP advertising executive hit by scammers using voice cloning attack

2024-05-13
TechRadar
Why's our monitor labelling this an incident or hazard?
Hackers employed AI systems (publicly available deepfake video and voice-cloning software) to convincingly impersonate senior executives and orchestrate a fraudulent Teams call aimed at extracting personal details and money. The AI system’s involvement was central to the attempted harm, qualifying this as an AI Incident.

CEO of world's biggest ad firm targeted by deepfake scam

2024-05-10
The Guardian
Why's our monitor labelling this an incident or hazard?
Fraudsters employed a voice-cloning AI deepfake and video manipulation to conduct a phishing scam targeting WPP leadership. Although the attack was thwarted, it constitutes a direct misuse of AI with clear intent to defraud, satisfying the criteria for an AI Incident rather than a mere hazard or complementary update.

WPP boss targeted by deepfake scammers using voice clone

2024-05-10
Financial Times News
Why's our monitor labelling this an incident or hazard?
Scammers used AI voice-cloning and deepfake video content in a real-world attempted fraud, impersonating corporate leaders to extract personal details and money. Although the attack was foiled, it constitutes an AI Incident because the AI system’s malicious use directly led to potential harm.

Hackers Try to Steal Money, Personal Information From Executives at the World's Largest Advertising Company Using a Deepfake of the CEO

2024-05-13
Entrepreneur
Why's our monitor labelling this an incident or hazard?
This event describes the malicious use of a deepfake AI system (voice cloning and video synthesis) that directly led to an attempted theft and breach of privacy. Even though the attack was unsuccessful, it constitutes a realized security incident involving AI misuse.

WPP's Read Subject Of Deepfake Scam Using Gen AI

2024-05-13
MediaPost
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create a deepfake voice, which is an AI system. The scam attempt was a malicious use of AI aiming to cause financial and reputational harm. Although the harm was averted, the event demonstrates a plausible risk of harm from AI misuse. Therefore, it qualifies as an AI Hazard rather than an AI Incident, as no actual harm materialized but the potential for harm was credible and significant.

Deepfake Fraud Attempts on Business on the Rise

2024-05-13
supplychainbrain.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice clones and deepfake videos being used to impersonate executives and deceive employees, leading to financial fraud and identity theft. These harms fall under (a) injury or harm to persons (financial harm) and (c) violations of rights (identity theft). The AI systems' use directly caused these harms, qualifying the events as AI Incidents rather than hazards or complementary information.

Boss of world's biggest ad firm impersonated by AI in elaborate deepfake scam

2024-05-10
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article describes the direct misuse of AI systems—voice cloning and video deepfake—to impersonate executives and attempt financial fraud and data theft. Although the scam was unsuccessful, it involved actual deployment of AI for criminal activity and attempted harm, fitting the definition of an AI Incident.