Deepfake Scam Uses AI to Impersonate UK Leaders for Crypto Fraud


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfakes of UK Prime Minister Keir Starmer and Prince William are being used in fraudulent ads on Meta platforms to promote a scam cryptocurrency platform, "Immediate Edge." Researchers at Fenimore Harper identified over 250 such ads, which falsely present the figures as endorsing the scheme and expose users to potential financial harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

Deepfake technology—a generative AI system—has been used to create realistic but fraudulent ads featuring Keir Starmer and Prince William. This misuse has directly induced victims to provide personal information and invest in a scam, causing financial harm. Therefore, it is an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Financial and insurance services; Government, security, and defence

Affected stakeholders
Consumers

Harm types
Economic/Property; Reputational; Public interest; Human or fundamental rights; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Keir Starmer and Prince William used in deepfake crypto scam

2024-08-14
Finextra Research
Why's our monitor labelling this an incident or hazard?
Deepfake technology—a generative AI system—has been used to create realistic but fraudulent ads featuring Keir Starmer and Prince William. This misuse has directly induced victims to provide personal information and invest in a scam, causing financial harm. Therefore, it is an AI Incident.

AI deepfake videos of Starmer and Prince William in crypto scam

2024-08-13
The Independent
Why's our monitor labelling this an incident or hazard?
AI systems (Meta’s Llama 3.1) were used to create realistic fake adverts featuring public figures, directly facilitating a scam platform that deceived and defrauded users into depositing money. The AI’s misuse led to actual financial harm, meeting the definition of an AI Incident.

Deepfakes of Prince William Lure Social Media Users into an Investment Scam | McAfee Blog

2024-08-14
McAfee Blogs
Why's our monitor labelling this an incident or hazard?
Scammers leveraged an AI system for deepfake generation and SEO manipulation to deceive users, push them toward a fraudulent platform, and extort money. The harm (financial loss by victims) has occurred, and the AI system’s malicious use was pivotal in executing the scam. Therefore, this qualifies as an AI Incident.

Prince William targeted by AI crypto scammers

2024-08-14
Newsweek
Why's our monitor labelling this an incident or hazard?
The article describes the actual use of an AI deepfake system to create videos impersonating public figures in a scam that defrauded people of their money. The AI system’s deployment directly led to financial harm to victims. This qualifies as an AI Incident because it involves malicious use of AI generating harm (financial loss) to individuals.

Keir Starmer and Prince William AI deepfakes used to scam social media users

2024-08-13
Metro
Why's our monitor labelling this an incident or hazard?
This is an AI Incident because AI deepfake generation and voice-cloning systems were actively used to mislead social media users and facilitate a financial scam. The misuse of the AI systems directly contributed to harm through deception, potential financial losses, and the spread of disinformation.

Prince William in trouble after being hit with fake scam

2024-08-14
The News International
Why's our monitor labelling this an incident or hazard?
The incident involves actual misuse of generative AI to create deepfake advertisements that directly lead to financial harm. Victims are deceived by AI-generated content and encouraged to hand over personal data and money, fulfilling the criteria for an AI Incident.

UK Royal Family, Prime Minister Deepfakes Make Rounds on Meta

2024-08-14
Dark Reading
Why's our monitor labelling this an incident or hazard?
The incident involves AI-generated deepfakes of high-profile figures misused for financial fraud. The AI system’s outputs directly caused material harm (financial losses) to consumers, meeting the criteria for an AI Incident.

AI deepfakes of Prince William and Keir Starmer used to sell scam

2024-08-12
thetimes.com
Why's our monitor labelling this an incident or hazard?
AI deepfake technology was used to create realistic but fake videos of public figures promoting a scam, directly leading to harm by drawing people into fraudulent financial activities. Because the AI-generated deepfakes were central to the deception and the resulting financial harm, this fulfils the criteria for an AI Incident.

Prince William dragged into online video scandal as royal urged to take action

2024-08-14
GB News
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems to create deepfake videos of public figures, which are then used in scams causing financial harm to victims. The AI-generated content directly leads to harm by misleading people and facilitating fraud. This fits the definition of an AI Incident as the AI system's use has directly led to harm to people (financial loss) and harm to communities (misinformation and deception).

AI deepfakes of UK Prime Minister and Royals exploited in Facebook crypto scams | AI Meta | CryptoRank.io

2024-08-14
CryptoRank
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos created with Meta's large language model Llama 3.1 to impersonate public figures and promote a crypto scam. The scam has caused realized harm by deceiving users into providing personal information and losing money on a fake trading platform. This constitutes harm to individuals (financial injury) and harm to communities through disinformation. The AI system's use is central to the incident, as the deepfakes enabled the scam's credibility and reach. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in fraudulent activities.