AI Deepfakes Used in Celebrity Perfume Scam Mislead Consumers


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos and voiceovers falsely depicted celebrities like Molly-Mae Hague endorsing Nyla Arabiyat Prestige perfume on social media. These unauthorized endorsements misled consumers into purchasing the product, causing financial harm and violating the rights of the individuals impersonated. Public figures have warned followers about the ongoing AI-driven scam.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system generating deepfake video and audio content that impersonates a real person without consent, which has directly caused harm by misleading consumers into buying a product under false pretenses. The AI's role is pivotal in creating the fake endorsement that led to financial harm and reputational damage. This fits the definition of an AI Incident as it involves harm to communities and violations of rights due to the AI system's use.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Transparency & explainability

Industries
Media, social platforms, and marketing; Consumer products

Affected stakeholders
Consumers; Other

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Molly Mae hits out at viral AI video

2025-07-21
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating deepfake video and audio content that impersonates a real person without consent, which has directly caused harm by misleading consumers into buying a product under false pretenses. The AI's role is pivotal in creating the fake endorsement that led to financial harm and reputational damage. This fits the definition of an AI Incident as it involves harm to communities and violations of rights due to the AI system's use.

Molly-Mae issues AI warning to followers after fake perfume endorsement

2025-07-22
The Independent
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a synthetic video/audio clip impersonating a celebrity endorsing a product, which is a misuse of AI-generated content. While this could plausibly lead to consumer deception and financial harm, the article only reports a warning and no confirmed incidents of harm. Therefore, this qualifies as an AI Hazard due to the plausible risk of harm from AI-generated fake endorsements.

Martin Lewis joins Molly-Mae Hague to slam deepfake social media ads in 'no protection' admission

2025-07-22
Manchester Evening News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used in scam ads that mislead consumers, causing financial harm. The AI system's use in generating fake celebrity endorsements is central to the harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial scams) and harm to communities (misinformation and deception). The discussion of lack of regulation and ongoing scams supports the assessment that harm is realized, not just potential.

Molly-Mae Hague left 'gobsmacked' as she was forced to hide truth from fan during face-to-face meeting

2025-07-22
Manchester Evening News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and voice impersonations used to falsely endorse products, causing fans to buy items the celebrity never endorsed. This constitutes a violation of rights (likeness and possibly consumer protection rights) and harm to communities (financial scams). The AI system's misuse is central to the harm occurring, meeting the criteria for an AI Incident.

Martin Lewis and Molly-Mae Hague slam deepfake social media ads

2025-07-23
Birmingham Mail
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake videos used in scam advertisements impersonating celebrities, which have caused actual financial harm to consumers. The AI system's role in generating realistic fake endorsements is pivotal to the scam's success. This constitutes a violation of consumer rights and results in harm to individuals (harm to communities and property through financial loss). Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's misuse in deepfake scam ads.

A closer look at the Nyla Perfume scam and the rise of unauthorised endorsement

2025-07-24
Intelligent CIO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voiceovers and deepfakes used to falsely represent celebrities endorsing a product without their consent. This misuse of AI has directly caused harm by misleading consumers into purchasing under false pretenses and violating the rights of the individuals whose likenesses are exploited. The involvement of AI in creating these fake endorsements is central to the harm described, fulfilling the criteria for an AI Incident under violations of human rights and intellectual property rights. The harm is realized, not just potential, as consumers have been influenced to buy the product based on these AI-generated false endorsements.