AI-Generated Deepfake Scam Targets French TV Host Nikos Aliagas

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers used AI to create fake videos and messages impersonating TV host Nikos Aliagas, employing his face and voice to promote fraudulent cryptocurrency schemes and giveaways on social media. The deepfake content misled victims, prompting Aliagas to publicly warn followers about the AI-driven scam.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI was used to create fake advertisements featuring the faces and voices of celebrities to lure people into scams. This use of AI-generated synthetic media (deepfakes) to impersonate individuals has directly caused harm by drawing victims into fraudulent schemes. Because that harm is realized, the event meets the criteria for an AI Incident: rights violations and harm to communities through deception and fraud.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Financial and insurance services; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property; Reputational; Human or fundamental rights; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Après Elise Lucet, Nikos Aliagas victime d'une arnaque sur les réseaux sociaux (After Elise Lucet, Nikos Aliagas falls victim to a social media scam)

2024-04-09
Capital.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create fake advertisements featuring the faces and voices of celebrities to lure people into scams. This use of AI-generated synthetic media (deepfakes) to impersonate individuals has directly caused harm by drawing victims into fraudulent schemes. Because that harm is realized, the event meets the criteria for an AI Incident: rights violations and harm to communities through deception and fraud.

Nikos Aliagas victime d'une grave arnaque : l'animateur pris pour cible (Nikos Aliagas the victim of a serious scam: the host targeted)

2024-04-10
Tribunal Du Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI was used to generate fake videos and messages impersonating celebrities, including Nikos Aliagas, to scam people into believing false promotions. This misuse of AI has directly caused harm by enabling fraud and misleading the public, fitting the definition of an AI Incident due to realized harm involving AI-generated deceptive content.

"Ce sont des escrocs" : la voix et le visage de Nikos Aliagas utilisés dans le cadre d'une arnaque, l'animateur de TF1 réagit (photo) ("They are crooks": Nikos Aliagas's voice and face used in a scam, the TF1 host reacts)

2024-04-09
Sudinfo.be
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create a fake video of Nikos Aliagas's face and voice, deployed in a scam promoting a dubious cryptocurrency platform. This misuse of AI-generated synthetic media directly causes harm by misleading people and facilitating fraud, violating rights and harming communities. It therefore qualifies as an AI Incident.

Nikos Aliagas victime d'une énorme arnaque : "soyez vigilants, ce sont des escrocs !" (Nikos Aliagas the victim of a massive scam: "be vigilant, they are crooks!")

2024-04-08
Public.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated fake content using the victim's face and voice is being used to scam people. Because this deceptive AI-generated content has directly led to harm in the form of financial scams targeting individuals, the event qualifies as an AI Incident.

Nikos Aliagas victime d'une arnaque : l'animateur pris pour cible, "ce sont des escrocs" (Nikos Aliagas the victim of a scam: the host targeted, "they are crooks")

2024-04-09
Melty.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to fake Nikos Aliagas's image, voice, and identity to perpetrate a scam that has already harmed victims who fell for the fraudulent offers. Because the AI system's use directly led to realized harm (financial loss and loss of personal data) through deception and fraud, this event is classified as an AI Incident.