Deepfake Scam Targets TV Host Esra Erol with Fake Investment Pitch


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers used deepfake AI to manipulate video and audio of Turkish TV host Esra Erol, adding BOTAS logos and false investment pitches. The fake ad circulated on social media to defraud users. Erol filed a criminal complaint and initiated legal action against the unauthorized use of her likeness and voice.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes an AI system being used to manipulate video content of a person to commit fraud, which constitutes a direct harm to the individual and involves violation of rights and potential financial harm. This fits the definition of an AI Incident because the AI system's use directly led to harm through fraudulent activity. The legal complaint and public reaction confirm the harm has occurred and is being addressed.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Robustness & digital security
Safety
Transparency & explainability

Industries
Media, social platforms, and marketing
Digital security
Energy, raw materials, and utilities

Affected stakeholders
Consumers
Women

Harm types
Economic/Property
Reputational
Human or fundamental rights
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


'AI fraud' outcry: 'I was furious when I saw the video'

2024-01-15
Milliyet
Why's our monitor labelling this an incident or hazard?
The article describes an AI system being used to manipulate video content of a person to commit fraud, which constitutes a direct harm to the individual and involves violation of rights and potential financial harm. This fits the definition of an AI Incident because the AI system's use directly led to harm through fraudulent activity. The legal complaint and public reaction confirm the harm has occurred and is being addressed.

Esra Erol, the latest target of AI fraud, takes legal action

2024-01-13
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake methods to alter Esra Erol's video and audio to perpetrate fraud. This manipulation has caused direct harm by misleading people and facilitating scams, fulfilling the criteria for an AI Incident. The involvement of AI in the creation of deceptive content that leads to realized harm (fraud) justifies classification as an AI Incident rather than a hazard or complementary information.

Esra Erol dragged into AI fraud as well! She went straight to the prosecutor's office

2024-01-13
Akşam
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated video content that was used to attempt fraud, which is a violation of rights and causes harm to individuals targeted by the scam. Since the fraud attempt is actively occurring and legal action has been initiated, this constitutes an AI Incident due to direct harm caused by the AI system's misuse.

Esra Erol becomes the target of AI fraud

2024-01-13
Yeni Alanya Gazetesi
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (deepfake technology) being used to manipulate video and audio content of a known person to perpetrate fraud. This use of AI directly causes harm by enabling scams that deceive people, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The involvement of AI in the creation of misleading content that leads to financial harm is explicit and realized, not merely potential.

Esra Erol used as a tool in AI fraud!

2024-01-15
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to manipulate video and audio content for fraudulent purposes, directly causing harm to individuals targeted by the scam. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial fraud and violation of rights). The description confirms the harm is realized, not just potential, and legal proceedings are underway, reinforcing the incident classification.

Esra Erol also falls victim to AI fraud! The famous presenter went straight to the prosecutor's office!

2024-01-13
Kartal Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to alter a video and audio of Esra Erol to deceive and defraud people, which is a direct harm to individuals targeted by the scam and a violation of the victim's rights. The harm is realized as the fraudulent content is actively used to scam people. The victim's legal response confirms the seriousness of the incident. Therefore, this qualifies as an AI Incident under the framework.

Esra Erol becomes scammers' new target! The video circulating on social media turned out to be fake!

2024-01-13
Halk TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (deepfake technology) used maliciously to create fake video and audio of a public figure to perpetrate a scam. This caused direct harm to people targeted by the scam (financial harm) and violates legal and ethical rights. Therefore, it meets the criteria for an AI Incident as the AI system's use directly led to harm through fraud.