Deepfake Video of Bulgarian Singer Azis Used in TikTok Scam

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video of Bulgarian pop-folk singer Azis has been circulating on TikTok, falsely offering €20,000 to users who follow the account. Created using AI, the video could deceive many of the account's 13,000 followers and 500,000 viewers. Meanwhile, Azis himself is in Miami, unaware of the scam. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI to create a fake video of a celebrity, which is then used to deceive and defraud people on a social media platform. This constitutes significant harm to individuals and communities through fraud and deception, and the AI system's use in generating the fake video is directly linked to the harm caused by the scam. Therefore, this qualifies as an AI Incident. [AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability

Industries
Media, social platforms, and marketing; Digital security; Arts, entertainment, and recreation

Affected stakeholders
Consumers; Other

Harm types
Reputational; Economic/Property; Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Another Scam: Fake Azis Profile Gives Away €20,000 (VIDEO)

2025-02-08
Fakti.bg – Let's Bring the Facts to Light
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create a fake video of a celebrity, which is then used to deceive and defraud people on a social media platform. This constitutes significant harm to individuals and communities through fraud and deception, and the AI system's use in generating the fake video is directly linked to the harm caused by the scam. Therefore, this qualifies as an AI Incident.
Crazy €20,000 Scam Involving Azis and Eggs Drives Bulgarian TikTok Wild

2025-02-08
Petel.bg
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate fake video content and comments impersonating a public figure to perpetrate a scam. This use of AI directly leads to harm by deceiving and potentially causing financial loss to individuals who trust the fake profile. Therefore, it qualifies as an AI Incident due to realized harm caused by the AI system's use in the scam.
WARNING: Fake Azis Is Giving Away €20,000! (VIDEO)

2025-02-07
Telegraph.bg
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of artificial intelligence to create a fake video of Azis, which is used to deceive users with a false monetary offer. The AI system's use directly leads to harm by enabling fraud and misinformation, which can cause financial and reputational damage. The presence of a large audience increases the scale of potential harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to people (users) through deception and potential financial loss.
Fake Azis Gives Away €20,000

2025-02-07
plovdivmedia.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to create a fake video of a public figure, which is used to deceive users into believing a false offer of money. The AI system's output (the deepfake video) is central to the harm, as it misleads and could cause financial or emotional harm to viewers. The harm is realized (not just potential) because the fake profile is active with many followers and views, increasing the risk of actual victimization. This fits the definition of an AI Incident as the AI system's use has directly led to harm (fraud, deception) to people.
Warning! Fake Azis Runs Rampant on TikTok, Handing Out €20,000

2025-02-07
marica.bg
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, explicitly identified in the article as artificial intelligence, used to create a deepfake video of a public figure. The misuse of this AI system has directly led to harm by spreading deceptive content that can mislead and potentially defraud users, fulfilling the criteria for harm to individuals and communities. The harm is realized: the fake profile has thousands of followers and hundreds of thousands of views, indicating active dissemination and impact. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.