AI-Generated Deepfake Ads Target Balkan Celebrities in Fraud Scheme


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Actors Andrija Milošević and Džejla Ramović were victims of online scams that used AI-generated deepfakes of their likenesses and voices in fraudulent ads promoting gambling and quick-money schemes. Both publicly denied any involvement and took legal action, warning the public about the unauthorized use of their identities. The incidents occurred in the Balkans.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved, as it generates the actors' likenesses without consent. The use of this AI-generated content directly leads to harm by facilitating online fraud and deception. The actors themselves identify this as a classic scam and are taking legal action. This fits the definition of an AI incident because the AI system's use has directly led to harm (fraud and deception) affecting people.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Reputational, Economic/Property, Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Andrija Milošević the victim of an online scam: "I have not advertised games of chance for three years, nor do I intend to"

2025-10-21
Klix.ba
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it generates the likeness of the actor without consent. The use of this AI-generated content directly leads to harm by facilitating online fraud and deception. The actor himself identifies this as a classic scam and is taking legal action. This fits the definition of an AI Incident because the AI system's use has directly led to harm (fraud and deception) affecting people.

Andrija Milošević the victim of an online scam: "Don't fall for it, I have nothing to do with it"

2025-10-21
Radio Sarajevo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fraudulent content impersonating a public figure, which directly leads to harm by deceiving people and potentially causing financial loss. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals (victims of the scam) and harm to the reputation of the person impersonated. The actor's warning and legal actions further confirm the harm is occurring and recognized.

Andrija the victim of a scam, has hired a lawyer: "I have nothing to do with it"

2025-10-21
Cafe del Montenegro
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it is used to generate the likeness of the actor without consent, constituting misuse of AI-generated content. The harm is realized as a fraud (scam) affecting the individual and potentially the public. Therefore, this qualifies as an AI Incident due to violation of rights and harm caused by the AI system's use.

"I am trying to put an end to this SCOURGE": Andrija Milošević has hired a lawyer, here is what he is facing

2025-10-21
Srpskainfo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated voice and likeness without consent, which is a misuse of AI technology causing harm to the individuals involved (violation of rights) and potentially to the public (through deception). This fits the definition of an AI Incident because the AI system's use has directly led to harm through unauthorized exploitation and fraudulent advertising.

ANDRIJA MILOŠEVIĆ THE VICTIM OF A SCAM: The actor urgently hired a lawyer, these are the details of the case

2025-10-21
espreso.co.rs
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate the likeness and voice of individuals without their permission, which is a misuse of AI systems. This misuse has directly caused harm by misleading the public and damaging the individuals' reputations, fitting the definition of an AI Incident under violations of rights and other significant harms. The involvement of AI in generating the fraudulent content is clear and central to the harm described.

Beloved Montenegrin actor targeted by scammers, speaks out about it all: "Don't fall for it!"

2025-10-21
Showbuzz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI is used to generate the actor's likeness in fraudulent ads, which is a misuse of AI technology. This misuse has directly led to harm by enabling scams that can cause financial and reputational damage. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (fraud and deception) affecting individuals and the actor's reputation.

Well-known actor targeted by scammers: "Share this so as many people as possible see it"

2025-10-21
direktno.hr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake ads featuring the actor's likeness, which is being used in fraudulent schemes. This misuse of AI has directly caused harm by deceiving people, fitting the definition of an AI Incident due to violation of rights and harm to communities through fraud. The actor's warning and legal efforts confirm the harm is ongoing and recognized.