Influencer Nati Jota's Image Misused by AI for Scams


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Influencer Nati Jota reported that her image was altered with AI and used in scams posing as online betting sites and mobile phone brands. She expressed anger and concern on social media, warning her followers about the potential fraud. This misuse of her image without consent highlights how AI can be used to violate personal rights and enable scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

This is a direct misuse of AI (deepfake image and voice synthesis) to impersonate a public figure, trick consumers, and damage her image. The harm (fraud, reputational damage, deception of followers) has already occurred, constituting an AI-related incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security; Safety

Industries
Media, social platforms, and marketing; Consumer products; Consumer services; Digital security

Affected stakeholders
Consumers; Women

Harm types
Reputational; Psychological; Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Nati Jota's ordeal over being linked to an AI scam: "It makes me feel powerless"

2024-08-06
La Nacion
Why's our monitor labelling this an incident or hazard?
This is a direct misuse of AI (deepfake image and voice synthesis) to impersonate a public figure, trick consumers, and damage her image. The harm (fraud, reputational damage, deception of followers) has already occurred, constituting an AI-related incident.

Nati Jota reported that her image is being used to scam people

2024-08-07
El Litoral
Why's our monitor labelling this an incident or hazard?
Scammers have deployed AI systems to create falsified endorsements (altered images and synthetic voice video) of Nati Jota, directly leading to fraud against consumers. This is a misuse of AI that has resulted in real harm, meeting the definition of an AI Incident.

"I'm angry, worried, and distressed": Outraged, Nati Jota reported scams being run in her name

2024-08-06
El Intransigente
Why's our monitor labelling this an incident or hazard?
The article describes a deliberate misuse of AI to generate deepfake videos with Nati Jota’s face and voice to conduct scams. This misuse has directly led to the potential for financial harm and deception of users. Hence, this qualifies as an AI Incident, where the AI system’s use has materialized in harmful outcomes.

Nati Jota reported that her image is being used with artificial intelligence to scam people | On social media

2024-08-06
Los Andes
Why's our monitor labelling this an incident or hazard?
AI systems (deepfake image and voice generators) are explicitly used to create fraudulent endorsements, resulting in scams against users. The harm—fraud and potential monetary loss—has directly occurred or is occurring, making this an AI Incident.

Nati Jota reported that her image is being used with artificial intelligence to scam people

2024-08-06
La Capital MdP
Why's our monitor labelling this an incident or hazard?
The event describes the actual malicious use of AI (deepfake image and voice generation) to impersonate the influencer and scam individuals. This misuse has directly led to financial harm and deception, meeting the criteria for an AI Incident.

Nati Jota made a serious accusation: her image is being used with artificial intelligence to scam people

2024-08-07
Diario Río Negro
Why's our monitor labelling this an incident or hazard?
This is a direct misuse of AI-generated content (deepfake images and voice) to defraud individuals via fake endorsements, causing financial harm and deception. The AI system’s outputs (altered image and speech) are being used maliciously, fulfilling the criteria for an AI incident.

Nati Jota's fury after reporting that her face was used to promote scams on social media: "I'm angry, worried, and distressed"

2024-08-06
Clarin
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate a video with Nati Jota's face and a similar voice promoting a scam product without her consent. This misuse of AI-generated content has directly led to fraudulent activities (scams) that harm consumers financially and damage the influencer's reputation. Therefore, it qualifies as an AI Incident due to realized harm caused by the AI system's misuse.

Nati Jota's statement after being linked to a scam on social media: "I'm worried"

2024-08-07
Data Diario
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create a fake video of Nati Jota recommending investments she never made, which is part of a scam on social media. The AI system's use here directly leads to harm by deceiving people into falling for a financial scam, fulfilling the criteria for an AI Incident under harm to communities and individuals. The harm is realized, not just potential, as people are being scammed. Therefore, this is an AI Incident.

Nati Jota reported that her image is being used with artificial intelligence for scams

2024-08-06
Rosario3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create videos with the person's face and a voice similar to hers, recommending investments and products she never endorsed. This constitutes a direct AI Incident because the AI-generated content is being used maliciously to perpetrate scams, causing harm to individuals who might be defrauded. The harm is realized (people have been scammed or at risk), and the AI system's misuse is central to the incident. Therefore, this is classified as an AI Incident.

Nati Jota's fury over the use of her image with artificial intelligence in an alleged scam

2024-07-24
La Nacion
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate manipulated audio-visual content (deepfake) that directly enables a fraud, posing clear harm to individuals. This constitutes an AI Incident because the AI-driven deepfake has already been deployed to perpetrate a scam.

Nati Jota reported that artificial intelligence was used to promote a scam with her face

2024-07-23
Clarin
Why's our monitor labelling this an incident or hazard?
The incident involves the malicious use of AI-generated content (face and voice deepfake) to deceive viewers into clicking a fraudulent link, directly leading to potential monetary loss and violation of the influencer’s personal rights. This is a realized harm caused by AI misuse, fitting the definition of an AI Incident.

Nati Jota's fury over the use of her image with artificial intelligence in an alleged scam

2024-07-24
es-us.vida-estilo.yahoo.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system being used to generate manipulated video and audio content of a public figure to promote a fraudulent scheme. The harm is realized as the scam is actively being promoted and people are warned about falling victim. The AI system's misuse directly causes harm through identity theft and fraud, meeting the criteria for an AI Incident.