Deepfake Videos on TikTok Falsely Show Celebrities Endorsing Lula for 2026 Election


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos on TikTok falsely depict celebrities like Cristiano Ronaldo, Lionel Messi, and Bruna Marquezine endorsing President Lula for Brazil's 2026 election. These manipulated videos, created with deepfake algorithms, spread misinformation and risk misleading the public and influencing political opinions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI deepfake technology to create realistic but fake videos of celebrities endorsing a political candidate directly involves an AI system (deepfake algorithms). This manipulation can cause harm to communities by spreading misinformation and undermining trust in political processes, fulfilling the criteria for harm to communities. Since the harm is occurring through the dissemination of these videos, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy; Safety

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
General public

Harm types
Reputational; Public interest; Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation; Organisation/recommenders


Articles about this incident or hazard


Deepfakes on TikTok use celebrities to simulate support for Lula

2025-08-14
UOL notícias

Deepfakes on TikTok use celebrities to simulate support for Lula in the 2026 elections

2025-08-15
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is an AI method for generating realistic fake videos. The misuse of these AI-generated videos to simulate political endorsements directly leads to harm by spreading misinformation and potentially manipulating public opinion, which harms communities and violates rights. Since the harm is occurring through the dissemination of these videos, this qualifies as an AI Incident rather than a hazard or complementary information. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident.

Deepfakes on TikTok use celebrities to simulate support for Lula

2025-08-13
Jornal Estado de Minas | Notícias Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake algorithms) used to generate manipulated videos of public figures falsely endorsing a political candidate. The harm is realized as these videos mislead the public, potentially influencing voter behavior and undermining trust in political processes, which qualifies as harm to communities and a violation of rights. The AI system's use directly led to the dissemination of false political endorsements, meeting the criteria for an AI Incident.

PROJETO COMPROVA: Deepfakes on TikTok use celebrities to simulate support for Lula in the 2026 elections

2025-08-14
abc+ | abcmais.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology to create realistic but false videos of public figures making political endorsements they never made. This use of AI directly leads to harm by misleading the public and potentially manipulating political opinions, which fits the definition of an AI Incident under harm to communities and violation of rights. The harm is realized as the videos have been widely viewed and engaged with, spreading misinformation. Therefore, this is classified as an AI Incident.

Deepfakes on TikTok use celebrities to simulate support for Lula in the 2026 elections

2025-08-14
JC
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake algorithms) to create realistic but fake videos of public figures making political endorsements they never made. This use of AI directly leads to harm by spreading misinformation and potentially manipulating voters, which constitutes harm to communities and a violation of rights related to truthful information and fair political processes. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm through misinformation and political manipulation.