AI Deepfake of Portuguese Politician Used for Misinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI was used to create a misleading video featuring Portuguese politician André Ventura, simulating his voice to promote a financial platform. This deepfake, identified by MediaLab and reported to the National Election Commission, represents a case of disinformation with commercial motives, potentially impacting democratic processes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article details an active deployment of AI to impersonate a political leader in a false advertisement, representing a direct misuse of generative AI that has already led to disinformation. Because the harm (misinformation) is occurring and stems directly from the AI system’s outputs, this qualifies as an AI Incident.[AI generated]

AI principles
Transparency & explainability
Respect of human rights
Privacy & data governance
Democracy & human autonomy
Accountability
Safety

Industries
Media, social platforms, and marketing
Financial and insurance services
Government, security, and defence

Affected stakeholders
General public

Harm types
Reputational
Public interest
Human or fundamental rights
Economic/Property

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Inteligência artificial simula André Ventura em caso de desinformação

2024-06-07
Correio da Manhã

Europeias. Inteligência artificial simula André Ventura em caso de desinformação

2024-06-07
Observador
Why's our monitor labelling this an incident or hazard?
This is a realized harm event in which an AI system was used to generate deepfake audio content for deceptive political and economic motives. The AI’s misuse directly led to the spread of false narratives, meeting the definition of an AI Incident (disinformation causing harm to communities and public trust).

Voz de Ventura clonada por IA em caso de publicidade enganosa com conteúdo político

2024-06-07
ECO
Why's our monitor labelling this an incident or hazard?
An AI-powered speech synthesis tool was directly used to clone André Ventura’s voice and generate false campaign messages, which were disseminated to tens of thousands of people. This constitutes a concrete case of AI-driven disinformation with public-harm implications, fitting the definition of an AI Incident.

IA simula voz de André Ventura para anúncio

2024-06-07
ionline
Why's our monitor labelling this an incident or hazard?
The article reports that an AI system was used to create a deepfake audio of André Ventura’s voice in an advertisement to mislead viewers for apparent commercial gain. This is a realized harm—political disinformation—that directly results from misuse of AI technology, fitting the definition of an AI Incident.

Europeias: Inteligência artificial simula André Ventura em caso de desinformação

2024-06-07
Sapo - Portugal Online!
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic imitation of André Ventura’s voice without consent, and this deepfake was deployed in a deceptive advertisement, causing direct harm through political and economic misinformation. This meets the criteria for an AI Incident because the AI’s misuse has already led to disinformation and potential harm.

Inteligência artificial simula André Ventura em caso de desinformação

2024-06-07
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves an AI system’s use to generate and disseminate falsified political content (deepfake video and AI-generated voice), which has directly led to misinformation and potential political manipulation of the public. This constitutes realized harm via disinformation, so it meets the definition of an AI Incident.

Vídeo de publicidade enganosa usa inteligência artificial para imitar Ventura

2024-06-07
SIC Notícias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a synthetic voice to impersonate a political figure in a misleading advertisement. The resulting disinformation has harmed the public by misleading viewers, with potential economic loss and reputational harm to the individuals involved. Because the harm is realized and ongoing, not merely potential, and stems directly from the AI system's use, this qualifies as an AI Incident under the framework.