AI-Generated Deepfake Videos Used in Financial Scam in Portugal


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos featuring CNN Portugal personalities and Prime Minister Luís Montenegro were used in YouTube ads and on fake news sites to promote a fraudulent investment scheme. The scam, which promised high returns through a fake AI trading platform, deceived victims and caused financial harm in Portugal.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems: the deepfake videos are audiovisual content manipulated and generated by AI. These deepfakes are central to the scam's operation, misleading people and causing financial harm, which qualifies as harm to individuals (a). Because the AI-generated content is used directly to perpetrate fraud and cause financial loss, this constitutes an AI Incident. The harm is realized, not merely potential, as the scam is actively promoted and likely causes victim losses.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing; Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Fake videos of CNN Portugal and Luís Montenegro used to promote a scam

2026-02-22
SAPO

Fake videos of CNN and Luís Montenegro used to promote a scam

2026-02-21
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the use of deepfake technology to create false videos and manipulated content. These AI-generated materials are used to perpetrate a financial scam, misleading people into investing money under false pretenses. The scam has already caused harm by deceiving individuals and damaging reputations, fulfilling the criteria for an AI Incident. The AI system's role is pivotal: it enables the creation of convincing fake videos and content that facilitate the fraud. Hence, this is not merely a potential hazard or complementary information but a realized incident involving AI harm.

Fake videos of CNN Portugal and Luís Montenegro used to promote a scam

2026-02-21
Correio da Manhã
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that misrepresent public figures to promote a scam. The harm is direct financial harm to victims of the fraud, fulfilling the criteria of harm to persons or communities. The AI system's malfunction or misuse (deepfake generation) is pivotal in enabling the scam. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring and linked to AI misuse.

Warning! Fake videos of CNN Portugal and Luís Montenegro used to promote a scam

2026-02-21
Notícias de Coimbra
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos and falsified content to perpetrate a financial scam, which has already caused harm by deceiving people and potentially causing financial loss. The AI system's role is pivotal in generating convincing fake videos and webpages that enable the scam. This meets the criteria for an AI Incident because the AI's use has directly led to harm (fraud and deception) and violations of rights. The harm is realized, not just potential, and the AI system's involvement is central to the incident.