Polish Political Party Uses AI-Generated Deepfake Voice of Prime Minister in Campaign Ad

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Platforma Obywatelska released a political ad that used AI to generate a synthetic voice of Prime Minister Mateusz Morawiecki reading allegedly leaked emails, without clear disclosure. The deepfake audio, initially unlabelled, misled viewers and sparked concerns about misinformation, deepfake risks, and harm to public trust in democratic processes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that the opposition party used AI to generate a fake voice of the Prime Minister in a political ad without clarifying that it was synthetic, a direct use of AI technology that produced misinformation and deception. This can reasonably be inferred to harm the community by spreading false information and undermining democratic processes. The event therefore qualifies as an AI incident: the AI system's use in generating misleading content caused realized harm by deceiving the public.[AI generated]
AI principles
Transparency & explainability; Accountability; Democracy & human autonomy; Safety; Respect of human rights

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Public interest; Reputational

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

The Prime Minister comments on the PO ad that used his voice: "I expected this"

2023-08-26
Onet Wiadomości
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the opposition party used AI to generate a fake voice of the Prime Minister in a political ad without clarifying that it was synthetic, a direct use of AI technology that produced misinformation and deception. This can reasonably be inferred to harm the community by spreading false information and undermining democratic processes. The event therefore qualifies as an AI incident: the AI system's use in generating misleading content caused realized harm by deceiving the public.
Deepfake of Mateusz Morawiecki in a Platforma Obywatelska ad: "This is particularly dangerous"

2023-08-24
nextgazetapl
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate the voice of Mateusz Morawiecki, which constitutes AI involvement. Used without clear labelling, the AI-generated voice can mislead viewers, contributing to misinformation and potentially harming democratic processes, a harm to communities. Since the disinformation is actively being disseminated and has caused public concern, the event qualifies as an AI incident: realized harm linked to the AI system's use in generating misleading content. This is not merely a potential risk but an actual occurrence of AI-generated content causing harm.
"Platforma Obywatelska is deceiving Poles." Prime Minister Morawiecki: the Platforma is counterfeiting my voice

2023-08-26
NIEZALEZNA.PL
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a synthetic voice that mimics a real person, used in a political ad without initial disclosure. This use of AI directly leads to misinformation and deception of the public, which is a harm to communities and a violation of trust. The AI-generated voice was used to present fabricated content as if spoken by the Prime Minister, which is a clear case of AI misuse causing harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Scandalous PO ad! Budka sees no dangers

2023-08-24
wpolityce.pl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating synthetic voice content used in a political advertisement. The lack of initial disclosure and the realistic nature of the AI-generated voice can mislead the audience into believing false statements were made by the Prime Minister, constituting harm to communities through misinformation. The involvement of foreign intelligence services suggests malicious use, reinforcing the direct link to harm. Hence, this qualifies as an AI Incident due to realized harm caused by the AI system's use.
Prime Minister: "The Platforma is faking my voice and deceiving Poles"

2023-08-26
wpolityce.pl
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of artificial intelligence to generate a voice that mimics the Prime Minister's voice in a political ad, which is used to spread misleading information. This constitutes the use of an AI system (voice synthesis) in a way that directly leads to harm by deceiving people, thus violating trust and potentially causing harm to the community through misinformation and propaganda. Therefore, this qualifies as an AI Incident due to the realized harm of misinformation and manipulation of public opinion using AI-generated content.
Fraud: the Prime Minister's assessment of the PO ad using his likeness

2023-08-26
polsatnews.pl
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a synthetic voice mimicking the Prime Minister, which was then used in a political campaign video without initial disclosure. This use directly led to public deception and controversy, fulfilling the criteria for harm to communities (misinformation and manipulation). The event involves the use of AI (development and use) leading to realized harm, not just potential harm. Hence, it is classified as an AI Incident rather than a hazard or complementary information.
Controversy over the PO ad with the Prime Minister's "counterfeit" voice. Morawiecki outraged. Budka: A very good idea

2023-08-26
Dziennik
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a synthetic voice resembling the Prime Minister's voice, which was integrated into a political advertisement without initial disclosure. This use of AI directly led to public accusations of deception and propaganda, indicating realized harm to the community's trust and political integrity. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm in the form of misinformation and manipulation in a political context.
PO releases an ad with an AI-generated voice of Morawiecki. A dangerous phenomenon is emerging

2023-08-25
Wprost
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI for voice synthesis) was used to create deepfake audio of political figures. The use was intentional and undisclosed, leading to misinformation that can harm communities by spreading false impressions about political statements. This constitutes a violation of informational integrity and can be considered harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm of misinformation and potential political manipulation caused by the AI-generated content.
Has PO gone too far? Budka defends the ad: "AI is more truthful than the Prime Minister"

2023-08-25
TOK FM
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system to generate a synthetic voice of a political leader in a political ad without clear upfront disclosure, which has led to public confusion and expert concern about misinformation and deepfake risks. The AI system's use directly caused the misleading content, fulfilling the criteria for an AI Incident involving harm to communities through misinformation and potential violation of rights to truthful information. The subsequent addition of a disclosure after several hours does not negate the initial harm caused. Hence, this is not merely a hazard or complementary information but an AI Incident.
B. Budka on the AI-generated voice in the PO ad: a very good idea

2023-08-24
wnp.pl
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a voice resembling the Prime Minister's to read allegedly leaked emails, blending real and AI-generated content without clear initial disclosure. This use of AI directly risks misleading the public, which fits the definition of an AI incident through violation of rights and harm to communities via misinformation. Although a disclosure was later added to the ad, the initial omission and the nature of the content imply realized harm from the AI system's use.
The Prime Minister on the PO ad: "The Platforma is faking my voice; this is fraud and propaganda"

2023-08-26
wnp.pl
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a synthetic voice resembling the Prime Minister's, which was integrated into a political advertisement without initial disclosure. This use of AI directly led to misinformation and deception of the public, constituting harm to the community through propaganda and manipulation. The event describes realized harm caused by the AI system's use, meeting the criteria for an AI Incident rather than a hazard or complementary information.
PO ad draws criticism over use of the Prime Minister's "voice"

2023-08-25
wydarzenia.interia.pl
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate the voice of the Prime Minister in a political advertisement. An AI-generated voice deployed without clear labelling can plausibly lead to misinformation and harm to democratic processes, which fits the definition of an AI hazard. There is no indication that harm has already occurred or that the AI use directly caused harm, so the event is not an AI incident. The article also includes responses and calls for better labelling, which are complementary information, but its main focus is the potential risk of harm from the AI-generated voice. The event is therefore best classified as an AI hazard.
Controversy as PO uses a synthetic Morawiecki in its ad; expert sounds the alarm

2023-08-25
polityka.se.pl
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a synthetic voice of the prime minister, which was then used in a political campaign video without clear disclosure. This use of AI directly led to misinformation and potential harm to the public's trust and democratic processes, fulfilling the criteria for harm to communities. The involvement of AI in generating misleading content and the resulting harm is direct and realized, not merely potential. Hence, this is classified as an AI Incident.