AI-Generated Disinformation Campaign Targets Singapore's Prime Minister

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A coordinated disinformation campaign used AI-generated, Chinese-language YouTube videos to spread false narratives and conspiracy theories about Singapore and Prime Minister Lawrence Wong. Nearly 300 videos, featuring synthetic voiceovers and deepfake avatars, amassed millions of views, undermining political trust and exploiting search engine optimization tactics.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI-generated videos and computer-generated voiceovers to spread disinformation, which is a direct use of AI systems. The disinformation campaign has already caused harm by spreading false narratives and conspiracy theories that can disrupt social and political cohesion, thus harming communities. The scale and persistence of the campaign, along with the millions of views, indicate realized harm rather than a potential risk. Hence, this fits the definition of an AI Incident as the AI system's use has directly led to harm to communities through misinformation.[AI generated]
AI principles
Respect of human rights; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government; General public

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Singapore and PM Lawrence Wong targeted in AI-driven disinformation campaign on YouTube

2026-02-24
CNA
Singapore prime minister attacked by hundreds of Chinese-language fake AI videos

2026-02-25
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is reasonably inferred from the description of fake AI videos and automated bot traffic. The event involves the use of AI-generated content to spread misinformation and conspiracy theories, which constitutes harm to communities. Since the harm is occurring through the dissemination of false narratives and political misinformation, this qualifies as an AI Incident under the framework.
Singapore hit by AI disinformation blitz, hundreds of fake YouTube videos target PM Wong

2026-02-25
Malay Mail
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of artificial intelligence to generate hundreds of videos spreading disinformation, which have attracted millions of views and are part of a coordinated campaign. This constitutes harm to communities through misinformation and manipulation, fulfilling the criteria for an AI Incident. The AI system's use in content generation and search engine manipulation is directly linked to the harm caused.
False claims about Singapore and PM Lawrence Wong spread in AI-driven disinformation campaign on YouTube - Singapore News

2026-02-25
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated videos with computer-generated voiceovers and deepfakes, indicating AI system involvement. The disinformation campaign has already caused harm by misleading millions of viewers, undermining political trust, and spreading false narratives, which constitutes harm to communities. The AI system's use in generating and disseminating this content is a direct factor in the harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Coordinated AI disinformation network targets Prime Minister Lawrence Wong

2026-02-26
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models, text-to-speech, synthetic imagery, deepfake avatars) used to generate and disseminate false narratives. The disinformation campaign has already materialized, with millions of views and coordinated activity designed to erode institutional trust and create domestic uncertainty. This meets the criteria for harm to communities and violation of rights through misinformation. The AI system's use directly led to this harm, making it an AI Incident rather than a hazard or complementary information.