Vietnam Uses AI for Online Propaganda and Censorship

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Vietnam's Communist Party is implementing a strategy to use AI-powered moderation tools and social media influencers to control online narratives and suppress dissent. The plan involves recruiting thousands of AI experts to remove content and guide discussions, leading to ongoing violations of freedom of expression and information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems explicitly for content moderation and propaganda dissemination, which directly impacts human rights by censoring dissent and controlling information. The article details concrete plans and ongoing actions, indicating realized harm rather than just potential risk. Hence, this qualifies as an AI Incident due to the direct role of AI in violating rights and harming communities through ideological control and censorship.[AI generated]
AI principles
Respect of human rights, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public, Civil society

Harm types
Human or fundamental rights, Public interest

Severity
AI incident

AI system task
Organisation/recommenders


Articles about this incident or hazard

Communist-run Vietnam eyes influencers, AI to spruce up propaganda, documents show

2026-05-08
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems explicitly for content moderation and propaganda dissemination, which directly impacts human rights by censoring dissent and controlling information. The article details concrete plans and ongoing actions, indicating realized harm rather than just potential risk. Hence, this qualifies as an AI Incident due to the direct role of AI in violating rights and harming communities through ideological control and censorship.
Communist-Run Vietnam Eyes Influencers, AI to Spruce up Propaganda, Documents Show

2026-05-08
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of AI systems for content moderation and propaganda dissemination, which could plausibly lead to violations of rights and harm to communities through censorship and suppression of dissenting views. Since the article discusses a draft strategy and future plans rather than realized harms, it fits the definition of an AI Hazard rather than an AI Incident. There is no indication of direct or indirect harm having already occurred due to AI use, only a credible risk of such harm in the future.
Vietnam's Communist Party targets social media influencers, AI for propaganda

2026-05-08
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools developed by Vietnamese tech companies to remove content that infringes party guidelines and to lead social discussion, which implies AI system involvement in content moderation and propaganda. The use of AI to enforce censorship and control public narratives directly leads to violations of human rights, including freedom of expression and access to information, harming communities by suppressing dissent and spreading propaganda. These harms are materialized and ongoing as part of the party's strategy. Hence, the event meets the criteria for an AI Incident due to the direct role of AI in causing violations of rights and harm to communities.
Communist-run Vietnam eyes influencers, AI to spruce up propaganda, documents show

2026-05-08
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems developed by Vietnamese tech companies to monitor and remove online content that infringes party guidelines, directly leading to suppression of dissent and control over public narratives. This constitutes a violation of human rights, specifically freedom of expression and access to information, which is a breach of fundamental rights protected by law. The AI system's use in this context is not hypothetical but actively planned and implemented, causing direct harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
AsiaOne

2026-05-08
AsiaOne
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems explicitly for content moderation and removal to enforce party guidelines, which directly leads to violations of human rights (freedom of expression) and harm to communities by suppressing dissent and controlling information. The AI's role is pivotal in enabling rapid and large-scale censorship. The article describes an active plan and partial implementation, indicating realized harm rather than just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Vietnam eyes influencers, AI to spruce up propaganda

2026-05-08
Bangkok Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems explicitly for content moderation and censorship to enforce party ideology, which directly leads to violations of human rights (freedom of expression and information). The AI's role is pivotal in removing content rapidly and at scale, thus causing harm to communities by restricting access to diverse information and suppressing dissent. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and realized through the AI system's use.
Communist-run Vietnam eyes influencers, AI to spruce up propaganda, documents show

2026-05-08
bdnews24.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems developed by Vietnamese tech companies to remove content that infringes party guidelines rapidly and to lead social discussion in favor of the ruling party's ideology. The intended use of AI for censorship and propaganda dissemination poses a credible risk of violating human rights and harming communities by suppressing dissent and controlling information. Although no direct harm is reported yet, the planned scale and scope of AI deployment for these purposes make it plausible that such harms could occur. Hence, this is best classified as an AI Hazard rather than an Incident or Complementary Information.
Vietnam Plans AI-Driven Propaganda Push With Influencers and Podcasts

2026-05-08
EconoTimes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered moderation tools to remove content and guide online discussions, indicating AI system involvement. The event stems from the planned use of AI systems (development and use) for propaganda and censorship purposes. Although no direct harm has yet occurred, the strategy's goal to control narratives and censor content could plausibly lead to violations of human rights and harm to communities. Since the harm is potential and not yet realized, this fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI systems and their societal impact, so it is not Unrelated.