Vietnamese YouTubers Fined for Using AI to Create Harmful Fake Videos


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Lâm Đồng, Vietnam, several individuals, including N.T.K. and a group of three others, used AI tools to produce and publish hundreds of fabricated, sensational videos on YouTube. The videos spread misinformation, caused public alarm, and damaged reputations, leading authorities to impose administrative fines for the misuse of AI-generated content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI to create fabricated video content that spread false and harmful narratives, leading to legal penalties for misinformation and defamation. The AI system's role in generating and disseminating false information that harmed reputations and misled the public fits the definition of an AI Incident, as it caused violations of rights and harm to communities. The harm is realized, not just potential, and the AI system's involvement is central to the incident.[AI generated]
AI principles
Accountability, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Using AI to fabricate numerous gruesome crime stories in Đà Lạt

2026-04-02
vnexpress.net

Report of 3 students buried alive in Đà Lạt is fake news

2026-04-02
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate fabricated video content that spread false and harmful information, which constitutes a violation of rights and harm to communities through misinformation. The harm has already occurred as the videos were widely viewed and caused reputational damage and misinformation. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through the dissemination of false information and reputational damage.

Woman fined for using AI to create more than 400 fabricated 'view-baiting' videos on YouTube

2026-04-02
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The event explicitly states that AI was used to create fabricated videos spreading false and sensational stories, which were viewed by millions and caused misinformation harm to the public. The harm is realized: the videos misled viewers and violated laws against sharing false information. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident, and the legal penalty confirms official recognition of the harm caused by the AI-generated content.

Using AI for negative 'view-baiting': penalties of up to 7 years in prison possible

2026-04-05
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake and generative AI tools) to create false videos and images that have been disseminated widely, causing harm to individuals' reputations and misleading the public, which fits the definition of harm to communities and violations of rights. The article reports actual incidents where harm occurred and legal actions were taken, confirming direct or indirect harm caused by AI use. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.

Three people fined for using AI to create fabricated clips distorting the military's image for views

2026-04-05
Báo điện tử Tiền Phong
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technologies to produce false and manipulated video content that misrepresents the military, causing harm to the community by spreading misinformation and damaging the reputation of a public institution. This constitutes a violation of rights and harm to communities as defined in the framework. Since the harm has already occurred and legal penalties have been imposed, this qualifies as an AI Incident rather than a hazard or complementary information.

Woman fined for using AI to produce 415 horror videos

2026-04-02
Báo Pháp Luật TP. Hồ Chí Minh
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create harmful, misleading video content that caused social harm by spreading fear and falsehoods. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities through misinformation and harmful content dissemination. The administrative penalty confirms recognition of the harm caused. Therefore, this is classified as an AI Incident.

Using AI to fabricate a story of 3 students buried alive in Đà Lạt: a bitter end for a YouTuber

2026-04-02
Techz.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to produce fabricated content that was widely viewed and caused public alarm and negative effects on local security, which constitutes harm to communities. The use of AI to generate and spread false information that disrupts social order fits the definition of an AI Incident, as the AI system's use directly led to harm (harm to communities and violation of trust). The administrative penalty imposed further confirms the recognition of harm caused. Therefore, this event qualifies as an AI Incident.

YouTuber fined 7.5 million đồng for using AI to create more than 400 'gruesome' videos in Đà Lạt

2026-04-02
suckhoedoisong.vn
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create false and sensational videos, which were disseminated widely and attracted millions of views. The use of AI in generating these videos directly led to the spread of misinformation and harm to public trust and individual reputations, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The legal penalty confirms the harm was recognized and materialized, not just potential. Therefore, this is classified as an AI Incident.

N.T.K. fined for fabricating shocking information in Đà Lạt

2026-04-02
afamily.vn
Why's our monitor labelling this an incident or hazard?
The event explicitly states that AI was used to create false and shocking videos that are not based on real events, which were then shared widely to attract views and likes. This use of AI-generated misinformation constitutes a violation of rights related to reputation and dignity, and the spread of false information can harm communities by misleading the public. Since the AI system's use directly led to these harms and legal penalties were imposed, this qualifies as an AI Incident under the framework.

From the case of 415 AI-produced horror videos: the fallout of a web of toxic information online

2026-04-04
Báo Pháp Luật TP. Hồ Chí Minh
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate harmful content that has been widely disseminated, causing psychological harm to individuals (notably children), harm to community reputation, and spreading misinformation that manipulates public perception. These effects constitute harm to communities and violations of rights related to truthful information and mental well-being. The AI system's use directly led to these harms, qualifying this as an AI Incident under the framework definitions.

Using AI to create videos insulting the Army: a bitter end for a group of view-chasing young people

2026-04-05
Báo Pháp Luật TP. Hồ Chí Minh
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools were used to generate false and defamatory video content targeting a national institution, leading to social harm and legal consequences. The harm is realized, not just potential, as the videos caused public distress and legal action was taken against the perpetrators. The AI system's role in fabricating and spreading misinformation is pivotal to the incident, meeting the criteria for an AI Incident under the OECD framework.