AI-Generated Fake Sensitive Images Cause Harm Among Students in Đồng Nai


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Đồng Nai, Vietnam, a male student used AI software to create fake sensitive images of a female classmate over a personal conflict, and the images were then spread on social media. The incident caused psychological and reputational harm, prompting police intervention and highlighting the risks of AI misuse among students.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI technology to generate fake images, which were then shared and caused harm to the victim's psychological well-being and reputation. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and harm to the community through misinformation and reputational damage. The involvement of law enforcement and educational responses further confirms the recognition of harm caused by AI misuse.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Psychological; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


A very serious incident has just occurred: schools and parents need to pay close attention

2026-05-02
Kenh14.vn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology to generate fake images, which were then shared and caused harm to the victim's psychological well-being and reputation. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and harm to the community through misinformation and reputational damage. The involvement of law enforcement and educational responses further confirms the recognition of harm caused by AI misuse.

Đồng Nai: Eighth-grade student uses AI to composite sensitive images of a female classmate

2026-05-02
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The use of AI to generate manipulated sensitive images that were then disseminated constitutes a direct involvement of an AI system in causing harm to an individual, specifically psychological harm and violation of personal dignity. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (psychological and reputational harm). The event also involves violation of rights and the spread of false information, which are harms covered under the AI Incident definition. Therefore, this event is classified as an AI Incident.

Male student fabricates sensitive images of a female classmate; police step in

2026-05-02
Báo điện tử Tiền Phong
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI software to create manipulated images that harmed an individual's reputation and caused psychological distress, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the victim suffered from the spread of false and sensitive images. The involvement of AI in generating the harmful content and the resulting violation of personal rights and dignity clearly classify this as an AI Incident rather than a hazard or complementary information.

Eighth-grade student in Đồng Nai uses AI to create fake sensitive images of a classmate

2026-05-02
Báo điện tử Tiền Phong
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI technology to create fake sensitive images that were spread among students, causing real psychological harm and damage to the victim's reputation. This meets the criteria for an AI Incident because the AI system's use directly led to harm to a person (psychological and reputational harm) and violation of rights. The involvement of AI is clear, and the harm is realized, not just potential.

Female student in Đồng Nai targeted by a male classmate who used AI to fabricate sensitive images over a personal conflict

2026-05-02
VOV.vn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to create manipulated images that harmed a person's reputation and caused psychological distress, which constitutes harm to a person and violation of rights. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating fake images and the resulting harm to the victim's dignity and mental health clearly meets the definition of an AI Incident rather than a hazard or complementary information.

Male student fabricates sensitive images of a female classmate; police step in

2026-05-02
afamily.vn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI software to create fake sensitive images, which were disseminated causing harm to the victim's psychological health and personal dignity. The AI system's use directly led to violations of personal rights and harm to the individual. The police intervention and educational responses are complementary but do not negate the fact that harm occurred due to AI misuse. Hence, this is an AI Incident as per the definitions provided.

Alarming: student uses AI to create sensitive images of a classmate, then spreads them on social media

2026-05-02
Báo Người Lao Động Online
Why's our monitor labelling this an incident or hazard?
The use of AI to generate fake sensitive images that are then shared and cause reputational and emotional harm fits the definition of an AI Incident. The AI system's use directly led to harm to a person (violation of rights and harm to dignity) and harm to the community (spread of false information and social disruption). The event is not merely a potential risk but a realized harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Case of male student using AI to composite sensitive images of a female classmate: police step in

2026-05-02
Đời sống pháp luật
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI software to create fake sensitive images, which were then disseminated, causing harm to the victim's psychological well-being and reputation. The harm is realized and directly linked to the AI system's use. The incident involves violation of personal rights and harm to the community (the school environment). Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.