AI-Generated Deepfake Video Causes Misinformation and Reputational Harm to Indonesian Actor


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video falsely depicted Indonesian actor Ari Wibowo marrying Clara Oktavia, leading to widespread misinformation and reputational harm. Ari Wibowo publicly clarified the hoax, expressing concern over the increasing misuse of AI for creating fake news and misleading the public.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system used to create a fabricated video (deepfake) that misrepresents a real person, leading to misinformation and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of false information dissemination and violation of personal rights. The harm is materialized, not just potential, as the actor publicly addresses and takes action against the hoax.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Other; General public

Harm types
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Bantah Menikah Lagi, Ari Wibowo Keluhkan Maraknya Berita Hoaks Menggunakan AI [Denying He Remarried, Ari Wibowo Laments the Spread of AI-Made Hoax News]

2026-03-20
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a fabricated video (deepfake) that misrepresents a real person, leading to misinformation and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of false information dissemination and violation of personal rights. The harm is materialized, not just potential, as the actor publicly addresses and takes action against the hoax.

Viral Video Ari Wibowo Nikah Lagi dengan Clara Oktavia, Ini Faktanya! [Video of Ari Wibowo Remarrying with Clara Oktavia Goes Viral; Here Are the Facts!]

2026-03-20
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video (deepfake) that misrepresents real people, leading to reputational harm and misinformation. This constitutes a violation of rights and harm to communities through the spread of falsehoods. Since the AI-generated content has already been disseminated and caused harm, this qualifies as an AI Incident under the definitions provided.

Jawab Isu Nikahi Clara Oktavia, Ari Wibowo Beber Fakta Foto Baju Pengantin: Semoga Langgeng ya [Responding to the Rumor He Married Clara Oktavia, Ari Wibowo Explains the Wedding-Outfit Photo: "Hope It Lasts"]

2026-03-20
Banjarmasin Post
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video and news article falsely depicting a marriage event, which is a clear case of AI-generated misinformation causing harm to the individual's reputation and misleading the community. The harm is realized as the false news is believed by some people, and the individual affected has taken action to report the source. This fits the definition of an AI Incident because the AI system's use directly led to harm to the community (misinformation) and violation of rights (reputational harm).

Ari Wibowo Tanggapi Kabar Pernikahan dengan Clara Oktavia [Ari Wibowo Responds to News of a Marriage with Clara Oktavia] – Okezone Celebrity

2026-03-20
https://celebrity.okezone.com/
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake news, which is a misuse of AI technology. However, the article does not report any direct or indirect harm resulting from this AI-generated content, such as public harm or rights violations. The main focus is on the actor's clarification and concern about AI misuse, which constitutes a societal response and awareness raising rather than a new incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context and response to AI misuse without describing a new AI Incident or AI Hazard.

Ari Wibowo Bantah Kabar Menikahi Perempuan Bernama Clara Oktavia [Ari Wibowo Denies Reports of Marrying a Woman Named Clara Oktavia]

2026-03-21
detik hot
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fake video (deepfake) that directly led to reputational harm and misinformation about a person, which can be considered harm to communities and individuals. The AI-generated false content caused real-world consequences, including public confusion and the need for the individual to respond and report the misinformation. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through misinformation and reputational damage.