Deepfake AI Used to Impersonate Indonesian Celebrities in Illegal Gambling Hoax


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A series of deepfake videos and audio clips has circulated online, falsely depicting Najwa Shihab, Raffi Ahmad, and Atta Halilintar promoting illegal gambling sites. The victims reported reputational harm and urged regulators to tighten AI policies, as perpetrators exploit voice and face synthesis to run fraudulent schemes and mislead the public.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system generating synthetic audio to create a misleading video that falsely attributes statements to public figures. This use of AI directly leads to harm by spreading misinformation and defaming individuals, which can damage reputations and mislead the public. Therefore, it qualifies as an AI Incident due to the realized harm to individuals and communities through misinformation and reputational damage.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
General public; Other

Harm types
Reputational; Public interest; Human or fundamental rights; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Beware of Hoax Video of Najwa Shihab, Raffi Ahmad, and Atta Halilintar Promoting Online Gambling

2024-01-17
suara.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating synthetic audio to create a misleading video that falsely attributes statements to public figures. This use of AI directly leads to harm by spreading misinformation and defaming individuals, which can damage reputations and mislead the public. Therefore, it qualifies as an AI Incident due to the realized harm to individuals and communities through misinformation and reputational damage.

Viral Video Allegedly Promotes Online Gambling; Atta Halilintar Feels Harmed - Wartakotalive.com

2024-01-17
Warta Kota
Why's our monitor labelling this an incident or hazard?
The event describes a viral video where AI technology was used to edit the voices of public figures to falsely promote online gambling. This misuse of AI has directly led to reputational harm and misinformation, which falls under violations of rights and harm to communities. The AI system's use in creating manipulated content that causes harm to a person's reputation meets the criteria for an AI Incident. The harm is realized, not just potential, as the video is viral and the individual feels harmed.

Video Circulates of the Voices of Raffi Ahmad, Atta Halilintar, and Najwa Shihab Seemingly Promoting Online Gambling, Edited Using AI

2024-01-17
KapanLagi.com
Why's our monitor labelling this an incident or hazard?
The video involves the use of AI to create manipulated content (deepfake) that falsely represents individuals promoting online gambling. This constitutes misinformation and potential reputational harm to the individuals involved and could mislead the public. Since the AI-generated content is actively spreading and causing harm to the reputation and potentially misleading communities, this qualifies as an AI Incident due to harm to communities and individuals' rights through misinformation.

Viral Online Gambling Site Ads Use AI Technology, with Celebrities and Public Figures as Promotional Media - Mantra Sukabumi

2024-01-17
Mantra Sukabumi
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create fake promotional videos featuring public figures without their consent, which is a direct misuse of AI-generated content causing harm. The AI system's role is pivotal in fabricating false narratives that can mislead viewers and damage reputations, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. Therefore, this event qualifies as an AI Incident.

Beware of the Spread of Hoax Content Through AI Technology - Mantra Sukabumi

2024-01-18
Mantra Sukabumi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create and disseminate false content (deepfake video) that misrepresents public figures and promotes illegal activities. This misuse of AI has directly led to harm by spreading hoaxes and misleading the public, which fits the definition of an AI Incident due to harm to communities. The AI system's use in generating the fake audio and video is explicit and central to the incident.

Beware of Hoax Online Gambling Ads of Najwa Shihab, Raffi, and Atta Made with AI

2024-01-16
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake and speech synthesis) to create manipulated content that spreads false information, which is a form of harm to communities and a violation of rights. The AI system's use directly led to the dissemination of a hoax, fulfilling the criteria for an AI Incident. The article explicitly mentions the AI-generated nature of the video and the resulting misinformation harm, not just a potential risk, so it is not merely a hazard or complementary information.

Online Gambling Promotion Said to Be Growing Bolder, Using a Deepfake of Raffi Ahmad

2024-01-18
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to generate fake videos and audio of public figures promoting online gambling, which is illegal and harmful. This misuse of AI has directly led to harm by facilitating the spread and promotion of illegal gambling, which can cause financial and social damage to individuals and communities. The involvement of AI in creating deceptive content that misleads the public and promotes harmful activities fits the definition of an AI Incident. The article also discusses the government's response to this harm, but the primary focus is on the realized harm caused by the AI misuse, not just the response, so it is not Complementary Information.

Abimanyu Responds to Viral Interview Video of Najwa Shihab, Raffi Ahmad, and Atta Halilintar Promoting Online Gambling - Jawa Pos

2024-01-19
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The video is described as not authentic but created using AI, implying deepfake or synthetic media technology. Such AI-generated fake content can cause reputational harm and misinformation, which are harms to communities and individuals. Since the AI system's use directly leads to this harm, this qualifies as an AI Incident.

Viral Video of Raffi Ahmad and Najwa Shihab Promoting Online Gambling Turns Out to Be AI - Teknologi Katadata.co.id

2024-01-19
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of an AI deepfake video that falsely portrays individuals endorsing online gambling. This constitutes a violation of rights, including reputational harm, and risks misinforming the public. Because the AI-generated content is actively circulating, harming the individuals' reputations and misleading the community, it qualifies as an AI Incident due to harm to communities and violation of rights. The harm is realized, not merely a potential risk, as the video has gone viral.

Atta Halilintar Asks the Government to Create a Policy on the Use of AI Technology

2024-01-18
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to create fake audio of a public figure to promote illegal online gambling, which is a misuse of AI leading to harm. This constitutes a violation of rights (reputational harm) and harm to communities (illegal gambling promotion). Therefore, it qualifies as an AI Incident because the AI system's use has directly led to harm. The request for government regulation is a response to this incident but does not change the classification.