AI-Generated Celebrity Voice Scams Target Online Shoppers in Indonesia

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Indonesian authorities warn of rising online shopping scams using AI to manipulate public figures’ voices and likenesses, deceiving consumers into buying products falsely endorsed by celebrities. The misuse of AI has caused financial harm to consumers and reputational damage to public figures, prompting increased monitoring and public advisories.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated content (videos and voices) used for fraudulent purposes, which has caused harm by misleading the public. This fits the definition of an AI Incident because the AI system's use has directly led to harm (fraud and deception). The involvement of AI in generating fake videos and voices is clear, and the harm is realized as people are being scammed or misled. Therefore, this is classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Robustness & digital security; Accountability; Safety

Industries
Consumer services; Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers; Other

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Awas Ketipu Video Mirip Artis di Facebook, Tak Cuma Melaney Ricardo [Beware of Being Fooled by Celebrity-Lookalike Videos on Facebook, Not Just Melaney Ricardo]

2024-04-16
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content (videos and voices) used for fraudulent purposes, which has caused harm by misleading the public. This fits the definition of an AI Incident because the AI system's use has directly led to harm (fraud and deception). The involvement of AI in generating fake videos and voices is clear, and the harm is realized as people are being scammed or misled. Therefore, this is classified as an AI Incident.
Wamen Nezar Ingatkan Masyarakat Waspadai Penipuan Belanja Online yang Memanfaatkan AI [Deputy Minister Nezar Reminds the Public to Beware of Online Shopping Scams That Exploit AI]

2024-04-17
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to create manipulated voices of public figures to promote products fraudulently, which constitutes a direct harm to consumers (financial harm through scams) and to the public figures (reputational harm). This fits the definition of an AI Incident because the AI system's use has directly led to harm. Therefore, the event is classified as an AI Incident.
Wamen Nezar minta publik waspada penipuan belanja daring manfaatkan AI [Deputy Minister Nezar Asks the Public to Be Wary of Online Shopping Scams Exploiting AI]

2024-04-16
Antara News Palu
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used maliciously to create manipulated voice content for fraudulent online shopping promotions, which could harm consumers and public figures. Although no specific harm is reported as having occurred yet, the described misuse and the official warnings indicate a plausible risk of AI-driven fraud and reputational damage. Because the article focuses on the potential for harm and on ongoing monitoring and mitigation efforts, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated, because AI misuse is central to the issue.
Publik harus waspada penipuan belanja daring manfaatkan AI [The Public Must Be Wary of Online Shopping Scams Exploiting AI]

2024-04-16
Antara News Manado
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated manipulated voices to deceive consumers in online shopping scams, which constitutes direct financial harm to consumers and reputational harm to public figures. The AI system's misuse has directly led to realized harm, fulfilling the criteria for an AI Incident. The article also details responses to this harm, but its primary focus is the ongoing harm caused by AI misuse in scams, not complementary information or future risks.
Terbaru! Wamen Nezar Minta Publik Waspada Penipuan Belanja Daring Manfaatkan Ai - Beritaja [Latest! Deputy Minister Nezar Asks the Public to Be Wary of Online Shopping Scams Exploiting AI]

2024-04-16
Beritaja
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of voice manipulation used for fraudulent online shopping promotions, which could plausibly lead to harms such as consumer fraud and reputational damage to public figures. However, it does not describe any specific incident in which such harm has already occurred. Instead, it focuses on warnings, monitoring efforts, and preventive actions by authorities and platforms. It therefore qualifies as an AI Hazard, reflecting a credible risk of future harm from AI misuse rather than an AI Incident or Complementary Information.