Generative AI Deepfakes Enable Celebrity Impersonation Scams in Japan

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Generative AI is increasingly used in Japan to create deepfake images and audio for online scams impersonating celebrities, leading to financial losses and theft of personal information. Meta is deploying facial recognition AI to detect and block such fraudulent ads, but privacy and security concerns remain regarding biometric data use.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition technology) in the detection of fraudulent ads impersonating celebrities. The AI system's use is directly linked to preventing harm caused by impersonation and advertising fraud, which can mislead users and damage reputations. Although the article does not report a specific incident of harm caused by the AI, the deployment of this AI system is intended to mitigate ongoing harms from fraudulent ads. Since the AI system's use is active and directly related to harm prevention, this qualifies as an AI Incident rather than a hazard or complementary information. The event describes the resumption of AI use to address an existing harm problem, not just a potential future risk or a governance update.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability

Industries
Media, social platforms, and marketing; Digital security; Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property; Human or fundamental rights; Reputational; Psychological

Severity
AI incident

Business function:
Monitoring and quality control; Marketing and advertisement; ICT management and information security

AI system task:
Content generation; Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard

Meta reintroduces facial recognition technology to combat fraud (by Investing.com)

2024-10-22
Investing.com 日本
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in its deployment phase to combat online fraud, which is a form of harm to communities and individuals (harm category d). The AI system's use directly aims to prevent harm by identifying and blocking scam ads. Although the article does not report a specific incident of harm caused by the AI system, it describes the deployment of AI to address existing harms from fraudulent ads. This is a proactive use of AI to mitigate harm rather than causing harm itself. Therefore, this event is best classified as Complementary Information, as it provides context on societal and governance responses to AI use in fraud prevention, without reporting a new AI Incident or AI Hazard.
Meta resumes use of facial recognition technology to counter impersonation ad fraud

2024-10-22
JP
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in the detection of fraudulent ads impersonating celebrities. The AI system's use is directly linked to preventing harm caused by impersonation and advertising fraud, which can mislead users and damage reputations. Although the article does not report a specific incident of harm caused by the AI, the deployment of this AI system is intended to mitigate ongoing harms from fraudulent ads. Since the AI system's use is active and directly related to harm prevention, this qualifies as an AI Incident rather than a hazard or complementary information. The event describes the resumption of AI use to address an existing harm problem, not just a potential future risk or a governance update.
Meta "revives" facial recognition to combat impersonation ads, limited to celebrities

2024-10-23
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) to prevent impersonation ads, which are linked to investment fraud, a form of financial harm to individuals. The facial recognition AI automatically detects whether the person in an ad photo matches a celebrity's profile photo, thus directly contributing to harm prevention. Since the article describes the deployment of this AI system to address a real and ongoing problem (impersonation ads enabling fraud), this qualifies as an AI Incident due to the direct link between AI use and harm prevention in a context where harm has occurred or is occurring.
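The matching step described above, comparing the face in an ad photo against a celebrity's profile photo, is typically done by comparing embedding vectors produced by a face-recognition model. The sketch below is illustrative only and is not Meta's implementation: the embedding values, the `is_likely_impersonation` helper, and the `0.85` threshold are all assumptions for the example, and real systems use high-dimensional learned embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_likely_impersonation(ad_embedding, celebrity_embedding, threshold=0.85):
    """Flag an ad whose face embedding closely matches a protected
    celebrity's profile-photo embedding. Threshold is illustrative."""
    return cosine_similarity(ad_embedding, celebrity_embedding) >= threshold

# Placeholder 3-d embeddings for illustration; a production face model
# would emit vectors with hundreds of dimensions.
celebrity_profile = [0.10, 0.90, 0.30]
face_in_ad = [0.12, 0.88, 0.31]
print(is_likely_impersonation(face_in_ad, celebrity_profile))  # → True
```

A near-duplicate face yields a similarity close to 1.0 and trips the threshold, while unrelated faces fall well below it; the threshold trades false positives (legitimate lookalikes) against missed scam ads.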
Meta uses facial recognition to protect users from scam ads and to recover accounts

2024-10-24
ケータイ Watch
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (facial recognition) in the development and use phases to detect and block fraudulent ads impersonating celebrities, which directly prevents harm to users from scams and protects rights affected by impersonation. Since the AI system's use is directly linked to preventing harm, this qualifies as an AI Incident involving harm prevention and user protection.
Meta's method for detecting impersonation scam ads with facial recognition

2024-10-22
PC Watch
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (face recognition technology combined with machine learning) in the use phase to detect and block scam ads impersonating celebrities. The system's role is to prevent harm to users by reducing exposure to fraudulent ads, which can cause financial or reputational harm. Since the system is actively used to prevent harm and the article describes its deployment and testing, this qualifies as an AI Incident involving harm prevention through AI use.
Facebook and Instagram test video selfie verification; concerns raised

2024-10-25
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (facial recognition via video selfie authentication) being tested and deployed by Meta for account recovery and fraud detection. Although no actual harm is reported, concerns about privacy and security risks are raised, indicating plausible future harm. The AI system's development and use could lead to violations of privacy rights or misuse of biometric data, which are significant harms. Since no harm has yet occurred but plausible risks exist, this fits the definition of an AI Hazard rather than an Incident or Complementary Information.
Announced: Top 10 Japanese celebrities most frequently exploited in online scams, according to McAfee - 週刊アスキー

2024-10-24
週刊アスキー - 週アス IT news site
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake images and audio) in the commission of online fraud, which directly leads to harm (financial loss) to individuals. The AI system's use is malicious and instrumental in enabling the scams. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through online fraud and impersonation.
Meta fights celebrity impersonation scams with facial recognition technology, comparing images and removing ads

2024-10-22
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) in the development and use phases to combat fraud (celebrity impersonation scams) and to assist in user account recovery. While the AI system is actively used to prevent harm (fraud leading to theft of personal information or financial loss), the article does not report any realized harm caused by the AI system itself. Instead, it describes AI being used as a tool to mitigate harm. There is mention of past issues where legitimate accounts were mistakenly blocked due to automated errors, but this is presented as a known problem rather than a new incident. Therefore, this event does not describe an AI Incident or AI Hazard but rather provides information about AI deployment and responses to prior issues, fitting the definition of Complementary Information.