AI-Generated Deepfake Impersonation of Japanese Defense Minister Prompts Warning

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Japanese Defense Minister Shinjiro Koizumi warned the public after reports of fraudulent calls using AI-generated video and audio to impersonate him via WeChat. The sophisticated scam involved individuals posing as his secretary and connecting victims to deepfake video calls, prompting police involvement and public caution.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated video and audio impersonating a politician, used in fraudulent video calls during an election. This involves an AI system's use (generative AI for deepfakes) leading directly to harm: deception, misinformation, and potential election-related harm. The harm is realized, not just potential, as multiple people have received suspicious calls. This fits the definition of an AI Incident due to violation of rights (misinformation, impersonation) and harm to communities (election interference risk).[AI generated]
AI principles
Safety; Democracy & human autonomy

Industries
Digital security; Government, security, and defence

Affected stakeholders
Consumers; General public

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

[Lower House Election] "Fake Shinjiro" on the loose: Defense Minister Koizumi issues urgent warning as "my impostor" gives incoherent answers on video calls - Politics : Nikkan Sports

2026-01-29
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated video and audio impersonating a politician, used in fraudulent video calls during an election. This involves an AI system's use (generative AI for deepfakes) leading directly to harm: deception, misinformation, and potential election-related harm. The harm is realized, not just potential, as multiple people have received suspicious calls. This fits the definition of an AI Incident due to violation of rights (misinformation, impersonation) and harm to communities (election interference risk).
Defense Minister Shinjiro Koizumi warns against impersonation: suspicious calls from someone claiming to be his secretary, including a video call with a "fake"

2026-01-29
Mainichi Shimbun
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-generated video and audio to impersonate a public figure, which constitutes the use of an AI system. The malicious use of this AI-generated content has directly led to harm in the form of deception and potential fraud against the individuals receiving these calls, so the event qualifies as an AI Incident.
Defense Minister Shinjiro Koizumi: "Beware of impersonators"; reports of video calls with an AI-generated "fake" giving "incoherent responses"

2026-01-30
J-CAST News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated video and audio used in fraudulent calls impersonating a public official, which directly leads to harm by deceiving people. The AI system's use in generating fake content is central to the incident. The harm, which includes deception, potential fraud, and a violation of trust, falls within harm to individuals and communities. Since the harm is occurring and the AI system's role is pivotal, this is classified as an AI Incident.
Shinjiro Koizumi warns against impersonation: "The methods are sophisticated and malicious" | Akita Sakigake Shimpo online edition

2026-01-29
Akita Sakigake Shimpo (online edition)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it is used to generate fake video and audio of the public figure, enabling impersonation. This misuse of AI has directly led to harm in the form of deception and potential fraud targeting individuals, which constitutes harm to persons or communities. Therefore, this event qualifies as an AI Incident due to the realized harm caused by malicious AI-generated impersonation.
Shinjiro Koizumi warns against impersonation: "The methods are sophisticated and malicious" | Iwate Nippo ONLINE

2026-01-29
Iwate Nippo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated video and audio to impersonate a public figure, which is a misuse of AI technology causing harm by deceiving people. This fits the definition of an AI Incident because the AI system's use has directly led to harm through malicious impersonation and potential fraud.
Shinjiro Koizumi warns against impersonation: "The methods are sophisticated and malicious" : Kii Minpo AGARA | Wakayama Prefecture news site

2026-01-29
agara.co.jp
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated video and audio to impersonate a public figure in a malicious scam. This constitutes an AI system's use leading directly to harm through deception and impersonation, which can be considered a violation of rights and harm to individuals. Therefore, this qualifies as an AI Incident under the framework.