AI-Generated Deepfake Video of Japanese Prime Minister Spreads on Social Media

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A fake video of Prime Minister Fumio Kishida, created using generative AI to mimic his voice and image and falsely display a news program's logo, was widely spread on social media. The incident caused public misinformation and reputational harm, prompting strong protests from Nippon TV and raising concerns about AI misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The video uses generative AI technology to create manipulated content that falsely represents a public figure and a news organization, leading to misinformation and reputational harm. The AI system's use directly leads to harm by spreading false and harmful content. Therefore, this qualifies as an AI Incident under the definition of harm to communities and violation of rights caused by AI-generated misinformation.[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Democracy & human autonomy
Respect of human rights

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
General public
Government
Business

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fake video of "Prime Minister Kishida's voice" spreads, disguised as a news broadcast (日テレNEWS NNN) - Yahoo!ニュース

2023-11-03
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The video uses generative AI technology to create manipulated content that falsely represents a public figure and a news organization, leading to misinformation and reputational harm. The AI system's use directly leads to harm by spreading false and harmful content. Therefore, this qualifies as an AI Incident under the definition of harm to communities and violation of rights caused by AI-generated misinformation.
Fake video of Prime Minister Fumio Kishida spreads; likely generative AI, Nippon TV logo also used

2023-11-04
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create a fake video that misrepresents a public figure, spreading misinformation and potentially harming the individual's reputation and public trust. The AI system's use directly leads to harm to communities by disseminating false information and violating rights related to personal dignity and reputation. Therefore, this qualifies as an AI Incident.
Fake ads made with generative AI to imitate news programs; Nippon TV urges "caution" | NHK

2023-10-31
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event describes the use of generative AI to produce realistic fake videos impersonating real news anchors to promote an investment site, which constitutes misinformation and deception. This misuse of AI has directly led to harm by misleading viewers, potentially causing financial loss and violating trust. The involvement of AI in generating the fake content and the resulting harm to individuals and communities fits the definition of an AI Incident.
Fake video of Prime Minister Kishida imitating a news program spreads; Nippon TV issues warning | NHK

2023-11-04
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to produce manipulated audiovisual content that misrepresents a public figure, causing harm to the individual's reputation and misleading the public. The AI system's outputs have been used maliciously to create and disseminate false information, which is a violation of rights and harms the community. The harm is realized, not just potential, as the fake video has been spread on social media and prompted official warnings. Therefore, this meets the criteria for an AI Incident due to direct harm caused by AI-generated misinformation and impersonation.
Is this generative AI too? Fake videos using footage of real news anchors spread; TV station issues warning [やじうまWatch]

2023-11-02
INTERNET Watch
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to create a deepfake video that misleads viewers into a fictitious investment, which constitutes harm to communities and individuals through misinformation and potential financial loss. The AI system's use directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The warning by the TV station and concerns about future harder-to-detect deepfakes further support the incident classification rather than a mere hazard or complementary information.
Fake video of Prime Minister Kishida spreads; Nippon TV issues alert: 朝日新聞デジタル

2023-11-04
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a generative AI system used to create a fake video that misleads the public by impersonating a political figure and a news organization. The spread of this AI-generated misinformation on social media platforms causes harm to communities by distorting public discourse and trust. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through the dissemination of false and harmful content.
Fake investment ad shows female anchor saying "no need to work anymore"; suspected generative AI misuse manipulating a Nippon TV news program

2023-11-03
ITmedia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to manipulate a real news program to produce a fake investment advertisement that is spreading on social media. The AI system's use here directly leads to harm by misleading the public and potentially causing financial losses, which is harm to communities and individuals. The involvement of AI in generating realistic fake content that causes misinformation and deception fits the definition of an AI Incident. The harm is realized, not just potential, as the fake video is already circulating and misleading viewers.
Fake video of the Prime Minister spreads, likely created with generative AI; Nippon TV program logo used

2023-11-04
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to create a deepfake video that falsely depicts the Prime Minister making inappropriate statements. This misuse of AI has directly led to harm by spreading misinformation and damaging the reputation of a public figure, which falls under harm to communities and violation of rights. Therefore, it qualifies as an AI Incident.
Nippon TV issues alert over spread of "fake video" of Prime Minister Fumio Kishida: "broadcast and program logo misused"; obscene language included - 芸能 : 日刊スポーツ

2023-11-04
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of a generative AI system to create a fake video (deepfake) that misrepresents a public figure and misuses a broadcaster's logo, leading to misinformation and reputational harm. The AI system's use directly leads to harm to communities (misinformation) and violation of rights (reputational harm, misuse of intellectual property). Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and directly linked to the AI system's use.
Kenichiro Mogi comments on fake video of Prime Minister Fumio Kishida: "As technology develops further..." - 社会 : 日刊スポーツ

2023-11-04
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI) to create a deepfake video that misleads the public by falsely representing a political figure. The harm is realized as the video is spreading misinformation, which harms communities by undermining trust and potentially influencing public opinion. The involvement of AI in generating the fake video directly leads to this harm. Although the current deepfake is described as technically crude, the harm from its dissemination is occurring, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Fake video of Prime Minister Kishida made with generative AI spreads on social media; expert calls it "malicious as a form of impression manipulation"

2023-11-03
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI) used to create a fake video of a political figure with fabricated speech, which has been widely disseminated on social media. This constitutes an AI Incident because the AI's use directly caused harm by spreading misinformation and manipulating public opinion, which harms communities and violates rights. The malicious use of AI-generated deepfakes for political manipulation fits the definition of an AI Incident under harm to communities and violation of rights.
Fake video of Prime Minister Kishida spreads, posing as Nippon TV news: 時事ドットコム

2023-11-04
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using generative AI technology that manipulates the Prime Minister's voice and video to produce false and harmful content. This misuse of AI has directly caused harm by spreading misinformation and potentially undermining public trust, which fits the definition of an AI Incident due to realized harm involving an AI system's use.
Fake investment ad shows female anchor saying "no need to work anymore"; suspected generative AI misuse manipulating a Nippon TV news program

2023-11-02
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (generative AI) used to create manipulated video content (deepfake) that misleads viewers by impersonating a real news anchor. This misuse of AI has directly led to the spread of false information and potentially fraudulent investment solicitations, which constitutes harm to communities and individuals (harm category d). The incident has already occurred, with the videos spreading on social media before being removed. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's malicious use.
X user posts fake video of Prime Minister Kishida; Nippon TV, whose program logo was misused, reacts furiously; poster panics... : オレ的ゲーム速報@刃

2023-11-04
オレ的ゲーム速報@刃
Why's our monitor labelling this an incident or hazard?
The event describes a video generated using AI technology to manipulate the Prime Minister's voice and image, creating false and harmful content that is actively being disseminated on social media. The misuse of the news program's logo further exacerbates the harm by misleading viewers. These factors meet the criteria for an AI Incident because the AI system's use has directly led to harm to communities (misinformation) and a breach of intellectual property rights (unauthorized logo use).
Nippon TV strongly protests misuse of its program logo in fake Kishida video: "This is absolutely unforgivable" - スポニチ Sponichi Annex 芸能

2023-11-04
スポニチ Sponichi Annex
Why's our monitor labelling this an incident or hazard?
The event describes a fake video created using generative AI technology that manipulates the Prime Minister's voice and misuses a news program's logo, spreading misinformation. This constitutes harm to communities by misleading the public and damaging reputations. The AI system's role in generating the fake content is pivotal. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI-generated deepfake video.
Fake video of Prime Minister Kishida | とんとん最近のできごと

2023-11-04
とんとん最近のできごと
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for video and voice synthesis) to create and distribute a fake video of a political figure. This directly leads to harm to communities by spreading misinformation and undermining trust in public information, which fits the definition of an AI Incident. The article describes the harm as occurring (spread on social media), not just a potential risk, so it is not merely a hazard or complementary information.
Fake video of Prime Minister Kishida spreads; likely generative AI, Nippon TV logo also used

2023-11-04
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI to create a fake video that misrepresents a public figure, which is spreading on social media. This is a direct use of an AI system leading to harm in the form of misinformation and reputational damage, which fits the definition of an AI Incident under violations of rights and harm to communities. The involvement of the AI system is clear, and the harm is realized, not just potential.
Fake video of Prime Minister Kishida made with generative AI spreads on social media, displaying a news program logo as if broadcast live

2023-11-04
gogotorimaru.blog.fc2.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system to produce a fake video of a political figure, which was then spread widely on social media, causing misinformation and potential social harm. The AI system's use directly led to the dissemination of false information that can mislead the public and disrupt societal trust. The harm is realized, not just potential, as millions have viewed the video. This fits the definition of an AI Incident due to harm to communities and violation of rights through misinformation and impersonation. The article also discusses the broader societal implications and calls for regulation, but the primary event is the harmful AI-generated fake video dissemination.
Israel-Hamas conflict: 33 fake videos and other items identified, some viewed over one million times | NHK

2023-11-07
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos and misinformation that have been widely viewed and shared, directly contributing to social harm by spreading false and inflammatory content. This fits the definition of an AI Incident because the AI-generated content has directly led to harm to communities through misinformation and social division. The article also mentions the removal of large numbers of fake accounts and videos, indicating active harm and response to it.
Chinese media also report on fake video of Prime Minister Fumio Kishida (November 6, 2023) - エキサイトニュース

2023-11-06
Excite
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which is a clear AI system involvement. The videos are spreading misinformation, which could plausibly lead to harm such as disruption of social trust and political manipulation (harm to communities). Since the article focuses on the ongoing spread and societal concern without reporting actual harm or incidents caused by these videos, it fits the definition of an AI Hazard rather than an AI Incident. The article also includes commentary on the need for regulation and the risks posed by such AI-generated misinformation, reinforcing the potential for future harm.
Chinese media also report on fake video of Prime Minister Fumio Kishida (November 6, 2023) | BIGLOBEニュース

2023-11-06
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a deepfake video that falsely depicts a political leader saying inappropriate things. This constitutes a direct AI Incident because the AI-generated content is actively causing harm by misleading the public, potentially influencing opinions, and damaging reputations. The widespread dissemination and public reaction confirm that harm to communities and political processes is occurring. The discussion about regulation and societal impact further supports the significance of the incident.
Shinchosha's well-known editor calls for urgent legislation against malicious fake images and videos: "Anyone could become a victim"

2023-11-09
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake images and videos that have already caused harm by spreading false information about a public figure, which can harm communities and individuals' reputations. The article explicitly states that such AI-generated content is currently spreading and causing problems, thus constituting realized harm. The call for urgent legal regulation is a governance response to this AI Incident. Since the harm is occurring and the AI system's use is central to the issue, this qualifies as an AI Incident rather than a hazard or complementary information.
Yukari Nakase voices alarm over the future of generative AI fake videos: "We must legislate preemptively" - 芸能 : 日刊スポーツ

2023-11-09
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake videos, which can plausibly lead to harms such as misinformation, reputational damage, and social disruption. The article emphasizes the risk of future harm and the necessity of legal frameworks to prevent such outcomes. Since no actual harm has been reported yet, but the risk is credible and foreseeable, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Nippon TV furious over use of its logo in fake Kishida video; poster apologizes, saying "I want my quiet life back" | 女性自身

2023-11-07
WEB女性自身
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically generative AI used to create manipulated video content (deepfake). The harm has materialized as the fake video spread misinformation and damaged reputations, which falls under harm to communities and violation of rights. The misuse of the broadcaster's logo also constitutes unauthorized use of intellectual property. Therefore, this is an AI Incident because the AI-generated fake video directly led to harm through misinformation and rights violations.
Nippon TV furious over fake Kishida video: "absolutely unforgivable"; apparent creator apologizes: "shaking", "feel sick" | ガジェット通信 GetNews

2023-11-05
ガジェット通信 GetNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to create a fake video (deepfake) that misrepresents a public figure and a news organization, leading to misinformation and reputational harm. This constitutes harm to communities and a violation of intellectual property rights, fulfilling the criteria for an AI Incident. The apology and removal of the videos are responses but do not negate the fact that harm has occurred. Therefore, this is classified as an AI Incident.
"Vulgar AI fake video" of Prime Minister Kishida spreads; Nippon TV furious: "absolutely unforgivable"; creator pleads "please halt any lawsuits" but the backlash continues ★5 [Hitzeschleier★]

2023-11-07
hen-news.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a fake video (deepfake) of a public figure, which has been disseminated and caused reputational and social harm. This meets the definition of an AI Incident because the AI system's use directly led to harm to communities (defamation, misinformation) and violations of rights (reputational harm, misuse of broadcast logos). The harm is realized, not just potential, as the video has been widely spread and caused public and legal reactions. Therefore, this is classified as an AI Incident.
"Video" of Prime Minister Kishida also spreads: "fake videos" misusing generative AI; what countermeasures are needed? | 日テレNEWS NNN

2023-11-09
日テレNEWS
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake videos that have been disseminated widely, causing misinformation and potential harm to individuals and communities. The AI system's use has directly led to the spread of false and harmful content, fulfilling the criteria for an AI Incident due to harm to communities and violation of trust. The article describes realized harm (spread of fake videos and fraudulent ads) rather than just potential harm, so it is classified as an AI Incident rather than an AI Hazard or Complementary Information.
Fake videos, including one of "Prime Minister Kishida": news footage "manipulated", likely with generative AI; how will the government respond? | 日テレNEWS NNN

2023-11-09
日テレNEWS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as generative AI used to create manipulated videos and audio (deepfakes) that impersonate public figures, including the Prime Minister. These AI-generated fake videos have been used maliciously to promote fraudulent investment schemes, causing harm to individuals and communities by spreading misinformation and enabling scams. The harm is realized and ongoing, as the videos are actively disseminated and have prompted government discussion. Therefore, this meets the criteria for an AI Incident due to direct harm caused by the AI system's use.
Fake video of the Prime Minister; Chief Cabinet Secretary: "it could damage democracy and may constitute a crime": 朝日新聞デジタル

2023-11-06
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was used to create a fake video of a political figure, which is being disseminated on social media. This misinformation can harm communities by undermining democracy and causing social confusion, which fits the definition of harm to communities. Since the harm is occurring through the spread of the AI-generated fake video, this qualifies as an AI Incident. The official's comments about potential legal consequences and societal harm further support the classification as an incident rather than a mere hazard or complementary information.
Fake video of Prime Minister Kishida spreads; Chief Cabinet Secretary: "it damages the foundations of democracy"

2023-11-06
毎日新聞
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was used to create a fake video that misrepresents the Prime Minister, leading to misinformation spread on social media. This misinformation harms the community by undermining trust and potentially destabilizing democratic foundations, which qualifies as harm to communities. Since the harm is occurring due to the AI-generated content, this is an AI Incident.
Chief Cabinet Secretary Matsuno

2023-11-06
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as the misuse of the Prime Minister's voice in videos on social media strongly suggests AI-generated or AI-manipulated content (deepfake or synthetic media). The harm is realized as the videos spread false information that can undermine democracy and cause social disruption, fitting the definition of an AI Incident. The government's response and concern further support the classification as an incident rather than a mere hazard or complementary information.
Fake video of Prime Minister Kishida spreads; Chief Cabinet Secretary: "it damages the foundations of democracy" (毎日新聞) - Yahoo!ニュース

2023-11-06
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was used to create a fake video that misrepresents a public figure, leading to misinformation spread on social media. This constitutes harm to communities by undermining trust and potentially damaging democratic processes. Since the harm is occurring through the dissemination of the AI-generated fake video, this qualifies as an AI Incident under the definition of harm to communities and violation of rights through misinformation.
Fake Kishida video "may constitute a crime and should not be made", Chief Cabinet Secretary Matsuno warns at press conference (日刊スポーツ) - Yahoo!ニュース

2023-11-06
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The event describes an AI-generated deepfake video that misrepresents a public figure, leading to misinformation and potential harm to democratic processes and social stability. The AI system's use in creating false content that is actively spreading and causing harm fits the definition of an AI Incident, as it directly leads to harm to communities and violations of rights through misinformation. The involvement of AI in generating the video and the resulting social harm justifies classification as an AI Incident rather than a hazard or complementary information.
"Please refrain, as it may constitute a crime": Prime Minister Kishida's...

2023-11-06
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a fake video of the Prime Minister generated by AI, which is misinformation causing social disruption and potential harm. The official's statement about the possibility of criminal liability underscores the harm caused. The AI system's use in generating false content that misleads the public and disrupts social order fits the definition of an AI Incident due to harm to communities and violation of democratic principles. Therefore, this event is classified as an AI Incident.
Fake video of the Prime Minister "damages the foundations of democracy": Chief Cabinet Secretary Matsuno: 時事ドットコム

2023-11-06
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated fake video of the Prime Minister that has been spread, which constitutes misinformation. The government official highlights the risk of such AI-generated false information causing societal harm and confusion, which fits the definition of harm to communities. Since the AI system's use has directly led to the dissemination of harmful misinformation, this qualifies as an AI Incident under the framework.
Fake video of Ukraine's commander-in-chief spreads; government issues warning | NHK

2023-11-09
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The use of generative AI to create a deepfake video that spreads false information and aims to disrupt government and military relations constitutes a violation of rights and causes harm to communities. Since the AI system's use has directly led to the spread of harmful misinformation with real societal impact, this qualifies as an AI Incident under the framework.
Chief Cabinet Secretary on fake Kishida video: "falsified government information should not be disseminated" | NHK

2023-11-06
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to create a fake video impersonating a public figure, which has been disseminated widely on social media. This has caused misinformation and social confusion, which is a harm to communities and potentially undermines democratic rights. The AI system's use directly led to this harm. Therefore, this qualifies as an AI Incident under the definition of harm to communities and violation of rights through misinformation caused by AI-generated content.
Fake video of Ukraine's top military commander spreads online: deepfake has him declaring "Zelensky is the enemy of our nation"

2023-11-09
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI technology to create a deepfake video that spreads false and harmful information. This misinformation campaign is intended to disrupt social order and cause harm to the community by fostering division and panic. Since the AI system's use has directly led to harm in the form of misinformation and potential social disruption, this qualifies as an AI Incident under the harm to communities category.
[深層NEWS] Information warfare over Ukraine and Israel: with AI, anyone can manipulate public opinion

2023-11-08
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology enabling the creation of fake videos and images used in information warfare, which is currently happening and affecting public perception. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities by spreading false information and manipulating public opinion. Therefore, it is classified as an AI Incident rather than a hazard or complementary information.
AI regulation: fake videos are no mere prank

2023-11-06
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of generative AI to create fake videos and images that have been widely viewed and caused confusion, as well as legal action taken against a perpetrator for defamation. The AI system's use directly led to harm in terms of reputational damage and social disruption. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities and violations of rights.
Fake Kishida video; Chief Cabinet Secretary Matsuno: "spreading false information damages the foundations of democracy"

2023-11-06
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create a fake video that misrepresents a public figure, which is a direct use of an AI system leading to harm in the form of misinformation and social disruption. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d) by spreading false information that can damage democratic processes and social trust. The government's response and discussion of international AI guidelines are complementary but do not negate the incident classification.
Fake video of the Prime Minister "damages democracy": Chief Cabinet Secretary Matsuno: 時事ドットコム

2023-11-06
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video generated using AI technology that falsely portrays the Prime Minister making sexual remarks. This misuse of AI-generated content has led to social disruption and undermines democratic processes, constituting harm to communities and potentially violating rights. Since the AI system's use has directly led to this harm, this qualifies as an AI Incident.
Fake video of Prime Minister Kishida making sexual remarks; Secretary Matsuno: "it damages the foundations of democracy"

2023-11-06
産経ニュース
Why's our monitor labelling this an incident or hazard?
The fake video uses AI-based deepfake technology to manipulate the Prime Minister's image and voice, creating false content that is being actively disseminated on social media. This misinformation can directly harm the social fabric and democratic processes, which qualifies as harm to communities under the AI Incident definition. The event involves the use and misuse of an AI system leading to realized harm, not just a potential risk, so it is classified as an AI Incident.
Digital Minister Taro Kono: "the problem is extremely serious", on the spread of the fake video of Prime Minister Fumio Kishida

2023-11-10
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI to fabricate a video falsely depicting the Prime Minister making indecent remarks. This AI-generated content has been spread on social media, causing reputational harm and misinformation dissemination. Such harm to communities and violation of rights fits the definition of an AI Incident, as the AI system's use directly led to these harms. The government's recognition of the problem's seriousness further supports the classification as an AI Incident.
Falsified government information should not be disseminated, says Chief Cabinet Secretary on fake PM video

2023-11-06
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions an AI-generated fake video (deepfake) of the Prime Minister being disseminated, which constitutes misinformation causing harm to communities by potentially undermining democratic foundations and social trust. The AI system's use in creating the fake video directly led to this harm. Therefore, this qualifies as an AI Incident due to realized harm from AI-generated misinformation affecting society and democratic processes.
Fake Kishida video spreads on social media; Chief Cabinet Secretary Matsuno: "falsified government information could damage the foundations of democracy" | FNNプライムオンライン

2023-11-06
FNNプライムオンライン
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI) was used to create a fake video of a political leader, which is being actively disseminated on social media. This misinformation can harm communities by undermining trust in government and democracy, constituting harm to communities and potentially violating rights related to truthful information. Since the fake video is already spreading, harm is occurring, making this an AI Incident. The official's comments about risks and future responses are complementary but the core event is the active spread of AI-generated misinformation causing harm.
Fake video of Ukraine's military commander spreads: "the president is our enemy" | 秋田魁新報電子版

2023-11-08
秋田魁新報電子版
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI used to create deepfake video) whose use has directly led to harm in the form of disinformation spreading, which harms communities by attempting to create panic and division within Ukraine's military and government. This fits the definition of an AI Incident because the AI-generated content is actively causing harm through misinformation and social disruption.
Chief Cabinet Secretary calls for halt to posting of false information over PM video

2023-11-06
神戸新聞
Why's our monitor labelling this an incident or hazard?
The article discusses the spread of fake videos (deepfakes) involving the Prime Minister, which are a form of misinformation. The use of AI to generate such fake videos is reasonably inferred, as deepfake technology typically involves AI systems. The harm described includes social confusion and potential damage to democratic foundations, which qualifies as harm to communities. However, the article focuses on a call to stop such postings rather than reporting actual incidents of harm caused by these videos. Since the harm is implied as potential and the event is a warning or appeal rather than a report of realized harm, this fits the definition of an AI Hazard, where AI-generated misinformation could plausibly lead to harm but no specific incident is detailed as having occurred yet.
Thumbnail Image

岸田首相の偽動画、1時間で作成 AI使いSNSに拡散:山陽新聞デジタル|さんデジ

2023-11-09
山陽新聞デジタル|さんデジ
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create manipulated video and audio content that falsely portrays a public figure in a damaging way. The AI system's outputs were disseminated on social media, leading to misinformation and potential harm to public trust and to the individual's reputation. This fits the definition of an AI Incident because the AI system's use directly led to harm (misinformation and reputational harm).
Thumbnail Image

ウクライナ軍司令官の偽動画拡散 「大統領はわれらの敵」(共同通信)|熊本日日新聞社

2023-11-08
熊本日日新聞社
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI used to create a deepfake video) whose use has directly led to harm in the form of spreading disinformation, which harms communities by attempting to divide the military and government and create panic. This fits the definition of an AI Incident because the AI-generated content is actively causing harm, not just posing a potential risk. The harm is to communities and national stability, which is a significant, clearly articulated harm under the framework.
Thumbnail Image

ウクライナ軍司令官の偽動画拡散

2023-11-08
鹿児島のニュース - 南日本新聞 | 373news.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI used to create deepfake videos) whose use has directly led to harm in the form of misinformation and social disruption, which qualifies as harm to communities. The disinformation campaign aims to create panic and division, fulfilling the criteria for an AI Incident under harm to communities. Therefore, this is classified as an AI Incident.
Thumbnail Image

岸田首相の偽動画、1時間で作成

2023-11-09
鹿児島のニュース - 南日本新聞 | 373news.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create a realistic but fake video and audio of a public figure, which was then spread on social media. This constitutes an AI system's use leading directly to harm in the form of misinformation and reputational damage, which falls under harm to communities and violation of rights. Since the harm is realized and the AI system's role is pivotal in creating and spreading the false content, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

ウクライナ軍司令官の偽動画拡散|埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題

2023-11-08
埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of generative AI used to create a deepfake video. The use of this AI-generated content has directly led to harm by spreading false information that could disrupt military operations and harm community trust, fitting the definition of harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and disinformation.
Thumbnail Image

偽情報投稿の中止呼びかけ|埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題

2023-11-06
埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of fake videos created with the Prime Minister's voice and image, which implies AI-generated deepfakes. The spread of such misinformation harms communities by undermining democratic foundations, fulfilling the harm criteria. The government's call to stop posting such fake information confirms the harm is occurring. Therefore, this event is classified as an AI Incident due to the direct involvement of AI-generated misinformation causing societal harm.
Thumbnail Image

岸田首相の偽動画、1時間で作成|埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題

2023-11-09
埼玉新聞|埼玉の最新ニュース・スポーツ・地域の話題
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI) to produce a fake video that misrepresents a public figure, leading to misinformation and reputational harm. This is a direct harm to communities through the spread of false information and manipulation of public perception. Therefore, it qualifies as an AI Incident because the AI system's use has directly led to harm.
Thumbnail Image

偽情報投稿の中止呼びかけ 官房長官、首相動画巡り

2023-11-06
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The article describes the spread of AI-generated fake videos (deepfakes) of the Prime Minister, which is a direct use of AI systems to create misleading content. This misinformation can harm communities by causing social confusion and undermining democratic foundations, fitting the definition of harm to communities. Since the harm is occurring and the AI system's role is pivotal in generating the fake videos, this qualifies as an AI Incident.
Thumbnail Image

岸田首相の偽動画、1時間で作成 AI使いSNSに拡散

2023-11-09
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and spread a fake video that misleads the public and harms the reputation of a political figure. This constitutes harm to communities and possibly a violation of rights, as the AI-generated content is used maliciously to deceive and cause social disruption. Since the harm is realized and the AI system's use is central to the incident, this qualifies as an AI Incident.
Thumbnail Image

スパゲティの木 | | 有明抄 | 佐賀新聞

2023-11-06
佐賀新聞LiVE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos that have been spread on social media, causing harm by misleading the public and potentially damaging reputations, which constitutes harm to communities and a violation of rights. This is a realized harm caused by the use of AI systems (deepfake generation). Therefore, this qualifies as an AI Incident. Additionally, the article discusses governance responses, but the primary focus is on the incident of harmful AI-generated misinformation.
Thumbnail Image

フェイクAI動画の次は「なりすまし」 岸田首相騙る偽アカウント、SNSで横行 犯罪予告、投資勧誘など悪質投稿も

2023-11-12
ねとらぼ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the use of generative AI to create fake videos of the Prime Minister making inappropriate statements. Additionally, the impersonation accounts on social media, likely leveraging AI-generated content or automated methods, post harmful content such as crime threats and fraudulent investment solicitations. These actions constitute violations of rights and cause harm to communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the fake videos and posts have been disseminated, causing public concern and prompting official responses.
Thumbnail Image

「岸田首相フェイク動画」にみる、生成AIとフェイクニュースの関係 加速する誤情報にどう対処すべきか

2023-11-14
ITmedia
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create and spread a fake news video, which is a clear example of AI-generated misinformation causing harm to communities by misleading the public. The harm is realized as the video was widely disseminated and prompted official warnings and government inquiries. Therefore, this qualifies as an AI Incident due to the direct role of the AI system in producing harmful misinformation.
Thumbnail Image

岸田首相のフェイク動画、お粗末すぎるが問題大あり GPT-4以降の迷走放置、グローバルには米大統領選への思惑か | JBpress (ジェイビープレス)

2023-11-10
JBpress(日本ビジネスプレス)
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake videos that misrepresent a public figure, leading to misinformation and potential reputational and societal harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (through misinformation) and potentially to individuals' reputations. The article describes realized harm from the AI-generated content, not just potential harm, and discusses responses to these harms, confirming the incident classification.
Thumbnail Image

「首相偽動画」が拡散、精巧化するディープフェイクのリスク 技術向上で簡易に

2023-11-14
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake video and audio synthesis) to create manipulated videos that were disseminated online, causing social confusion and harm to public trust and political discourse. The AI system's use has directly led to harm in the form of misinformation and reputational damage, which qualifies as harm to communities and a violation of rights. Because the article describes an actual case in which the AI-generated deepfake video was created and spread, this constitutes an AI Incident rather than a mere hazard or complementary information.
Thumbnail Image

岸田総理「フェイク動画騒動」 5年後にやってくる"やっかいな状況"とは | デイリー新潮

2023-11-10
デイリー新潮
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI) used to create a fake video that misrepresents a public figure, leading to public confusion and official backlash. The harm is realized as spreading misinformation, which harms communities by undermining trust and potentially affecting democratic processes. This fits the definition of an AI Incident because the AI system's use directly caused harm to communities and violated rights related to the dissemination of truthful information.