AI-Driven Disinformation Campaign Targets Japanese Election

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

During Japan's House of Representatives election, around 400 China-linked social media accounts used generative AI to produce and spread disinformation targeting Prime Minister Sanae Takaichi. The campaign involved AI-generated images and coordinated posts, aiming to manipulate public opinion and undermine the democratic process.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate fake images and coordinate mass disinformation campaigns that have directly led to harm by manipulating public opinion and undermining democratic processes, which qualifies as harm to communities. The AI system's use in generating misleading content and coordinating fake accounts is explicit and central to the incident. Therefore, this is an AI Incident due to the realized harm caused by AI-enabled disinformation operations.[AI generated]
AI principles
Accountability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public; Government

Harm types
Public interest; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Takaichi Sanae Targeted in Frenzied Smear Campaign! She Reveals the CCP's Startling Calculation: Taiwan Is the Main Battlefield | News | NOWnews今日新聞

2026-02-23
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images and coordinate mass disinformation campaigns that have directly led to harm by manipulating public opinion and undermining democratic processes, which qualifies as harm to communities. The AI system's use in generating misleading content and coordinating fake accounts is explicit and central to the incident. Therefore, this is an AI Incident due to the realized harm caused by AI-enabled disinformation operations.

Chinese Cognitive Warfare Interferes in the Election! Yaita Akio Warns: Taiwan and Japan Face the Same Challenge and Should Deepen Cooperation | International News | Global | NOWnews今日新聞

2026-02-23
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate and disseminate disinformation at scale during an election, which is a direct use of AI leading to harm by manipulating public opinion and interfering with democratic processes. The article details the deployment of thousands of AI-generated posts and images by coordinated accounts, which is a clear example of AI-enabled cognitive warfare causing harm to communities and political rights. The harm is realized, not just potential, as the disinformation campaign actively targeted a political figure and party during an election period. Hence, it meets the criteria for an AI Incident.

Nikkei Reveals 400 Chinese Accounts Using AI Collaboration to Wage Cognitive Warfare Against Takaichi Sanae | International News | Global | NOWnews今日新聞

2026-02-22
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Chinese-operated accounts to generate and coordinate disinformation campaigns during an election, directly causing harm to communities by manipulating public opinion and potentially interfering with democratic processes. The AI involvement is clear through AI-generated images and AI collaboration tools used to accelerate the spread of harmful content. The harm is realized and ongoing, as the disinformation has reached millions and affected the election environment. Hence, this is an AI Incident rather than a hazard or complementary information.

自由日日shoot》China Floods Japan's Election with Massive Disinformation; Takaichi: Drawing on Taiwan's Experience to Upgrade National Intelligence Defense - International - 自由時報電子報

2026-02-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models producing images and text) to create and spread false information during an election, directly impacting the democratic process and public perception, which is a harm to communities. The article reports that the disinformation campaign has already occurred and influenced voters, fulfilling the criteria for an AI Incident. The AI system's use in generating and amplifying false narratives is central to the harm described, not merely a potential risk or background context.

Meddling in Japan's House of Representatives Election! Hundreds of Chinese AI Accounts Covertly Wage "Anti-Takaichi Cognitive Warfare" - International - 自由時報電子報

2026-02-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating content and images used in a coordinated disinformation campaign that influenced public perception during an election, which is a clear harm to communities and a violation of democratic rights. The AI-generated accounts and content directly contributed to this harm, fulfilling the criteria for an AI Incident.

Nikkei Reveals 400 Chinese Accounts Suspected of Meddling in the House of Representatives Election, Spreading Content Unfavorable to Takaichi | International | 中央社 CNA

2026-02-22
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems explicitly mentioned as generative AI for image creation and advanced text generation to spread disinformation. The coordinated operation of hundreds of accounts with AI-generated content directly led to harm by influencing public discourse and potentially undermining democratic processes, which is harm to communities. The article describes realized harm (disinformation spread during an election) and the AI system's pivotal role in enabling this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Nikkei Exposes "400 Chinese Accounts" Waging Anti-Takaichi Cognitive Warfare Before Japan's House of Representatives Election... He Warns That With Taiwan's Local Elections Approaching, Beware of One Type of Account on Social Media - 今周刊

2026-02-22
businesstoday.com.tw
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to produce coordinated disinformation content that was actively disseminated during an election, directly impacting political discourse and social trust. The involvement of AI in generating and optimizing misleading posts and images is clear, and the harm to communities through manipulation of democratic processes is realized. This meets the definition of an AI Incident, as the AI system's use has directly led to harm to communities and a violation of democratic rights. The event is not merely a potential risk or a complementary update but a concrete case of AI-enabled harm.

CCP Cyber Army Launches Large-Scale Attack on Takaichi Sanae! She Reveals the Motive Behind It and Warns: Taiwan Is the Main Battlefield | Politics | Newtalk新聞

2026-02-23
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images and coordinate large-scale misinformation campaigns, which directly harm communities by spreading false information and attempting to influence election outcomes. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and political manipulation. The article describes realized harm rather than just potential harm, and the AI-generated content is central to the incident.

Nikkei Reveals 400 Chinese Accounts Suspected of Meddling in the House of Representatives Election, Spreading Content Unfavorable to Takaichi | International Focus | International | 經濟日報

2026-02-22
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for images and coordinated AI-assisted social media operations) in the active dissemination of false and misleading information during an election, which is a direct cause of harm to communities by undermining democratic integrity and public discourse. The coordinated nature and AI involvement in content generation and distribution meet the criteria for an AI Incident, as the harm is realized and the AI system's role is pivotal in the incident.

Interfering in Japan's Election? Nikkei: 400 Chinese Accounts Suspected of Launching "Anti-Takaichi" Cognitive Warfare│TVBS新聞網

2026-02-22
TVBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated images and coordinated AI-driven behavior by numerous accounts to spread disinformation targeting a political figure during an election. The coordinated use of AI to manipulate public opinion and spread false narratives constitutes harm to communities and political processes, fulfilling the criteria for an AI Incident. The harm is realized as the disinformation is actively disseminated and influences public discourse, not merely a potential risk. Hence, the event is classified as an AI Incident.

Before Japan's Election, China-Linked Accounts Suspected of Using AI-Generated Content to Wage Cognitive Warfare Against Takaichi Sanae

2026-02-22
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images and content used by nearly 400 social media accounts to conduct a coordinated campaign against a political figure. The use of AI in generating these accounts and content directly contributes to the spread of misinformation and manipulation, which harms communities by undermining trust and potentially affecting election outcomes. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities through disinformation and cognitive manipulation during an election period.

Japanese Media Reveal 400 Chinese Accounts Suspected of Meddling in the House of Representatives Election | 台灣大紀元

2026-02-22
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create images and coordinated AI-driven social media accounts spreading disinformation during an election. This meets the definition of an AI system's use leading to harm (harm to communities via misinformation and election interference). The harm is realized, not just potential, as the disinformation was actively spread during the election period. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

CCP Cyber Army Repeatedly Attacks Takaichi Sanae! She Reveals the Anxiety Behind It: Taiwan Is the Main Battlefield | Politics | 三立新聞網 SETN.COM

2026-02-23
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images and coordinating numerous accounts to spread false narratives targeting a political figure, which directly harms communities by undermining democratic processes and spreading misinformation. The use of AI-generated content and large-scale coordinated accounts fits the definition of an AI system's use leading to harm. The article documents actual ongoing disinformation campaigns, not just potential risks, thus qualifying as an AI Incident rather than a hazard or complementary information.

Nikkei: "400 Fake Chinese Accounts" Posted Against Takaichi Sanae; Suspected Election-Meddling Tactics Exposed

2026-02-22
mnews.tw
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and distribute coordinated disinformation through fake social media accounts. This activity has directly led to harm to communities by spreading false narratives and attempting to influence election outcomes, which fits the definition of an AI Incident. The article describes realized harm (disinformation spread) and the AI system's role is pivotal in producing fluent language and images, making this an AI Incident rather than a hazard or complementary information.

Breaking News / Media Reveal 400 Chinese Social Media Accounts "Meddling in Japan's Election," Spreading Content Unfavorable to Takaichi - 民視新聞網

2026-02-22
民視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI images and coordinated account behavior indicative of AI system involvement. The disinformation spread during an election constitutes harm to communities, fulfilling the criteria for an AI Incident. The AI system's use in generating and amplifying false content directly contributed to the harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Simplified Chinese Characters Left Behind in Chinese Accounts' "Anti-Takaichi Posts"! He Exposes the Infiltration Scheme, Mentions Taiwan, and Says He Warned of This as Early as 2018 - 民視新聞網

2026-02-23
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of generative AI images and sophisticated language generation to produce coordinated disinformation campaigns. The use of these AI systems directly led to harm by spreading false narratives that could influence election outcomes and public perception, which is a harm to communities and a violation of democratic rights. The article also references similar past incidents and governmental responses, confirming the realized harm and the AI system's pivotal role in the incident.

394 Chinese-Linked Accounts Found in "Anti-Takaichi Opinion Manipulation"

2026-02-22
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated images by coordinated accounts spreading disinformation during an election, which is a clear example of AI system use causing harm to communities through manipulation of public opinion. The AI system's role in generating images and enabling large-scale coordinated campaigns directly contributes to the harm described. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm to communities and violations of rights.

"400 Chinese-Linked Social Media Accounts Waged an Opinion Operation During Japan's General Election"

2026-02-22
YTN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it explicitly mentions the use of AI-generated videos in coordinated information operations. The coordinated spread of disinformation by numerous accounts directly harms the democratic process and community trust, fulfilling the harm to communities criterion. The AI system's use in generating content and amplifying disinformation directly led to this harm. Although the impact was described as limited, the harm is realized and significant enough to classify as an AI Incident rather than a hazard or complementary information.

"Some 400 Chinese-Linked Social Media Accounts Fostered 'Anti-Takaichi' Sentiment During Japan's General Election"

2026-02-22
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated videos by coordinated social media accounts engaging in information operations. The coordinated spread of disinformation targeting a political figure during an election directly harms communities by undermining democratic processes and public trust. The article confirms the presence and use of AI in the disinformation campaign, and the harm is realized as the disinformation was actively spread, meeting the criteria for an AI Incident rather than a hazard or complementary information.

"Organized Opinion Manipulation" by Outside Forces Rocked Japan's House of Representatives Election... Chinese Election Interference? | 아주경제

2026-02-23
아주경제
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate images and coordinate social media accounts to manipulate public opinion during an election, which directly harms democratic integrity and social cohesion. The AI-generated content and the strategic deployment of accounts to spread misinformation and psychological operations meet the criteria for an AI Incident, as the AI system's use has directly led to harm to communities and violations of rights. The article provides concrete evidence of realized harm rather than potential harm, distinguishing it from an AI Hazard or Complementary Information.