AI-Generated Fake Video Claims Chinese J-20 Fighter Can Vertically Take Off, Spreads Disinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated video falsely depicting China's J-20 fighter jet performing vertical takeoff was widely shared by Chinese officials and social media, misleading the public and serving as military propaganda. Fact-checks by French media and experts confirmed the video was fabricated, highlighting AI's role in spreading harmful disinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that the video claiming the J-20 can perform vertical takeoff is AI-generated and physically impossible, confirmed by military experts. The AI-generated fake video is part of a disinformation campaign by Chinese authorities, which constitutes harm to communities through misinformation. Since the AI system's use directly led to the spread of false information with potential geopolitical and social consequences, this qualifies as an AI Incident under the framework's definition of harm to communities and violation of rights through misinformation.[AI generated]
AI principles
Transparency & explainability
Accountability
Robustness & digital security
Democracy & human autonomy
Respect of human rights
Human wellbeing

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
General public

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Chinese Military Fake News Propaganda: Claim That the J-20 Can Take Off and Land Vertically Debunked by French Media - International - Liberty Times Net

2025-06-15
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the video claiming the J-20 can perform vertical takeoff is AI-generated and physically impossible, confirmed by military experts. The AI-generated fake video is part of a disinformation campaign by Chinese authorities, which constitutes harm to communities through misinformation. Since the AI system's use directly led to the spread of false information with potential geopolitical and social consequences, this qualifies as an AI Incident under the framework's definition of harm to communities and violation of rights through misinformation.
Chinese Military Fake News Propaganda: French Media Reveal J-20 Vertical Takeoff Claim Is AI-Generated | International | Central News Agency (CNA)

2025-06-15
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated video falsely showing the J-20 fighter jet performing vertical takeoff, which is physically impossible and factually incorrect. The video was widely shared by Chinese officials and social media accounts as part of a military propaganda campaign. The AI system's role in generating this false content directly contributes to the spread of disinformation, which harms communities by misleading the public and distorting international military perceptions. This meets the criteria for an AI Incident because the AI system's use has directly led to harm through misinformation and manipulation of public opinion.
Chinese Communist Party Military Fake News Propaganda! Claim That the J-20 "Can Take Off and Land Vertically" Debunked by French Media as an AI-Generated Animation | International | SETN.COM

2025-06-15
SETN News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the video claiming the J-20's vertical takeoff capability is AI-generated and false. The AI system's use in fabricating this misleading content directly contributes to the spread of military disinformation, which harms communities by distorting public understanding and potentially escalating geopolitical tensions. The harm is realized as the misinformation is actively shared by officials and widely viewed, fulfilling the criteria for an AI Incident under harm to communities. The AI system's role is pivotal in generating the fake video, making this a clear case of AI Incident rather than a hazard or complementary information.
Advanced Combat Capability: AI Assistance Lets a Single Pilot Operate a Two-Seat Jet

2025-06-16
Ta Kung Pao
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the development and use phases for military aircraft, with AI assisting pilots and enabling autonomous functions. While the AI integration in fighter jets could plausibly lead to significant harms (e.g., in warfare scenarios), the article only describes ongoing development, testing, and potential future applications without any realized harm or malfunction. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has yet occurred.
AI-Generated Images of J-16 Fighter Circulate Online; Rear Cockpit May House an Intelligent Robot

2025-06-17
on.cc (Oriental Daily)
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system integrated into a military fighter jet, which could plausibly lead to significant harms such as injury, disruption, or violations of rights if deployed in conflict. However, since the article only reports on the AI system's development and testing phase without any actual harm or incident occurring, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's presence and potential for future harm are reasonably inferred from the description of its intended autonomous attack decision-making role.
"沒人敢再看輕大陸武器了!"巴黎航展引台輿論熱議

2025-06-17
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article discusses AI-related military equipment (e.g., drones) and their operational use but does not report any realized harm or incident caused by AI systems. It also does not present a credible or imminent risk of harm from these AI systems. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. The article provides contextual information about AI-enabled military technology and public perception, which fits the definition of Complementary Information.