China Begins Mass Production of AI-Enabled Hypersonic Missiles

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese company Lingkong Tianxing Technology has started the world's first mass production of the YKJ-1000 hypersonic missile, which uses AI for autonomous target identification and evasion. The deployment of such AI-enabled weapons raises significant risks of harm due to their advanced destructive capabilities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and use of an AI-enabled hypersonic missile system, which could plausibly lead to significant harm if deployed in conflict, given its advanced targeting and guidance capabilities. However, since the article only reports the unveiling and technical features without any realized harm or incident, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's involvement is inferred from the guidance and control technologies described, which typically rely on AI for real-time decision-making. Therefore, this event is best classified as an AI Hazard due to the plausible future risk posed by the missile's capabilities.[AI generated]
AI principles
Accountability, Human wellbeing, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death)

Severity
AI hazard

AI system task
Recognition/object detection, Goal-driven organisation


Articles about this incident or hazard

Ahead of the US: Turkish drone hits target with beyond-visual-range air-to-air missile

2025-12-02
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system integrated into a military drone's radar and missile guidance system that directly led to the destruction of a target, which is harm to property and potentially to human life. The AI system's use in autonomous target detection and missile guidance is explicit and central to the event. The harm is realized (target destroyed), not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports an actual event where AI-enabled systems caused harm.

2025-12-01
guancha.cn
Why's our monitor labelling this an incident or hazard?
The 'Red Apple' UCAV integrates AI-enabled radar and missile guidance systems to autonomously detect, track, and engage a high-speed aerial target, leading to the destruction of the target drone. This is a direct use of AI in a lethal military context, causing harm to property (the target drone) and potentially to human life in broader military applications. The article explicitly describes the successful use of AI-enabled systems in combat testing, which meets the criteria for an AI Incident as the AI system's use directly led to harm. Although the target is a drone, the event marks a significant milestone in autonomous lethal AI weapon systems, which is a recognized harm category under the framework.
Why the YKJ-1000 (驭空戟-1000) missile is remarkable: a private company's innovative breakthrough in hypersonic technology

2025-11-29
China.com Military Channel
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI-enabled hypersonic missile system, which could plausibly lead to significant harm if deployed in conflict, given its advanced targeting and guidance capabilities. However, since the article only reports the unveiling and technical features without any realized harm or incident, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's involvement is inferred from the guidance and control technologies described, which typically rely on AI for real-time decision-making. Therefore, this event is best classified as an AI Hazard due to the plausible future risk posed by the missile's capabilities.
21st Century Business Herald reporter Luo Yiqi: On December 1, Tongyu Communication's shares hit the daily limit-up at the open, extending the limit-up run of the previous two trading days (November 27-28). The previous evening the company had issued an announcement on abnormal stock-price fluctuations, stating that its recent operations were normal, that there had been no major changes in its internal or external operating environment, and that it had found no recent public media reports of undisclosed material information that could have, or had, significantly affected its share price.

On November 25, however, Tongyu Communication announced that the Guangdong Tongyu Huazhen Aerospace Industry Fund Partnership (Limited Partnership), established jointly with professional investment institutions, would invest RMB 100 million specifically in Beijing Lingkong Tianxing Technology Co., Ltd. ("Lingkong Tianxing"). Investor-relations materials released by the company the same day noted that Jingji Communication, in which the fund has already invested, and Lingkong Tianxing, in which it plans to invest, both have business synergies with the company. The sustained unusual movement in its share price is likely connected to this investee.

According to a short video released on the evening of November 26 by CCTV's original short-video platform Xiao Yang Shipin (小央视频), the Chinese private company Lingkong Tianxing recently unveiled the YKJ-1000 (驭空戟-1000) hypersonic missile, with a range of 500-1,300 km, a flight speed of Mach 5-7, and a powered cruise time of 360 seconds; it can automatically identify targets and automatically evade threats.

The announcement describes Lingkong Tianxing as a commercial aerospace company devoted to hypersonic technology services and the development of aerospace engines and hypersonic vehicles. Hypersonics is among today's most advanced aerospace fields, and its achievements are feeding into new strategic equipment, high-speed aircraft, and various types of high-speed transport systems. Notably, this company working at the technological frontier is a private enterprise.

According to the announcement, Lingkong Tianxing was founded in 2012 and holds "specialised and innovative" (专精特新) "Little Giant" status. Materials the company sent to 21st Century Business Herald earlier this year state that in five years it has built technical barriers in near-space high-speed flight and established a low-cost, fast-response supply chain, with compound annual revenue growth above 200%. By the end of 2024, Lingkong Tianxing had completed more than 80 contracted projects, most of them serving the "national team".

The company has previously made several advances in supersonic aircraft. In October 2024, its "Yunxing" (云行) series supersonic demonstrator completed a test flight at over Mach 4 using a high lift-to-drag-ratio waverider aerodynamic layout, marking the move of its high-speed aircraft development into the engineering stage. In December, its "Jindouyun" (筋斗云) high-speed ramjet engine (codename "JINDOU400") flew successfully; publicly released footage showed the engine operating in flight for 45 seconds, verifying its stability and reliability and marking the series' transition from principle prototype to product. Under the company's plans, the "Cuantian Shihou" (窜天石猴) vehicle will make its first flight in 2026, a comprehensive test of the full system and a key step from technology validation to engineering deployment.

Before that first flight, however, the company has already made progress on hypersonic missiles.

2025-12-01
Stockstar (证券之星)
Why's our monitor labelling this an incident or hazard?
The article describes the development and investment in an AI-enabled hypersonic missile system with autonomous capabilities. While no actual harm or incident has occurred or been reported, the autonomous target recognition and threat avoidance features imply AI system involvement. The potential use of such weapons could plausibly lead to serious harms including injury, disruption, or violations of rights. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it highlights a credible future risk from AI-enabled military technology.
21Tech: Three limit-ups in three days! Why is Tongyu Communication being bid up so hard?

2025-12-01
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves the development and investment in an AI-enabled hypersonic missile system with autonomous capabilities, which qualifies as an AI system. The article does not report any actual harm or incident caused by this system but highlights its advanced capabilities and potential military applications. Given the nature of autonomous weapons, their development and proliferation constitute an AI Hazard due to the plausible risk of future harm such as injury, disruption, or violations of human rights. Since no harm has yet occurred, and the article focuses on progress and investment rather than an incident, the classification is AI Hazard.
The US will be furious! China's hypersonic missile is made with cement: of 100 possibilities considered, no one ever tried cement

2025-11-30
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system integrated into a hypersonic missile capable of autonomous target selection and swarm coordination, which is already mass-produced and operational. The missile's use and deployment directly relate to potential harm to human life, communities, and international stability, fulfilling the criteria for an AI Incident. The AI system's development and use have directly led to a significant military capability that can cause injury, harm, or disruption. The article's focus on the missile's operational status and strategic threat confirms realized or imminent harm rather than mere potential, distinguishing it from an AI Hazard or Complementary Information. Hence, the classification as AI Incident is appropriate.
Mainland private firm's hypersonic missile already in mass production; Tsai Cheng-yuan excitedly pounds the table

2025-12-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The missile system includes AI capabilities such as automatic target recognition and threat evasion, which qualifies it as an AI system. The event concerns the development and production of this AI-enabled weapon system, which could plausibly lead to significant harm if used in conflict, thus constituting an AI Hazard. Since no actual harm or incident is reported, it does not meet the criteria for an AI Incident. The article is not merely general AI news or a product launch without risk, as the weapon's autonomous features imply credible future risks.
The US will be furious! China's hypersonic missile is made with cement: of 100 possibilities considered, no one ever tried cement

2025-12-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system indirectly, as the missile's guidance and control systems use automotive-grade SOC chips originally designed for autonomous driving, which implies AI or AI-related technology in missile operation. The use of AI-enabled components in a weapon system that can be mass-produced and deployed at scale poses a credible risk of harm to communities and international security. Although no specific incident of harm is reported, the article highlights the plausible future harm from widespread deployment of such advanced missiles, which could disrupt critical infrastructure and cause injury or death in conflict scenarios. Therefore, this qualifies as an AI Hazard due to the credible potential for significant harm stemming from AI-enabled military technology.
The US is shocked! China's cement-missile technology breakthrough works every time

2025-12-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The missile system described incorporates AI components (consumer-grade cameras for guidance) that enable autonomous or semi-autonomous targeting capabilities. The article confirms the missile is in mass production and operational use, implying the AI system's use has directly led to a significant strategic military threat, which constitutes harm to communities and national security. The event is not merely a potential hazard but a realized incident involving AI-enabled military technology with direct implications for harm and disruption. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Shocking the world: a private company in Sichuan has built a hypersonic missile! Just how strong is its technology?

2025-11-30
m.163.com
Why's our monitor labelling this an incident or hazard?
The article describes the development and announcement of a hypersonic missile by a private company, which likely incorporates AI systems for navigation and targeting. While no harm has yet occurred, the missile's capabilities imply a credible risk of future harm, including injury, disruption, or geopolitical conflict. The event is not a realized incident but a plausible future risk stemming from AI-enabled military technology. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Chinese private firm plays its "cement missile" trump card: hypersonic weapons enter the bargain-price era

2025-12-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the missile's guidance and target recognition, inferred from the description of civilian camera modules adapted for target identification and the use of navigation chips, which typically involve AI or advanced algorithmic processing. The article does not report any actual harm or incident caused by these missiles yet but emphasizes the potential for significant future harm due to the low cost and mass production enabling widespread deployment. This aligns with the definition of an AI Hazard, where the AI system's development and use could plausibly lead to harm, including disruption of critical infrastructure or harm to communities in a conflict scenario. Therefore, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.
China's Feilong 300D suicide drone sold to Iran: 2,000 km range at a quoted price of just US$10,000

2025-12-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The Feilong 300D drone is an AI system due to its autonomous navigation and targeting capabilities using inertial navigation and satellite guidance with anti-jamming features. The article focuses on the sale and potential military use of this AI-enabled weapon system, which could plausibly lead to significant harm (destruction of military targets, escalation of conflict). Since the article does not report an actual incident of harm but highlights the potential for large-scale harm from proliferation and use, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information, as it directly concerns the potential for harm from an AI system's use in warfare.
Big news! Lingkong Tianxing says the baseline version of its 1,300 km-range hypersonic missile is already in mass production!

2025-11-29
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system integrated into hypersonic missiles capable of autonomous decision-making and coordinated attacks, which are already in mass production. The AI's role in target discrimination and threat evasion directly relates to potential harm to human life and security. The article describes the system as operational and deployed, not merely a future possibility, indicating realized harm potential. Given the military application and the direct involvement of AI in lethal autonomous weapons, this constitutes an AI Incident under the framework, as it involves direct or indirect harm to people and communities through the use of AI-enabled weaponry.
Private company that built a Mach 7 hypersonic missile says it has completed live test launches

2025-11-29
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the development and actual test flights of hypersonic missiles, which are highly complex systems likely incorporating AI for guidance, control, and navigation. The article explicitly mentions completed test flights, indicating the AI system is operational. While no direct harm is reported, the nature of hypersonic missiles as weapons with high destructive potential means their development and deployment could plausibly lead to significant harm. Therefore, this event qualifies as an AI Hazard due to the credible risk of future harm stemming from the AI system's use in weaponry. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the development and testing of a potentially harmful AI system.
A private company built a hypersonic missile? Rubbing salt in America's wounds again

2025-12-02
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system integrated into a hypersonic missile for autonomous decision-making and coordinated attacks, which clearly fits the definition of an AI system. The development and deployment of such a weapon system directly relate to potential harm in terms of military conflict and security risks. Given that the missile is already produced and capable of precise strikes, the AI system's use has a direct link to potential harm (harm to communities, property, and possibly human life). Therefore, this qualifies as an AI Incident due to the realized deployment of an AI-enabled weapon system with significant harm potential.
The sky has fallen for Americans! China's hypersonic missile is made with cement: of 100 possibilities considered, no one ever tried cement

2025-12-02
m.163.com
Why's our monitor labelling this an incident or hazard?
The missile system explicitly incorporates AI capabilities (automatic target recognition, threat evasion, and planned AI-enabled swarm coordination). The article discusses the missile's deployment and strategic impact, which directly relates to harm to people and communities through military conflict. The AI system's use in this weapon system is a direct factor in the potential for injury, death, and disruption of critical infrastructure. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The article does not merely warn of potential harm but describes an operational AI-enabled weapon system with significant implications.
How cost-effective are "bargain-price" hypersonic missiles? Global defence may be reshuffled

2025-12-03
China.com Technology Channel
Why's our monitor labelling this an incident or hazard?
The missile's autonomous target recognition and threat evasion functions imply AI system involvement. The article does not report any actual use causing harm but highlights the mass production and low cost, which could plausibly lead to widespread deployment and associated harms. The event concerns the development and use of an AI-enabled weapon system with high potential for misuse and harm, fitting the definition of an AI Hazard. Since no realized harm is described, it is not an AI Incident. It is not merely complementary information because the focus is on the potential impact and risk of the AI system's deployment, not on responses or updates to past incidents.