Iranian Revolutionary Guard Utilizes AI for Military Targeting


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Iran's Revolutionary Guard leader, Major General Hossein Salami, announced the use of AI in military operations to accurately and swiftly target ships and aircraft. While emphasizing ethical considerations, Salami highlighted AI's role in identifying targets without harming innocent crew members, reflecting potential future military applications of AI technology.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems supporting military targeting and operational decisions, which are likely autonomous or semi-autonomous AI systems used to identify and select targets. The use of AI in this context directly relates to potential harm, as it supports military actions that could lead to injury, harm to persons, or damage to property. Although the article emphasizes ethical considerations and avoidance of harm to innocent personnel, the deployment of AI in military targeting inherently involves risks of harm. Therefore, this event qualifies as an AI Hazard because it describes the use and development of AI systems that could plausibly lead to harm in military conflict. However, since the article does not report any actual harm or incident resulting from AI use but rather discusses capabilities and intentions, it is best classified as an AI Hazard rather than an AI Incident.[AI generated]
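The three-way triage the rationale applies (realized harm → AI Incident; credible but unrealized harm → AI Hazard; otherwise context only) can be sketched as a small decision rule. This is an illustrative sketch, not the monitor's actual implementation; the function and flag names are hypothetical.

```python
def classify_event(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Triage an AI-related news event (hypothetical sketch of the monitor's rule).

    Realized harm -> "AI Incident"; credible but unrealized harm ->
    "AI Hazard"; AI-related context with no credible harm ->
    "Complementary Information".
    """
    if not ai_involved:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"
    if harm_plausible:
        return "AI Hazard"
    return "Complementary Information"


# The Salami announcement: AI is used for military targeting, no harm is
# reported, but deployment in lethal operations makes harm plausible.
label = classify_event(ai_involved=True, harm_realized=False, harm_plausible=True)
# label == "AI Hazard"
```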
AI principles
Accountability
Safety
Respect of human rights
Transparency & explainability
Democracy & human autonomy
Robustness & digital security

Industries
Government, security, and defence
Mobility and autonomous vehicles
Logistics, wholesale, and retail
Travel, leisure, and hospitality

Affected stakeholders
General public

Harm types
Physical (death)
Physical (injury)
Economic/Property
Public interest
Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


IRGC commander: AI helps us target ships and aircraft with precision and speed

2025-01-30
Sputnik Arabic
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting and weaponry, including AI-guided missiles launched from drones. These systems are actively used in military operations, which inherently carry risks of injury, death, and destruction. The AI's role in selecting targets and guiding strikes means it directly contributes to harm. Therefore, this qualifies as an AI Incident under the definition, as the AI system's use has directly led to harm or the potential for harm in conflict situations.

IRGC: AI supports our capabilities in the air and at sea

2025-01-30
24.ae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems supporting military targeting and operational decisions, which are likely autonomous or semi-autonomous AI systems used to identify and select targets. The use of AI in this context directly relates to potential harm, as it supports military actions that could lead to injury, harm to persons, or damage to property. Although the article emphasizes ethical considerations and avoidance of harm to innocent personnel, the deployment of AI in military targeting inherently involves risks of harm. Therefore, this event qualifies as an AI Hazard because it describes the use and development of AI systems that could plausibly lead to harm in military conflict. However, since the article does not report any actual harm or incident resulting from AI use but rather discusses capabilities and intentions, it is best classified as an AI Hazard rather than an AI Incident.

IRGC commander: AI helps us target ships and aircraft with precision and speed | Mustaqbal Web

2025-01-30
Mustaqbal Web
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to identify and target ships and aircraft in military operations, which involves the use of AI in potentially lethal force applications. This constitutes the use of AI systems in a way that can directly lead to harm to persons (crew members on targeted ships or aircraft) and property (ships, aircraft). The involvement of AI in selecting targets and speeding up attack decisions is a direct use of AI that can cause harm, fitting the definition of an AI Incident. Although the article mentions ethical considerations and attempts to avoid harming innocent personnel, the AI's role in targeting in military conflict inherently involves risk of injury or death, thus meeting the criteria for an AI Incident rather than a hazard or complementary information.

IRGC commander: We use AI to target ships and aircraft

2025-01-30
Al-Shorouk
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is used in targeting ships and aircraft with high precision and speed, including missile launches from drones equipped with AI. This is a direct use of AI systems in military operations that can cause injury, death, and destruction, fulfilling the criteria for an AI Incident. The harm is direct and materializes through the use of AI-enabled weapons. The article also discusses ethical considerations but confirms the operational use of AI in targeting, which is a direct cause of harm or potential harm. Therefore, this is classified as an AI Incident.

IRGC commander: AI enhances the precision of targeting ships and aircraft

2025-01-30
AlJadeed.tv
Why's our monitor labelling this an incident or hazard?
The involvement of AI in military targeting systems that can directly lead to harm (injury or death) to people aboard targeted ships or aircraft qualifies this as an AI Incident. The AI system's use in enhancing targeting precision is directly linked to potential injury or harm to persons, fulfilling the criteria for an AI Incident under harm category (a). Although the commander mentions ethical considerations and intent to avoid harm to innocent crew, the use of AI in lethal targeting inherently involves direct risk of harm.

IRGC commander-in-chief: We observe ethical considerations in our use of AI

2025-01-29
iranintl.com
Why's our monitor labelling this an incident or hazard?
The article mentions AI use in military applications and ethical considerations but does not report any actual incident or harm caused by AI systems. There is no indication of malfunction, misuse, or harm, nor a credible imminent risk described. Therefore, it is not an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context on governance and ethical considerations related to AI use in a specific sector.

Major General Salami: We will make extensive use of AI in military fields

2025-01-29
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in military applications, including image recognition and targeting. While it discusses ethical considerations and operational use, it does not report any actual harm or incident caused by AI. The content is forward-looking and discusses potential uses and ethical frameworks, which fits the definition of an AI Hazard, as the development and use of AI in military targeting could plausibly lead to harm. There is no description of a realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the focus is on the potential and intended use of AI in military contexts, which carries plausible risk of harm.

Major General Salami: We are making extensive use of AI in military fields

2025-01-29
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for military targeting and decision-making, which can directly lead to harm (injury or death) in conflict scenarios. Although the commander emphasizes ethical use and minimizing harm, the deployment of AI in military targeting inherently carries risks of harm. Since no actual incident of harm is reported but the use of AI in military targeting is ongoing or imminent, this fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving injury or harm to persons. The article does not describe a realized harm event, so it is not an AI Incident. It is more than general AI news or complementary information because it highlights the potential for harm in military AI applications.

Using AI in military fields while observing ethical considerations

2025-01-29
Jamejam Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military operations, such as identifying targets and minimizing harm to non-combatants, indicating the presence of AI systems. However, it only discusses intentions and ethical considerations without reporting any actual harm or incidents resulting from AI use. The focus is on potential future applications and the dual-use nature of technology, which could plausibly lead to harm if misused or malfunctioning occurs. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

The use of AI in military fields

2025-01-30
Jamejam Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and development of AI systems in military applications, such as targeting ships with precision to avoid harming crew, which implies AI system involvement. However, it does not describe any realized harm, malfunction, or misuse resulting in injury, rights violations, or other harms. Instead, it focuses on future-oriented statements about AI's role and ethical use, as well as national strategic goals. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is not merely general AI news or product launch, but rather a policy and ethical discourse, which fits best as Complementary Information providing context and governance-related perspectives on AI in military use.

Iran to increase use of AI in military areas: IRGC chief

2025-01-29
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the IRGC's intention to expand AI use in military fields, which involves AI system development and use with potential for harm. Although no harm has yet occurred, the deployment of AI in military operations, especially in battle scenarios, presents a credible risk of injury, disruption, or other serious harms. Therefore, this situation qualifies as an AI Hazard rather than an Incident, as the harm is plausible but not yet realized.

IRGC equipped with AI for complex military operations: Chief commander

2025-01-29
IRNA English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used for complex military operations, such as target identification and air defense, which involve AI development and use. However, it does not report any realized harm, injury, violation of rights, or disruption caused by these AI systems. The content points to the potential for AI to impact military operations, which could plausibly lead to harm in the future, but no actual incident is described. Therefore, this qualifies as an AI Hazard due to the plausible future risk associated with AI-enabled military capabilities, especially in a conflict context.

IRGC's Onshore Missile Facility Comes into Operation - Politics news - Tasnim News Agency

2025-02-02
Tasnim News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered missiles with advanced autonomous features that improve their targeting and evasion capabilities. These missiles are designed for military use with long-range and high destructive power, which inherently poses a significant risk of harm to people, property, and communities if used. The development and deployment of such AI-enabled weapons systems constitute a plausible and credible risk of harm, qualifying this event as an AI Hazard. There is no indication that these missiles have been used yet to cause harm, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it involves the development and operationalization of AI systems with clear potential for harm.

IRGC to Use AI for Military Applications - Politics news - Tasnim News Agency

2025-01-30
Tasnim News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military drones and missiles that have been deployed in war games, indicating active use of AI systems in potentially lethal operations. The AI systems are used to detect, target, and strike enemy vessels and targets with precision, which directly relates to harm to persons and property. The involvement of AI in these military applications and the actual firing of AI-enabled missiles constitute an AI Incident as the AI's use has directly led to harm or the potential for harm in a conflict context.

IRGC chief highlights AI's growing role in military and civilian sectors

2025-01-29
Tehran Times
Why's our monitor labelling this an incident or hazard?
The article discusses the development and intended use of AI systems in military and civilian contexts, emphasizing future capabilities and strategic integration. There is no indication of actual harm, violation of rights, or disruption caused by AI at this time. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides complementary information about AI's evolving role and governance considerations in Iran's military and civil sectors.

IRGC to use AI in air defense, naval operations

2025-01-30
GlobalSecurity.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the planned use of AI in military drones and air defense systems by the IRGC, which are AI systems by definition. The use of AI for targeting and defense in military contexts inherently carries a credible risk of harm to persons and property, as well as potential violations of human rights. Since the article discusses plans and intentions rather than realized harm, it fits the definition of an AI Hazard, reflecting plausible future harm from AI deployment in military operations.

IRGC to reveal new missile and defense systems Monday - Shafaq News

2025-02-01
Shafaq News
Why's our monitor labelling this an incident or hazard?
The article mentions AI developments in the context of military technology but does not report any incident or harm caused by AI systems. The announcement of new military AI technologies and defense systems could imply potential future risks, but the article does not provide details suggesting a credible or imminent AI hazard. Therefore, this is best classified as Complementary Information, providing context on AI-related military advancements without describing an AI Incident or AI Hazard.