Greece Launches AI-Enabled Autonomous Weapons and Military 5G Network Programs


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Hellenic Center for Defense Innovation (ELKAK) announced procurement competitions to develop AI-powered autonomous loitering munitions (drone swarms) capable of lethal targeting, and a modular military 5G communications network. These AI-enabled systems pose credible risks of future harm, including injury or rights violations, if deployed in conflict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and planned use of AI systems in military applications, including autonomous weapons and AI-enabled command systems. However, no actual harm, malfunction, or misuse has occurred yet. The article focuses on the announcement and strategic planning stages, which could plausibly lead to future harms given the nature of autonomous weapons and military AI, but no incident or direct harm is reported. Therefore, this qualifies as an AI Hazard due to the credible potential for future harm from these AI-enabled defense technologies, rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Safety
Respect of human rights
Robustness & digital security
Accountability
Transparency & explainability
Democracy & human autonomy
Privacy & data governance
Human wellbeing

Industries
Government, security, and defence
Robots, sensors, and IT hardware
Digital security
IT infrastructure and hosting
Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Physical (death)
Physical (injury)
Human or fundamental rights
Public interest

Severity
AI hazard

Business function
Research and development
ICT management and information security
Procurement
Monitoring and quality control

AI system task
Recognition/object detection
Goal-driven organisation
Reasoning with knowledge structures/planning
Event/anomaly detection
Forecasting/prediction


Articles about this incident or hazard


How the Armed Forces will acquire artificial intelligence and machine learning | Pagenews.gr

2025-07-03
Pagenews.gr
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of AI systems in military applications, including autonomous weapons and AI-enabled command systems. However, no actual harm, malfunction, or misuse has occurred yet. The article focuses on the announcement and strategic planning stages, which could plausibly lead to future harms given the nature of autonomous weapons and military AI, but no incident or direct harm is reported. Therefore, this qualifies as an AI Hazard due to the credible potential for future harm from these AI-enabled defense technologies, rather than an AI Incident or Complementary Information.

The Greek Armed Forces move ahead with creating kamikaze drones with artificial intelligence

2025-07-01
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system—autonomous loitering munitions with AI and machine learning capabilities for lethal targeting. While no harm has yet occurred, the autonomous nature and lethal purpose of these AI systems create a credible risk of injury, death, or violations of human rights. The article focuses on the development and procurement of these AI-enabled weapons, which could plausibly lead to AI incidents involving harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI is central to the system's operation and the potential harms.

ELKAK: Two new calls for tenders for precision loitering munitions and a 5G communications network | OnAlert

2025-06-30
OnAlert
Why's our monitor labelling this an incident or hazard?
The announcement involves AI systems explicitly: autonomous loitering munitions with AI/ML for target recognition and adaptive behavior, and a 5G military communication network enhancing operational effectiveness. No harm has yet occurred, but the nature of these AI systems—autonomous weapons capable of lethal action and critical military communications—implies a credible risk of future harm (injury, disruption, or rights violations). The event is about the development and procurement process, not about an incident or realized harm. Hence, it fits the definition of an AI Hazard, as the AI systems could plausibly lead to significant harm in the future.

ELKAK: 15 million tender for a loitering munition for the Armed Forces

2025-06-30
Business Daily
Why's our monitor labelling this an incident or hazard?
The announcement explicitly involves AI systems integrated into autonomous loitering munitions capable of lethal targeting and coordinated swarm operations. The event concerns the development and planned deployment of these AI-enabled weapons, which could plausibly lead to injury or death (harm to persons) and other significant harms if used in conflict. Although no incident (harm) has yet occurred, the nature of the AI system and its intended use in military operations create a credible risk of future harm, qualifying this as an AI Hazard. The 5G network development supports these AI systems but does not itself constitute harm or hazard beyond enabling the AI weapons. Hence, the overall event is best classified as an AI Hazard rather than an Incident or Complementary Information.

ELKAK: Two tenders for cutting-edge solutions - sofokleous10.gr

2025-07-02
sofokleous10.gr
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned deployment of AI-enabled military systems (loitering munitions with swarm control and advanced communication networks) that have a high potential for misuse and could plausibly lead to significant harm, including injury to persons and disruption of critical infrastructure. However, the article describes the announcement of procurement and development plans, not an actual incident or harm that has occurred. Therefore, this constitutes an AI Hazard, as the development and intended use of these AI systems could plausibly lead to AI Incidents in the future.

ELKAK: Pioneering autonomous munition swarms and a modular military 5G network for the future of defence

2025-06-30
STARTUPPER
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and machine learning integration in autonomous loitering munitions capable of lethal targeting and swarm coordination, which are AI systems with high potential for harm. The event is about the launch of procurement competitions for these systems, not about any actual harm or malfunction. Since the development and potential deployment of such autonomous weapons systems and military communication networks could plausibly lead to injury, violation of human rights, or other significant harms, this fits the definition of an AI Hazard. There is no indication of realized harm or incident yet, so it is not an AI Incident. It is not merely complementary information because the main focus is on the potential risks inherent in these AI-enabled military technologies, not on responses or updates to past events. It is not unrelated because the AI systems and their potential impacts are central to the announcement.