AI-Driven Cyberattacks and Military Integration Raise Security Concerns in Europe


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google warned of a surge in AI-powered cyberattacks exploiting software vulnerabilities, including bypassing two-factor authentication, and highlighted the growing use of generative AI by cybercriminals. Simultaneously, European militaries, notably Germany and Ukraine, are rapidly integrating AI into weapons and battlefield systems, raising concerns about AI-driven harm in both cybersecurity and military contexts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI was used by a known cybercrime group to find a new software vulnerability and create an exploit tool, which is a direct use of AI in malicious operations. The target is critical infrastructure software, and the attack was only stopped before widespread damage, indicating a direct link between AI use and a serious cybersecurity threat. The involvement of AI in the development and use phases of the attack, and the resulting harm or near-harm to critical infrastructure, fits the definition of an AI Incident. The report also discusses the broader implications and ongoing risks, but the primary event is the AI-enabled cyberattack attempt, which is a realized harm scenario or very close to it.[AI generated]
AI principles
Robustness & digital security; Safety

Industries
Digital security; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Economic/Property; Human or fundamental rights; Physical (death)

Severity
AI incident

AI system task
Content generation; Goal-driven organisation


Articles about this incident or hazard


Moving toward a "smart government assistant" with AI development in the ministries

2026-05-15
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article discusses the use and development of AI systems (intelligent assistants for government and AI education for students) but does not describe any harm or incidents caused by these systems. There is no indication of injury, rights violations, disruption, or other harms. The content is primarily informative about AI adoption and plans, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without reporting an incident or hazard.

Em Rooz

2026-05-14
iran-emrooz.net
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used by a known cybercrime group to find a new software vulnerability and create an exploit tool, which is a direct use of AI in malicious operations. The target is critical infrastructure software, and the attack was only stopped before widespread damage, indicating a direct link between AI use and a serious cybersecurity threat. The involvement of AI in the development and use phases of the attack, and the resulting harm or near-harm to critical infrastructure, fits the definition of an AI Incident. The report also discusses the broader implications and ongoing risks, but the primary event is the AI-enabled cyberattack attempt, which is a realized harm scenario or very close to it.

Moving toward a "smart government assistant" with AI development in the ministries

2026-05-15
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article primarily provides information about AI development and deployment plans, educational initiatives, and government strategies without indicating any direct or indirect harm caused by AI systems. There is no mention of injury, rights violations, disruption, or other harms linked to AI use or malfunction. Nor does it highlight any credible risk of future harm. Therefore, the content fits best as Complementary Information, offering context and updates on AI ecosystem developments rather than describing an AI Incident or AI Hazard.

Cinema needs a common language to confront low-value AI content

2026-05-15
Javan Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of film production and addresses concerns about their impact. However, it does not describe any realized harm or incident caused by AI, nor does it report a specific plausible future harm event. The main focus is on the introduction of a transparency standard and industry discussions about AI's role and risks, which constitutes a governance and societal response. Therefore, this is best classified as Complementary Information, as it provides context and updates on the AI ecosystem in cinema without reporting an AI Incident or AI Hazard.

Google warns of a rise in AI-based cyberattacks

2026-05-14
Young Journalists Club (YJC)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously to exploit a software vulnerability and conduct large-scale cyberattacks, including bypassing security measures and automating malware behavior. These actions constitute violations of security and privacy rights, causing harm to individuals and organizations. Since the AI involvement has directly led to realized harm through cyberattacks, this qualifies as an AI Incident rather than a hazard or complementary information.

How Europe is bringing artificial intelligence into military operations

2026-05-15
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems integrated into military weapons and decision-making processes that are currently in use or being deployed, such as AI-enabled drones and battlefield management systems. These systems directly influence lethal operations and battlefield outcomes, which involve direct or indirect harm to people and communities. The presence and use of AI in these contexts meet the definition of an AI System and the harms described (potential injury or harm to persons in conflict) meet the criteria for an AI Incident. The article does not merely discuss potential future risks or general AI developments but reports on active deployment and operational use, thus constituting an AI Incident rather than a hazard or complementary information.