Slovakia Plans National AI Cybersecurity Laboratory

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Slovakia's Ministry of Investments, Regional Development and Informatization is planning a National AI Cybersecurity Laboratory (AI CyberLab) to develop, test, and validate AI solutions for protecting critical infrastructure. The initiative aims to enhance national resilience against cyber threats, with funding from national and EU sources. No AI-related incident has occurred yet. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and intended use of AI systems for cybersecurity, but no actual harm or incident has occurred yet. The article focuses on the planning and consultation phase to build AI capabilities to prevent cyber threats and improve security. Although no realized harm or incident is described, the project could plausibly lead to AI-related impacts in the future, so it qualifies as an AI Hazard. It is not Complementary Information because it is not an update or response to an existing incident or hazard, nor is it unrelated, since it clearly involves AI systems and their potential impact on critical infrastructure security. [AI generated]
Industries
Digital security; Government, security, and defence

Severity
AI hazard

Business function:
ICT management and information security

AI system task:
Event/anomaly detection


Articles about this incident or hazard

Ministry of Investments launches public consultation on the national AI cybersecurity laboratory

2026-02-20
Hospodarske Noviny
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of cybersecurity, but it is about the development and planning of an AI laboratory rather than an incident or hazard. There is no indication of any harm caused or plausible harm occurring yet. The article is primarily about a governance and development initiative, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments and governance responses without reporting an AI Incident or AI Hazard.

The state is preparing an AI laboratory for cybersecurity

2026-02-20
trend.sk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as the laboratory will develop and test AI solutions for cybersecurity, which is relevant to AI system development and use. However, since the project is still in the planning stage and no harm or incident has occurred or is imminent, this does not qualify as an AI Incident or AI Hazard. The article serves as complementary information about AI governance and ecosystem development, providing context on efforts to improve AI safety and cybersecurity resilience.

Slovakia is preparing a National Artificial Intelligence Laboratory for Cybersecurity

2026-02-20
sita.sk
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of AI systems for cybersecurity, but no actual harm or incident has occurred yet. The article focuses on the planning and consultation phase to build AI capabilities to prevent cyber threats and improve security. Although no realized harm or incident is described, the project could plausibly lead to AI-related impacts in the future, so it qualifies as an AI Hazard. It is not Complementary Information because it is not an update or response to an existing incident or hazard, nor is it unrelated, since it clearly involves AI systems and their potential impact on critical infrastructure security.

Slovakia is arming itself against hackers. The state will build an elite artificial intelligence laboratory

2026-02-20
Živé.sk
Why's our monitor labelling this an incident or hazard?
The event involves the planned development and use of AI systems for cybersecurity to protect critical infrastructure. Since the AI systems are not yet operational and no harm has occurred, this constitutes a risk-management initiative rather than a realized incident. It nonetheless fits the definition of an AI Hazard, because the future development and deployment of these AI systems could plausibly lead to AI-related impacts on critical infrastructure security.

Slovakia is preparing a National Artificial Intelligence Laboratory for Cybersecurity

2026-02-20
Omediach.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of cybersecurity but is about the development and planning phase of a national AI laboratory. There is no indication of any direct or indirect harm caused by AI systems, nor any plausible immediate risk of harm. The article is primarily about a governance and capacity-building initiative to support safe AI use in cybersecurity, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem development without describing an incident or hazard.

Slovakia is preparing a National Artificial Intelligence Laboratory for Cybersecurity

2026-02-21
TOUCHIT
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of cybersecurity and their future development and deployment. However, the project is still in a preparatory and planning phase, with no current AI system malfunction, misuse, or harm. The article discusses potential future benefits and risks but does not report any actual AI-related harm or incident. It therefore fits the definition of an AI Hazard: the development and use of AI in cybersecurity could plausibly lead to incidents in the future, but none has yet occurred.