NHS England Restricts Open-Source Code Access Over AI Vulnerability Fears


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

NHS England is making most of its public code repositories private due to concerns that advanced AI models, such as Anthropic's Mythos, could autonomously identify and exploit software vulnerabilities. This precautionary policy aims to mitigate potential cybersecurity risks posed by AI-driven vulnerability scanning, with no actual harm reported yet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Mythos) that can analyze software for vulnerabilities, which is the basis for NHS England's policy change. However, there is no indication that the AI system has directly or indirectly caused any harm such as security breaches or data loss. The event is about the potential risk of AI-enabled hacking, which could plausibly lead to an AI Incident in the future if exploited. Since no harm has occurred yet, and the main focus is on the potential threat and the policy response, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its capabilities are central to the event.[AI generated]
AI principles
Robustness & digital security; Safety

Industries
Digital security; Government, security, and defence

Severity
AI hazard

Business function
ICT management and information security

AI system task
Event/anomaly detection


Articles about this incident or hazard


NHS England rushes to hide software over AI hacking fears

2026-05-01
New Scientist

NHS to close-source GitHub repos over AI, security concerns

2026-05-05
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of an AI system (Anthropic's Mythos) capable of large-scale code ingestion and vulnerability detection. The NHS's decision to restrict public access to its code repositories is a direct response to the plausible risk that such AI systems could be used to identify and exploit security vulnerabilities, which could lead to harm such as breaches of confidentiality or disruption of healthcare services. Since no actual harm or incident has been reported, and the action is a precautionary measure, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the potential risk and organizational response to AI capabilities, and it is not unrelated as it directly concerns AI's impact on cybersecurity practices.

NHS England withdraws public software over AI hacking fears

2026-05-05
Computing
Why's our monitor labelling this an incident or hazard?
An AI system (Mythos) is explicitly mentioned as capable of identifying software vulnerabilities, which could plausibly lead to cyberattacks on critical infrastructure like NHS systems. However, the article does not report any realized harm or incidents caused by AI exploitation. The event is about a precautionary policy change to mitigate potential AI-related risks, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. The focus is on plausible future harm rather than actual harm or a response to a past incident.

Is this the end? NHS is apparently shutting down most of its open source repos. Here's why

2026-05-03
Neowin
Why's our monitor labelling this an incident or hazard?
The NHS's action is motivated by the potential misuse of AI systems like Mythos, which can find and exploit software vulnerabilities at scale. The AI system's involvement is in its use and potential misuse to attack publicly available code repositories. While no actual harm has been reported yet, the credible risk of future harm (e.g., cybersecurity breaches) is recognized and is driving policy changes. This fits the definition of an AI Hazard, as the event involves plausible future harm due to AI capabilities, but no realized harm has been described. The event is not merely general AI news or a complementary update but a concrete organizational response to a credible AI-driven threat.

NHS England May Make Public GitHub Repositories Private Over AI Concerns

2026-05-04
Linuxiac
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-assisted vulnerability scanning as the reason for NHS England's move to make repositories private, indicating AI system involvement. The event stems from the use of AI models to analyze code, which could plausibly lead to security vulnerabilities being exploited, thus potentially causing harm to critical infrastructure or data security. However, since no actual incident or harm has been reported, and the change is a preventive policy measure, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the potential risk and policy response rather than updates on past incidents or governance responses. It is not unrelated because AI is central to the concern and policy change.

Mythos AI hacking fears prompt UK health service crackdown on open-source code

2026-05-05
Cybernews
Why's our monitor labelling this an incident or hazard?
The event explicitly references advanced AI models capable of large-scale code ingestion and reasoning, indicating the involvement of AI systems. The NHS's policy change is motivated by concerns that these AI capabilities could be exploited to identify vulnerabilities in publicly available code, potentially leading to cybersecurity breaches. However, there is no indication that any actual harm or incident has occurred yet. The event is about mitigating a plausible future risk stemming from AI capabilities, fitting the definition of an AI Hazard. It is not an AI Incident because no realized harm has been reported, nor is it Complementary Information or Unrelated, as the focus is on AI-related cybersecurity risks and organizational response.