India's CERT-In Issues High-Severity Warning on AI-Driven Cyber Threats


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

India's cybersecurity agency CERT-In has issued a high-severity advisory warning that advanced AI systems are enabling faster, more sophisticated cyberattacks. The advisory highlights risks such as automated vulnerability detection, multi-stage attacks, and large-scale breaches, urging organisations, MSMEs, and individuals to strengthen defences against AI-powered threats. No specific incidents have been reported. [AI generated]

Why's our monitor labelling this an incident or hazard?

The advisory discusses the plausible future harms that AI-driven cyber attacks could cause, including unauthorized system access, data breaches, and financial fraud. However, it does not report any specific realized incident of harm caused by AI systems but rather warns about the credible risks and provides guidance to mitigate them. Therefore, this event fits the definition of an AI Hazard, as it concerns circumstances where AI use in cyber attacks could plausibly lead to significant harms but does not describe an actual incident of harm occurring. [AI generated]
AI principles
Privacy & data governance; Robustness & digital security

Industries
Digital security

Affected stakeholders
Business; General public

Harm types
Economic/Property; Human or fundamental rights

Severity
AI hazard

Business function
ICT management and information security

AI system task
Event/anomaly detection; Goal-driven organisation


Articles about this incident or hazard


CERT-In warns organisations, MSMEs and individuals of AI-driven cyber attack risks

2026-04-27
storyboard18.com

AI-driven cyber attacks pose new risks, warns Indian cybersecurity agency

2026-04-27
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article does not report an actual AI-driven cyber attack causing harm but rather a credible warning about the potential for such attacks to occur in the future. The involvement of AI is explicit, and the advisory focuses on the plausible risk of AI-enabled cyber threats. Therefore, this constitutes an AI Hazard, as it concerns a credible potential for harm stemming from AI use in cyber attacks.

CERT-In Warns Of Rising AI-Driven Cyber Threats Amid 'Mythos' Concerns | Science & Tech

2026-04-27
Ommcom News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems that could autonomously identify software vulnerabilities and execute complex cyberattacks, which could plausibly lead to significant harm such as breaches and disruptions. However, it does not describe any actual AI-driven cyberattack incidents that have occurred. The focus is on raising awareness and advising on mitigation strategies, indicating a credible risk rather than a realized harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

CERT-In outlines safeguards for Indian orgs, MSMEs amid Mythos AI cybersecurity risk concerns

2026-04-27
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (frontier AI models like Mythos) and their potential misuse in cybersecurity attacks. The advisory is based on a risk assessment that these AI systems could enable automated, large-scale cyberattacks causing significant harms (service disruption, data theft, fraud, impersonation). No actual harm is reported yet, but the credible potential for harm is clearly articulated. The advisory's purpose is to warn and recommend safeguards to prevent such harms. This fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to an AI Incident. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated, as the focus is on the credible risk and mitigation of AI-driven cyber threats.

Anthropic Mythos effect: Indian govt asks MSMEs, organisations to brace for AI cyber threats

2026-04-27
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Anthropic's Mythos and similar frontier AI models) that could plausibly lead to significant harms such as data breaches, financial fraud, and service disruptions through autonomous and sophisticated cyberattacks. Although no specific harm has yet occurred, the advisory explicitly warns about credible risks and recommends preventive measures. Therefore, this constitutes an AI Hazard, as the AI system's development and potential use could plausibly lead to an AI Incident in the future.

Govt Warns Indian Firms And MSMEs To Stay Alert As Mythos AI Cyber Risk Grows

2026-04-27
TimesNow
Why's our monitor labelling this an incident or hazard?
The advisory explicitly mentions AI systems with advanced autonomous capabilities in cybersecurity attack planning and execution, which could plausibly lead to significant harm including disruption of critical infrastructure (banks and enterprises). Although no specific harm has yet occurred, the credible warning about these AI capabilities and their potential misuse constitutes an AI Hazard.

CERT-In flags 'high-severity risks' from AI-driven cyber threats amid Mythos concerns - The Economic Times

2026-04-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to identify vulnerabilities and execute complex cyberattacks with minimal human intervention, which is a clear involvement of AI systems. Although no specific incident of harm is described, the advisory warns of high-severity risks and the potential for AI-driven cyberattacks to cause harm. This fits the definition of an AI Hazard, where the use or development of AI systems could plausibly lead to harm. The advisory's focus on mitigation and preparedness further supports that harm is anticipated but not yet realized in this report.

CERT-In flags 'high-severity risks' from AI-driven cyber threats amid Mythos concerns

2026-04-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems capable of independently identifying vulnerabilities and executing complex cyberattacks, which fits the definition of an AI system. The advisory warns of high-severity risks and plausible future harms from AI-driven cyberattacks, but does not describe any realized harm or incident caused by AI. Therefore, this event qualifies as an AI Hazard because it concerns credible potential harms that could plausibly arise from the use or misuse of AI systems in cyberattacks. It is not an AI Incident since no actual harm has been reported yet, nor is it merely Complementary Information or Unrelated.

CERT-In warns MSMEs on new AI risks - The Times of India

2026-04-28
The Times of India
Why's our monitor labelling this an incident or hazard?
The article discusses the potential risks and evolving threat landscape due to AI-powered cyberattacks, emphasizing plausible future harms to MSMEs. However, it does not describe any actual AI-driven cyberattack incidents or realized harm. The advisory is a warning about plausible future harm, making this an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it directly involves AI systems and their potential misuse in cyberattacks.

CERT-In Issues 'High-Severity Alert' Amid Mythos Jitters; MSMEs Urged To Shield Against AI Risks

2026-04-27
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions advanced AI systems being used to conduct sophisticated cyberattacks autonomously, which could plausibly lead to significant harm such as breaches of enterprise networks and disruption of critical infrastructure. However, the article does not report any realized harm or specific AI-driven cyber incidents occurring at this time. Instead, it is a warning and guidance issued to mitigate potential future harms. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm from AI system use in cyberattacks, without a current incident having occurred.

CERT-In Warns Indian Organisations, MSMEs, And Individuals Of AI-Driven Cyber Threats

2026-04-27
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The advisory explicitly discusses the plausible future risks of AI-enabled cyberattacks that could lead to significant harms but does not report any realized harm or incident. The involvement of AI systems is clear, as the advisory focuses on AI's role in automating and accelerating cyberattacks. Since the event concerns a credible risk of harm that could plausibly lead to an AI Incident but no actual incident has occurred, it fits the definition of an AI Hazard.

Claude Mythos AI raises cyber risk: Govt asks MSMEs to stay alert

2026-04-27
Techlusive
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Claude Mythos and similar advanced AI models) being used to conduct or facilitate cyberattacks. Although no specific harm has yet occurred or been reported in this article, the advisory warns of credible and plausible future harms to businesses, especially MSMEs, from AI-powered cyber threats. This fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to incidents involving harm to property, communities, or individuals through cybercrime. The article does not describe an actual realized incident but focuses on the potential risk and recommended mitigations, so it is not an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential harms.