AI Adoption in Indian Cybersecurity Outpaces Zero Trust Readiness, Raising Future Risk


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Indian businesses are rapidly adopting AI-driven cybersecurity tools, but a Zoho report finds that one in three firms lacks a Zero Trust framework and basic identity controls. This gap creates significant vulnerabilities, increasing the risk of future insider threats and breaches despite high confidence in AI's protective capabilities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves AI systems in the context of cybersecurity tools and their deployment. It identifies existing vulnerabilities and the potential for AI-driven security tools to either mitigate or exacerbate risks. However, no actual harm or security breach caused by AI systems is reported. The discussion centers on the potential for future harm due to gaps in security readiness despite AI adoption, which fits the definition of an AI Hazard. There is no indication of a realized AI Incident or a complementary information update about a past incident. Therefore, the event is best classified as an AI Hazard, reflecting the plausible future risk of harm due to the current security gaps in AI adoption.[AI generated]
AI principles
Robustness & digital security; Privacy & data governance

Industries
Digital security

Affected stakeholders
Business

Harm types
Economic/Property; Reputational

Severity
AI hazard

Business function
ICT management and information security

AI system task
Event/anomaly detection


Articles about this incident or hazard


AI enthusiasm outpaces security readiness as one in three Indian firms lack Zero Trust, Zoho Report shows

2026-05-05
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in cybersecurity use and discusses the potential risks due to insufficient foundational security measures. However, it does not report any realized harm or incident caused by AI systems, nor does it describe a specific event where AI malfunction or misuse led to harm. The concerns are about plausible future risks if foundational controls are not improved, but the article mainly presents survey findings and expert opinions rather than a concrete AI hazard event. Therefore, it is best classified as Complementary Information, providing context and insight into the AI ecosystem and security readiness without reporting a new AI Incident or AI Hazard.

Survey Finds U.S. Workforce Faces Highest AI Security Belief-to-Deployment Gap Globally Despite Leading in Investment Intent

2026-05-05
The Montreal Gazette
Why's our monitor labelling this an incident or hazard?
The article centers on survey findings about AI belief versus deployment readiness in workforce security, highlighting potential risks due to under-deployment of AI security tools. However, it does not describe any actual incident where AI systems caused harm or malfunctioned, nor does it report a specific event where AI use led to realized harm. The discussion is about potential future risks and the need for architectural improvements to enable effective AI security deployment. Therefore, the event fits the category of Complementary Information, as it provides context and insights into AI adoption challenges and security risks without reporting a new AI Incident or AI Hazard.

AI Enthusiasm Outpaces Security Readiness as One in Three Indian Firms Lack Zero Trust, Zoho Report Shows

2026-05-05
LatestLY
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the context of cybersecurity tools and their deployment. It identifies existing vulnerabilities and the potential for AI-driven security tools to either mitigate or exacerbate risks. However, no actual harm or security breach caused by AI systems is reported. The discussion centers on the potential for future harm due to gaps in security readiness despite AI adoption, which fits the definition of an AI Hazard. There is no indication of a realized AI Incident or a complementary information update about a past incident. Therefore, the event is best classified as an AI Hazard, reflecting the plausible future risk of harm due to the current security gaps in AI adoption.

AI for cybersecurity race in India exposes major Zero Trust gaps: Zoho

2026-05-05
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used in cybersecurity for threat detection and response, indicating AI system involvement. However, it does not describe any actual harm or incident caused by AI malfunction or misuse. Instead, it highlights the risk that current gaps in security controls combined with AI adoption could plausibly lead to future harms such as insider threats or breaches. Therefore, the event is best classified as an AI Hazard, as it concerns plausible future harm due to AI system use amid insufficient foundational security measures.