Gartner Warns of Rising Security Incidents in Generative AI Applications by 2028


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Gartner predicts that by 2028, 25% of enterprise generative AI applications will experience at least five minor security incidents annually, up from 9% in 2025, due to increased use of agent-based AI and Model Context Protocol (MCP). Risks include information leaks and inadequate security controls, prompting calls for stricter safeguards.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (generative AI applications) and discusses risks related to their use and integration with external components via protocols like MCP. However, the harms described are potential and forecasted rather than actualized. No specific AI Incident has occurred yet; instead, the article highlights plausible future security risks and advises on preventive measures. Therefore, this qualifies as an AI Hazard, as it concerns credible risks that could plausibly lead to AI Incidents in the future if not properly managed.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Human or fundamental rights
Reputational

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard


In an era where AI operates systems, how will SaaS change? freee shows the challenges and potential of MCP

2026-04-09
ITmedia
Why's our monitor labelling this an incident or hazard?
The article describes AI systems (AI agents) interacting with SaaS platforms via MCP, which qualifies as AI system involvement. However, it does not report any incident or hazard involving harm or plausible harm resulting from this technology. Instead, it discusses ongoing developments, challenges, and potential future changes in business models, which aligns with providing contextual and ecosystem information. Therefore, this is Complementary Information rather than an AI Incident or AI Hazard.

Security incidents in a quarter of generative AI apps by 2028, with risk expanding as MCP spreads, Gartner predicts

2026-04-10
ITmedia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI applications) and discusses risks related to their use and integration with external components via protocols like MCP. However, the harms described are potential and forecasted rather than actualized. No specific AI Incident has occurred yet; instead, the article highlights plausible future security risks and advises on preventive measures. Therefore, this qualifies as an AI Hazard, as it concerns credible risks that could plausibly lead to AI Incidents in the future if not properly managed.

Hakuhodo DY ONE launches MCP adoption support service to strengthen governance and safety of AI use

2026-04-10
MarkeZine
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm or incident caused by AI systems but rather discusses a new service aimed at addressing security and governance challenges associated with AI agent integration. The focus is on preventing potential risks and improving operational control, which aligns with providing complementary information about societal and governance responses to AI-related challenges. Therefore, it does not qualify as an AI Incident or AI Hazard but fits the definition of Complementary Information.

Gartner predicts that by 2028, 25% of generative AI apps will experience five or more security incidents per year

2026-04-13
CodeZine
Why's our monitor labelling this an incident or hazard?
The article discusses predicted future security incidents related to generative AI applications, indicating plausible risks of harm (security breaches) due to AI system use. However, it does not report any actual realized harm or incidents occurring at present. Therefore, it fits the definition of an AI Hazard, as it describes circumstances where AI system use could plausibly lead to security incidents causing harm in the future. It is not an AI Incident because no actual harm has yet occurred, nor is it Complementary Information since it is not updating or responding to a past incident but forecasting future risks. It is clearly related to AI systems, specifically generative AI applications and agent-based AI.

Security incidents in generative AI apps forecast to rise 5%: "MCP countermeasures are key," says Gartner

2026-04-10
ZDNet Japan
Why's our monitor labelling this an incident or hazard?
The article discusses a credible forecast of increased security incidents involving AI systems in the future, specifically related to generative AI applications and MCP-based agent AI. However, it does not describe any realized harm or actual incidents but rather warns about plausible future risks and advises on preventive measures. Therefore, this constitutes an AI Hazard, as it concerns potential future harms stemming from the use and development of AI systems.

A quarter of generative AI apps will "quietly break": Gartner flags the pitfalls of the MCP era

2026-04-12
@IT
Why's our monitor labelling this an incident or hazard?
The article describes potential risks and minor incidents related to generative AI systems that could lead to information leakage, which is a form of harm to property or communities. However, it does not document any actual harm or incident occurring but rather discusses the plausible risks and challenges in managing them. Therefore, this fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to incidents involving information leaks if not properly controlled.

Enterprise GenAI applications will face rising security incidents as adoption accelerates

2026-04-14
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The article focuses on predictions and warnings about plausible future security incidents involving AI systems, specifically generative AI in enterprises. It does not describe any realized harm or specific events where AI systems have caused security breaches or other harms. Therefore, it fits the definition of an AI Hazard, as it outlines credible risks that could plausibly lead to AI Incidents but does not report actual incidents.

Gartner Predicts GenAI Apps to Face 25% Security Incident Risk by 2028

2026-04-13
BW Businessworld
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or specific security incidents that have already occurred. Instead, it presents predictions and warnings about plausible future security risks associated with enterprise GenAI systems. The focus is on potential vulnerabilities and the need for oversight and security protocols to prevent incidents. Therefore, this constitutes an AI Hazard, as it highlights credible risks that could plausibly lead to AI-related security incidents in the future, but no actual incidents are reported yet.

MCP security: Logging and runtime security measures

2026-04-10
IT Security News
Why's our monitor labelling this an incident or hazard?
The content centers on security practices and risk mitigation related to AI systems (MCP servers executing AI agent commands), but it does not report any actual harm, malfunction, or realized threat. It is primarily about preventing potential risks and improving security, which aligns with providing complementary information rather than describing an incident or hazard.
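The logging practice that article describes can be illustrated with a minimal sketch. The names below (`audit_tool`, `read_file`, the `/srv/shared/` allow-list) are hypothetical, not drawn from the article or any MCP implementation; the sketch only shows the general pattern of auditing each tool invocation an agent makes before the command executes.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-audit")


def audit_tool(fn):
    """Wrap a tool handler so every invocation is logged before it runs."""
    def wrapper(**kwargs):
        record = {
            "tool": fn.__name__,
            "args": kwargs,
            "ts": datetime.now(timezone.utc).isoformat(),
        }
        # Structured audit log: who-called-what survives even if the call fails.
        log.info("tool call: %s", json.dumps(record))
        return fn(**kwargs)
    return wrapper


@audit_tool
def read_file(path: str) -> str:
    # Hypothetical runtime control: reject paths outside an allow-listed root.
    if not path.startswith("/srv/shared/"):
        raise PermissionError(f"path not allow-listed: {path}")
    return f"<contents of {path}>"
```

Because the wrapper logs before dispatch and the allow-list check runs inside the handler, a disallowed call still leaves an audit trail, which is the point of combining logging with runtime enforcement.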