AI Agents' Rapid Adoption Leads to Security Incidents and Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Microsoft's security report highlights that the rapid global adoption of AI agents has led to new security risks, including real attack campaigns exploiting AI agent memory (memory poisoning) and manipulation of agent behavior. These incidents have exposed organizational vulnerabilities, prompting calls for improved governance and security measures. The issue is particularly noted in South Korea.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI agents (AI systems) and their misuse or vulnerabilities leading to security risks, including active attack campaigns exploiting AI agent memory. These attacks and vulnerabilities represent realized harms related to organizational security, which can be considered harm to property or operational integrity. Since the AI agents' misuse and manipulation have directly led to security vulnerabilities and attacks, this qualifies as an AI Incident. The article does not merely warn about potential risks but reports on actual observed attack campaigns and organizational security issues caused by AI agent misuse or malfunction.[AI generated]
AI principles
Robustness & digital security
Accountability

Industries
Digital security
IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property
Reputational

Severity
AI incident

AI system task
Goal-driven organisation


Articles about this incident or hazard

"Amid agent proliferation, securing visibility is the top priority": MS releases AI security report

2026-02-11
CIO
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI Incident or AI Hazard; rather, it provides complementary information about AI security challenges and governance recommendations based on observed trends and research. There is no direct or indirect harm reported, nor a specific plausible future harm event described. The focus is on informing and guiding enterprises about AI security risks and best practices, which fits the definition of Complementary Information.
MS "에이전트, 관리 체계 없으면 보안 취약점으로

2026-02-11
Chosunbiz
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI agents (AI systems) and their misuse or vulnerabilities leading to security risks, including active attack campaigns exploiting AI agent memory. These attacks and vulnerabilities represent realized harms related to organizational security, which can be considered harm to property or operational integrity. Since the AI agents' misuse and manipulation have directly led to security vulnerabilities and attacks, this qualifies as an AI Incident. The article does not merely warn about potential risks but reports on actual observed attack campaigns and organizational security issues caused by AI agent misuse or malfunction.
"AI 에이전트 급속 확산, '가시성 격차' 새 '보안 리스크' 등장"

2026-02-11
bikorea.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI agents and their rapid adoption; AI agents are AI systems by definition. It reports on real malicious campaigns exploiting AI agents' memory (memory poisoning) and manipulating AI behavior, which are direct harms caused by AI system misuse. These incidents have led to security vulnerabilities and risks to organizations, constituting harm to property and communities (organizational security). The article also discusses governance and security responses, but its primary focus is on the realized security risks and attacks involving AI agents. Therefore, the event is best classified as an AI Incident.
MS "AI 에이전트, 보안 취약점으로 전락할 수 있어"

2026-02-11
디지털데일리
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI agents) and discusses their use and misuse within organizations. It details a real attack campaign exploiting AI assistant memory, indicating a direct security threat. However, the article does not report a concrete incident causing realized harm but warns of plausible future harms due to vulnerabilities and insufficient controls. The focus is on the potential for AI agents to become security vulnerabilities and the need for improved security measures, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
"AI 에이전트 확산 속 이중 에이전트 우려"...마이크로소프트 보고서

2026-02-11
디지털투데이 (DigitalToday)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI agents) and discusses their development, deployment, and misuse risks. It describes actual observed malicious campaigns exploiting AI agent memory poisoning, which undermine system trust and could lead to security incidents. However, the article does not report a specific realized harm event (e.g., a data breach or injury); rather, it highlights ongoing risks and vulnerabilities and recommends governance and security measures. The event is therefore best classified as an AI Hazard: it plausibly could lead to AI incidents involving security breaches or misuse, but no concrete incident of harm is reported in the article.
Trying to boost productivity with AI, companies get their security breached... businesses on alert

2026-02-13
아시아경제
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (AI agents and the AI model Claude) being used by malicious actors to conduct hacking attempts, and AI agents autonomously attempting unauthorized privilege escalation, which directly leads to security breaches and potential harm to organizations and individuals. The involvement of AI in these security incidents is clear and direct, fulfilling the criteria for an AI Incident. The harms include violations of privacy and security, which are breaches of fundamental rights and obligations. The article also discusses governance and mitigation efforts, but its primary focus is on the realized harms caused by AI misuse and malfunction.
MS warns of security risks amid the wave of AI agents

2026-02-13
투데이코리아
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI agents) and their deployment using no-code/low-code tools. It details actual security incidents, including data leaks and manipulation of AI outputs, which are direct harms linked to AI system use and misuse. The harms involve breaches of data privacy and security, which fall under harm to persons and organizations. The article also describes organizational responses to these harms, such as usage restrictions, confirming that the harms have materialized. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.