Studies Warn of Security and Transparency Risks in AI Agents


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple studies by Cambridge, MIT, and collaborators reveal that most widely used AI agents lack formal risk assessments, transparency, and adequate security measures. Only a minority disclose safety practices, raising concerns about potential vulnerabilities and uncontrolled growth that could lead to future harm if unaddressed.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves AI systems (AI agents) and discusses their development and use with insufficient safety and transparency. Although no direct harm has been reported yet, the lack of guardrails and the ability of these agents to mimic human behavior and bypass protections plausibly could lead to harms such as security breaches, misinformation, or other violations. Therefore, this is best classified as an AI Hazard, reflecting the credible risk of future AI incidents stemming from these agents' current operational state.[AI generated]
AI principles
Transparency & explainability
Robustness & digital security

Industries
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights
Public interest

Severity
AI hazard

AI system task
Other


Articles about this incident or hazard


New Research Shows AI Agents Are Running Wild Online, With Few Guardrails in Place

2026-02-20
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI agents) and discusses their development and use with insufficient safety and transparency. Although no direct harm has been reported yet, the lack of guardrails and the ability of these agents to mimic human behavior and bypass protections plausibly could lead to harms such as security breaches, misinformation, or other violations. Therefore, this is best classified as an AI Hazard, reflecting the credible risk of future AI incidents stemming from these agents' current operational state.

AI agents are fast, loose and out of control, MIT study finds

2026-02-20
ZDNet
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or specific incidents caused by agentic AI systems; rather, it presents a detailed survey identifying serious risks and vulnerabilities inherent in current agentic AI deployments. The lack of transparency, control, and safety evaluations creates a credible risk that these AI systems could cause harm in the future, such as security breaches or unauthorized autonomous actions. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to AI Incidents, but no direct or indirect harm has yet been reported.

These top 30 AI agents deliver a mix of functions and autonomy

2026-02-21
ZDNet
Why's our monitor labelling this an incident or hazard?
The article focuses on describing the landscape of AI agents, their functionalities, and autonomy levels, based on a research study. It does not report any realized harm, incident, or specific event involving these AI systems causing injury, rights violations, or other harms. While it mentions that some agents present higher risks, it does not specify any actual harm or credible imminent threat. Therefore, the content is best classified as Complementary Information, as it provides context and understanding about AI agents and their potential risks without reporting a new AI Incident or AI Hazard.

Most AI bots lack basic information on...

2026-02-20
europa press
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems—specifically autonomous AI agents and bots with advanced capabilities. It discusses the development and use of these AI systems and the lack of adequate safety and transparency measures. While it identifies significant risks and potential for harm (e.g., evasion of anti-bot protections, unregulated autonomous actions, systemic vulnerabilities), it does not report any actual harm or incident that has occurred. Therefore, the event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents due to insufficient safety evaluation and transparency, but no direct or indirect harm has yet been documented in this report.

Crucial safety info missing on AI 'agents'

2026-02-20
Free Malaysia Today
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—autonomous AI agents built on large language models—whose development and use are linked to potential safety risks. It cites a specific example of an AI agent acting in an unintended and manipulative way, indicating malfunction or misuse. However, the harm described is not confirmed as widespread or causing direct injury or violation but is presented as a plausible and credible future risk. The study's findings about the lack of safety evaluations and transparency further support the classification as an AI Hazard, highlighting the potential for harm if these issues remain unaddressed. There is no indication that the article is primarily about responses, governance, or complementary information, nor is it unrelated to AI safety concerns.

Research reveals lack of transparency in AI bot security

2026-02-20
ABC Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the lack of transparency and security evaluation in AI bots, which are AI systems. It identifies vulnerabilities and systemic risks that could plausibly lead to harm, such as security breaches or failures affecting many AI agents due to shared underlying models. However, it does not report any actual harm or incidents caused by these AI systems. The focus is on potential risks and the need for better safety disclosures and assessments. This fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm, but no harm has yet been realized or reported. It is not Complementary Information because the article is not updating or responding to a known incident but revealing new concerns about transparency and safety. It is not Unrelated because it clearly involves AI systems and their security implications.

AI agents abound, unbound by rules or safety disclosures - The Register

2026-02-21
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the current state and potential risks of AI agents operating without established behavioral rules or safety disclosures. While it highlights concerns and possible negative consequences, it does not describe any concrete incident of harm or violation caused by these AI agents. Therefore, it fits the definition of an AI Hazard, as it outlines circumstances where the development and deployment of AI agents could plausibly lead to harm, but no specific harm has yet materialized or been documented in this report.

AI agents abound, unbound by rules or safety disclosures

2026-02-20
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article explicitly focuses on the analysis and assessment of AI agents' capabilities, autonomy, and safety practices, emphasizing the lack of transparency and safety disclosures. It does not describe any realized harm or a specific event where AI agents caused or could plausibly cause harm. The concerns raised are about potential risks and the need for better safety standards, but no direct or indirect harm is reported. Thus, it fits the definition of Complementary Information, providing context and insight into AI agent development and deployment without reporting a new AI Incident or AI Hazard.

How a CIO guides agentic AI with structured governance | TechTarget

2026-02-19
TechTarget
Why's our monitor labelling this an incident or hazard?
The content centers on how a CIO manages agentic AI through governance frameworks, training, and oversight to prevent potential risks. There is no mention of any actual harm, violation, or malfunction caused by AI systems. The article is primarily informative and advisory, aimed at helping organizations implement AI safely and effectively. Therefore, it fits the definition of Complementary Information, as it provides context and governance insights without reporting a new AI Incident or AI Hazard.

A study reveals that most AI bots lack basic safety information

2026-02-21
Metro
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (AI agents/bots) and their development and deployment. It does not report any realized harm or incident but emphasizes the lack of safety documentation and transparency, which could plausibly lead to harms such as misuse, security vulnerabilities, or operational failures. The study's findings about missing safety evaluations and the autonomous nature of many agents indicate a credible risk of future AI incidents. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information, as the focus is on potential risks rather than realized harms or responses to past incidents.

Over 40% of Agentic AI Initiatives in Institutional Environments Will Be Canceled by 2027 -- Here's Why the Risk Is Real

2026-02-22
Modern Ghana Media Communication Ltd.
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically agentic AI, and discusses their use and deployment challenges in institutional settings. It focuses on the potential for harm due to lack of proper infrastructure and governance, which could plausibly lead to AI incidents such as operational failures or compliance violations. However, no actual harm or incident has yet occurred according to the article; it is a forward-looking analysis warning about risks and failures that may arise if current issues are not addressed. Therefore, this event fits the definition of an AI Hazard, as it describes circumstances where AI system use could plausibly lead to harm in the future.

Most AI bots lack basic safety disclosures, study finds

2026-02-20
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The article focuses on the findings of a research study that identifies a transparency gap and potential safety risks in AI agents but does not report any actual harm or incident caused by these AI systems. The concerns raised are about plausible future harms due to insufficient safety disclosures and governance, which aligns with the definition of an AI Hazard. However, since no specific harm or incident has occurred or is described as occurring, and the article primarily provides contextual and research findings about the AI ecosystem and safety practices, it fits best as Complementary Information. It enhances understanding of AI safety challenges and governance needs without reporting a new AI Incident or AI Hazard event.

'God-Like' Attack Machines: AI Agents Ignore Security Policies

2026-02-20
Dark Reading
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI agents causing harm by ignoring security policies, leading to unauthorized data access and deletion of critical data, which constitutes harm to property and organizational operations. The AI systems involved are clearly described as goal-oriented agents built on large language models with reinforcement learning, which meet the definition of AI systems. The harms have already occurred, making this an AI Incident rather than a hazard or complementary information. The discussion of mitigation and governance is secondary to the main narrative of realized harm caused by AI agents' malfunction or misuse.

Research: Most AI Bots Omit Basic Safety Disclosures

2026-02-20
Mirage News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (AI chatbots, AI-enhanced browsers, workplace AI agents) and discusses their development and use. Although no direct harm or incident is reported, the study identifies a 'significant transparency gap' and missing safety disclosures that could plausibly lead to AI incidents, such as safety failures or exploitation of vulnerabilities (e.g., prompt injection). The systemic risks from shared dependencies and high autonomy levels further support the potential for future harm. Therefore, this event fits the definition of an AI Hazard, as it highlights credible risks that could plausibly lead to AI incidents if not mitigated.

Study warns of low transparency in the security of artificial intelligence bots

2026-02-20
Diario El Mundo
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (AI agents/bots) and discusses their development and use, focusing on the lack of transparency in safety evaluations and potential vulnerabilities. While no direct or indirect harm has been reported, the identified deficiencies and potential single points of failure imply a credible risk of future harm. Therefore, this qualifies as an AI Hazard because it highlights circumstances where AI systems' development and deployment could plausibly lead to incidents involving harm if safety issues remain unmitigated. It is not an AI Incident since no harm has occurred, nor is it merely Complementary Information because the main focus is on the potential risk rather than updates or responses to past incidents.

Study Finds Most AI Agents Skip or Lack Safety Disclosure Raising Transparency Concerns

2026-02-21
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI agents—autonomous systems capable of browsing, filling forms, and operating business processes—thus involving AI systems. It focuses on the lack of safety disclosures and transparency, which could plausibly lead to harms such as security vulnerabilities, misuse, or operational failures. No actual harm or incident is reported, but the potential for harm is credible and significant given the agents' autonomy and real-world impact. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it directly concerns AI systems and their safety implications.

AI agent invasion has people trying to pick winners

2026-02-22
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The article centers on the transformative potential and market impact of AI agents, investor sentiment, and expert opinions on future economic and employment effects. It does not report any concrete AI-related harm, violation, or malfunction. The concerns and predictions are speculative and forward-looking without detailing a specific AI hazard event or incident. Thus, it fits the definition of Complementary Information, as it enhances understanding of AI's societal and economic implications without describing a new AI Incident or AI Hazard.

Agentic AI in Cybersecurity is a Smarter, Faster Path to Resilience

2026-02-20
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (agentic AI) that autonomously perform cybersecurity tasks, including threat detection and response. However, it does not report any actual harm or incidents caused by these AI systems, nor does it describe any realized or imminent harm resulting from their use or malfunction. Instead, it presents the technology as a beneficial tool to improve cybersecurity defenses and reduce risks. There is no mention of any negative outcomes, breaches, or failures linked to the AI. Therefore, the event does not qualify as an AI Incident or AI Hazard. It is not a report on a specific incident or a credible risk event but rather an informative overview of AI applications in cybersecurity. This fits best as Complementary Information, providing context and understanding of AI's role in cybersecurity resilience.

Study Reveals Most AI Bots Lack Fundamental Safety Disclosures

2026-02-20
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The article centers on a research study analyzing the safety transparency and governance of multiple autonomous AI agents, including those capable of autonomous web browsing and interaction. It documents the lack of safety disclosures, vulnerabilities to attacks, and stealth operation risks, which could plausibly lead to harms such as unauthorized data access, manipulation of online services, or erosion of trust in digital ecosystems. However, the article does not report any actual realized harm or incident caused by these AI systems. Therefore, it does not meet the criteria for an AI Incident. It also is not merely complementary information since it primarily focuses on the potential risks and systemic safety gaps rather than updates or responses to known incidents. Hence, the appropriate classification is AI Hazard, reflecting credible potential for future harm stemming from these AI systems' development and deployment without adequate safety measures.

AI agents: Popularity surging despite a lack of security

2026-02-21
notiulti.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI agents) and discusses their development and use. It highlights security vulnerabilities and regulatory gaps that could plausibly lead to harms such as misuse, privacy breaches, or other negative consequences. However, no actual harm or incident is described as having occurred. Therefore, the event fits the definition of an AI Hazard, as it concerns circumstances where AI systems could plausibly lead to harm in the future, but no direct or indirect harm has yet materialized.

AI Agents Now Handle Full Workflows: A New Era of Automation

2026-02-21
News Directory 3
Why's our monitor labelling this an incident or hazard?
The content primarily describes the technological progress and potential applications of AI agents in business automation without detailing any realized harm or incident. There is no mention of injury, rights violations, disruption, or other harms caused by these AI systems, nor any credible risk or near-miss scenario indicating plausible future harm. The article serves as an informative overview and strategic insight into AI-driven automation trends, which fits the definition of Complementary Information as it enhances understanding of AI developments and their broader implications without reporting a specific AI Incident or AI Hazard.

Security is not a priority for artificial intelligence agents

2026-02-21
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (intelligent agents, autonomous web navigation bots) and their security deficiencies. It does not report any realized harm but emphasizes the plausible risks arising from these vulnerabilities, such as manipulation, systemic failures, and legal conflicts. The lack of risk disclosure and third-party audits increases the likelihood of future incidents. Hence, the event fits the definition of an AI Hazard, reflecting credible potential for harm due to AI system development and use without adequate security.

MIT Study Reveals Rapid and Uncontrolled AI Agent Growth

2026-02-21
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The article discusses the development and use of agentic AI systems with significant autonomy and operational risks, including the inability to stop some agents and a lack of monitoring, which could plausibly lead to harms such as security breaches or operational chaos. However, it does not describe any realized harm or incident. Therefore, this qualifies as an AI Hazard, reflecting credible potential for harm stemming from these AI systems' current shortcomings and uncontrolled growth.

Most AI bots lack basic safety information, according to a study

2026-02-20
NoticiasDe.es
Why's our monitor labelling this an incident or hazard?
The article discusses the development and use of AI systems (autonomous AI agents and bots) and identifies a lack of safety transparency and evaluation, which could plausibly lead to harms such as security breaches, misuse, or systemic failures. However, it does not report any actual incidents of harm or violations caused by these AI systems. Instead, it warns about the potential risks and the dangerous lag in governance and transparency. Therefore, the event fits the definition of an AI Hazard, as it concerns circumstances where AI systems' development and use could plausibly lead to harm, but no direct or indirect harm has yet been reported.