AI-Integrated Browsers Expose Users to Security Vulnerabilities

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI and Microsoft have launched AI-powered web browsers, such as ChatGPT Atlas and Copilot Mode, which automate web tasks but introduce unresolved security risks. Experts report that these AI systems can be exploited through prompt injection attacks, leading to theft of sensitive user credentials and privacy breaches.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems integrated into browsers that autonomously act and learn from user data, which fits the definition of AI systems. It details how these systems' vulnerabilities could be exploited by attackers to cause harm, such as unauthorized data access and manipulation, constituting plausible future harm. Since no actual harm is reported but credible risks and vulnerabilities are outlined, this qualifies as an AI Hazard rather than an AI Incident. The focus is on potential cybersecurity threats arising from the use and malfunction of AI systems in browsers, meeting the criteria for an AI Hazard.[AI generated]
AI principles
Privacy & data governanceRobustness & digital securityRespect of human rightsAccountability

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Economic/PropertyHuman or fundamental rights

Severity
AI hazard

AI system task:
Interaction support/chatbots


Articles about this incident or hazard

Thumbnail Image

Por qué los navegadores con IA integrada son un peligro para la ciberseguridad

2025-10-31
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into browsers that autonomously act and learn from user data, which fits the definition of AI systems. It details how these systems' vulnerabilities could be exploited by attackers to cause harm, such as unauthorized data access and manipulation, constituting plausible future harm. Since no actual harm is reported but credible risks and vulnerabilities are outlined, this qualifies as an AI Hazard rather than an AI Incident. The focus is on potential cybersecurity threats arising from the use and malfunction of AI systems in browsers, meeting the criteria for an AI Hazard.
Thumbnail Image

Los (peligrosos) navegadores con IA

2025-11-03
Ara en Castellano
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems integrated into browsers that autonomously interact with web content and user data. It details concrete security vulnerabilities (prompt injection attacks) that have been exploited to steal sensitive user credentials, constituting harm to individuals' privacy and data security. The AI systems' malfunction or exploitation is a direct factor in these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information, as the harm is realized and linked to the AI systems' use and vulnerabilities.
Thumbnail Image

HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage - IT Security News

2025-11-05
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its vulnerabilities that can be exploited to leak private user information, which constitutes harm to individuals' privacy and potentially violates rights. Although the article does not describe a specific realized incident of harm, the vulnerabilities significantly increase the risk of such harm occurring. Therefore, this event describes plausible future harm stemming from the AI system's malfunction or misuse, qualifying it as an AI Hazard rather than an AI Incident, since no actual harm is reported as having occurred yet.
Thumbnail Image

Hackers can use prompt injection attacks to hijack your AI chats -- here's how to avoid this serious security flaw

2025-11-02
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The article explicitly details how prompt injection attacks have been used to cause unauthorized actions by AI systems, including stealing data and controlling smart home devices, which are direct harms caused by AI misuse. The AI systems involved are large language models and AI assistants, clearly fitting the definition of AI systems. The harms described fall under injury or harm to persons or groups, violations of rights, and harm to property or communities. Since these harms have already occurred or are ongoing, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

AI browsers wide open to attack via prompt injection

2025-11-03
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (AI browsers and chatbots) that have been exploited via prompt injection and other attacks to perform unauthorized actions, such as exfiltrating user data and manipulating user settings. These exploits have been demonstrated by researchers and reproduced by journalists, showing direct involvement of AI system malfunction or misuse leading to harm. The harms include privacy violations and potential data theft, which are significant harms to property and communities. The presence of AI systems is clear, the nature of involvement is their use and malfunction, and the harms are realized or ongoing. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage

2025-11-05
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and details how its vulnerabilities have been exploited to leak private user information and bypass safety features, which constitutes harm to individuals' privacy and security. This harm is direct and realized, not merely potential. The event is not just a warning or a theoretical risk but documents actual vulnerabilities and attack techniques that have been proven to compromise user data. Therefore, it qualifies as an AI Incident under the framework, as the AI system's malfunction and exploitation have directly led to harm.
Thumbnail Image

HackedGPT - 7 New Vulnerabilities in GPT-4o and GPT-5 Enables 0-Click Attacks

2025-11-05
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT's GPT-4o and GPT-5 models) and their core architecture, including system prompts, memory tools, and web browsing features. The vulnerabilities allow attackers to exploit the AI's behavior to exfiltrate sensitive user data, directly causing harm to users' privacy and violating their rights. The harm is realized, not just potential, as proof-of-concept attacks have been demonstrated. The AI system's malfunction or design flaws are pivotal in enabling these attacks. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Multiple ChatGPT Security Bugs Allow Rampant Data Theft

2025-11-06
Dark Reading
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and details how its use and vulnerabilities have directly led to harm, specifically the exfiltration of private information and manipulation of user interactions. These harms fall under injury to health (privacy and security harm to individuals) and violation of rights (breach of privacy). The presence of multiple exploitable vulnerabilities and demonstrated attack vectors confirms realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident.
Thumbnail Image

Is It Safe for You to Install ChatGPT Atlas?

2025-11-14
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Atlas) and details its use and associated vulnerabilities. Although no actual harm or incident is reported, the described prompt injection and clipboard attacks, along with user overreliance and AI hallucinations, present credible risks that could lead to harms such as data theft, privacy violations, and security breaches. The article also references expert opinions and OpenAI's mitigation efforts, emphasizing the potential for future incidents. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

How a fake AI sidebar can steal your data

2025-11-13
Kaspersky Lab
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of AI systems integrated into browsers (AI sidebars powered by large language models) to manipulate users and cause harm such as theft of credentials, unauthorized access to accounts, and device compromise. The attack exploits AI system outputs and user trust, leading directly to violations of user rights and harm to property (digital assets) and potentially to personal data and privacy. Although the attack is currently theoretical and has not yet caused realized harm, the article clearly states the high plausibility of such harm occurring if malicious extensions are deployed. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving significant harm through malicious use of AI systems in browsers.
Thumbnail Image

The biggest AI progress in 2025 may be the biggest risk in 2026 - CNBC TV18

2025-11-15
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (agentic AI browsers) and details their use and associated security vulnerabilities. Although no actual harm has been reported yet, the identified loopholes and potential for cyberattacks represent credible risks that could lead to AI incidents such as privacy violations and data theft. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to significant harms in the near future. The article does not describe realized harm or incidents, nor is it merely a general update or unrelated news.
Thumbnail Image

Browsers Can Now Do Tasks For You - Here's What Agentic AI Means - SlashGear

2025-11-12
SlashGear
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (agentic AI browsers) that autonomously act on users' behalf, which fits the definition of AI systems. However, it does not describe any realized harm, injury, rights violations, or disruptions caused by these systems. Instead, it highlights potential security and privacy concerns, which are plausible future risks but not yet realized harms. Therefore, this event qualifies as an AI Hazard because the use of agentic AI browsers could plausibly lead to incidents involving privacy breaches or security issues in the future, but no incident has yet occurred.
Thumbnail Image

OpenAI's Atlas and Perplexity's Comet are igniting the AI browser war

2025-11-16
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into browsers that perform autonomous tasks and maintain persistent user data, which introduces new attack surfaces. Security researchers have demonstrated prompt injection attacks that exploit the AI's natural language interface to potentially leak sensitive information. Although no realized harm is reported, the described vulnerabilities and risks constitute plausible future harms stemming from the AI systems' use and design. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as it focuses on potential risks rather than actualized harm or responses to past incidents.
Thumbnail Image

Comparing the Top 4 Agentic AI Browsers in 2025: Atlas vs Copilot Mode vs Dia vs Comet

2025-11-16
MarkTechPost
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by the AI browsers, but rather discusses their design, capabilities, and potential privacy risks. This fits the definition of Complementary Information, as it provides supporting data and contextual details about AI systems and their risk profiles without reporting a new AI Incident or AI Hazard. The focus is on understanding the ecosystem and informing users and stakeholders about the implications of these AI browsers, which aligns with the Complementary Information category.