Anthropic's Claude Desktop Secretly Installs Browser Backdoor on macOS

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Anthropic's Claude Desktop AI application for macOS was found to secretly install configuration files that pre-authorize its browser extensions to access and control browser sessions, even for browsers not yet installed. This was done without user consent, creating significant privacy and security risks, and violating user rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Claude Desktop) whose use leads to unauthorized access and control over user browsers, which is a violation of privacy and security rights. The installation of a backdoor without user consent is a direct breach of legal obligations protecting fundamental rights. The AI system's role is pivotal as it is the software performing this unauthorized action. The harm is realized (privacy violation and security risk), not just potential, and the event involves misuse or non-compliance in the AI system's deployment. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
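The "configuration files" at issue are Chrome-style native messaging host manifests: small JSON files that name a native binary and pre-authorize specific extension IDs to exchange messages with it over stdio. A minimal sketch of what such a manifest looks like follows; the host name, binary path, and extension ID are hypothetical placeholders, not Anthropic's actual values:

```json
{
  "name": "com.example.claude_native_host",
  "description": "Bridges the browser extension to the desktop app (illustrative)",
  "path": "/Applications/Claude.app/Contents/MacOS/native-host",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://abcdefghijklmnopabcdefghijklmnop/"
  ]
}
```

Because browsers scan their NativeMessagingHosts directories on their own, dropping such a file into place is enough to pre-authorize the listed extension; if the targeted browser is not installed yet, the manifest simply sits there waiting for it, which is the behavior the coverage below describes.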
AI principles
Privacy & data governance
Robustness & digital security

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard

"I didn't install anything, I didn't authorize anything": Do you have Claude Desktop? Anthropic has installed spyware on your machine

2026-04-22
Les Numériques
Why's our monitor labelling this an incident or hazard?
Claude Desktop is an AI system application that manipulates browser configurations without user authorization, which is a misuse of the AI system's deployment. Although no direct harm has been reported, the covert and persistent nature of these actions plausibly could lead to violations of user privacy or security, constituting potential harm. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the unauthorized control or data access occurs.
How Claude Desktop installs a backdoor in your browsers without telling you - Numerama

2026-04-22
Numerama.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude Desktop) whose use leads to unauthorized access and control over user browsers, which is a violation of privacy and security rights. The installation of a backdoor without user consent is a direct breach of legal obligations protecting fundamental rights. The AI system's role is pivotal as it is the software performing this unauthorized action. The harm is realized (privacy violation and security risk), not just potential, and the event involves misuse or non-compliance in the AI system's deployment. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Anthropic's Claude Desktop accused of installing trackers on your Mac

2026-04-23
Génération-NT
Why's our monitor labelling this an incident or hazard?
Claude Desktop is an AI system application. Its use involves deploying a hidden configuration that alters other software behavior without consent, increasing attack surface and enabling potential malicious code execution. This unauthorized modification and security risk constitute a violation of user privacy and legal protections, thus a breach of rights and harm to users. The harm is realized in the form of a security vulnerability and privacy violation, not just a potential risk. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing harm through misuse or unsafe design.
Claude Desktop: Anthropic, self-proclaimed champion of responsible AI, quietly pre-installs access to your browser sessions, including for software you haven't installed yet

2026-04-23
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude AI and its browser extension) that is installed and configured in a way that allows it to access and control browser sessions with elevated privileges. This access is granted without user consent and persists even for browsers not installed yet, indicating a deliberate design choice by the developers. The AI system's capabilities include reading and writing web page content, automating tasks, and potentially accessing sensitive authenticated sessions such as banking or professional email. These actions constitute violations of user privacy rights and create significant security risks, fulfilling the criteria for harm under human rights and privacy law. The lack of consent and transparency, combined with the persistent and hard-to-remove nature of the installation, further aggravates the harm. The involvement of the AI system is direct and central to the harm, as it is the AI-powered extension that leverages this privileged access. Hence, this event is best classified as an AI Incident.
2026-04-22
next.ink
Why's our monitor labelling this an incident or hazard?
An AI system (Claude Desktop) is involved, as it is an AI application installed on user machines. The event concerns the use and deployment of this AI system, specifically its installation of native messaging manifests that enable high-privilege communication with browsers without explicit consent. While no actual harm (such as a successful attack or data breach) is reported, the article details credible security vulnerabilities and legal violations that could plausibly lead to harm, including unauthorized access or control over user data and actions. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to user privacy and security. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated, as the focus is on a specific risk stemming from the AI system's behavior.
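The native messaging manifest locations described above can be inspected directly. A minimal sketch, assuming the standard per-user NativeMessagingHosts directories on macOS (the directory paths follow each browser's documented convention; whether a Claude-related manifest is present on a given machine will vary):

```shell
#!/bin/sh
# List per-user native messaging host manifests on macOS for common browsers.
# A machine without a given browser profile simply skips that directory.
for dir in \
  "$HOME/Library/Application Support/Google/Chrome/NativeMessagingHosts" \
  "$HOME/Library/Application Support/Mozilla/NativeMessagingHosts" \
  "$HOME/Library/Application Support/Microsoft Edge/NativeMessagingHosts"
do
  if [ -d "$dir" ]; then
    echo "== $dir"
    ls -- "$dir"
  fi
done
```

Any `.json` file listed here names a native binary and the extension IDs pre-authorized to talk to it; deleting a manifest revokes that pre-authorization for the corresponding browser.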
2026-04-23
developpez.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude Desktop) whose deployment and use have directly led to unauthorized access to user browser sessions and local system execution privileges without consent, violating privacy rights and potentially exposing critical infrastructure controls. The AI system's role is pivotal as it pre-installs and maintains this access persistently, enabling significant harm to user privacy and security. The described harms include violations of fundamental rights (privacy), potential harm to critical infrastructure (via access to admin consoles), and security risks from vulnerabilities. The event meets the criteria for an AI Incident due to realized harm stemming from the AI system's use and deployment practices.