Signal President Warns of Agentic AI Privacy Risks

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Signal President Meredith Whittaker warned at SXSW about the risks of agentic AI, emphasizing potential privacy and security issues. She highlighted how autonomous AI agents, which handle personal tasks by accessing user data, might compromise user privacy if not adequately managed.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems described as agentic AI performing autonomous tasks with deep access to user data and system controls, which fits the definition of an AI system. The concerns raised relate to the plausible future risk of privacy and security harms due to the AI's access and data handling, but no actual incident of harm is reported. Therefore, this qualifies as an AI Hazard because it describes a credible risk of harm that could plausibly arise from the development and use of such AI systems.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Robustness & digital security, Transparency & explainability, Democracy & human autonomy

Industries
Consumer services, Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Citizen/customer service

AI system task
Interaction support/chatbots, Goal-driven organisation, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Signal President Meredith Whittaker calls out agentic AI as having 'profound' security and privacy issues | TechCrunch

2025-03-07
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems described as agentic AI performing autonomous tasks with deep access to user data and system controls, which fits the definition of an AI system. The concerns raised relate to the plausible future risk of privacy and security harms due to the AI's access and data handling, but no actual incident of harm is reported. Therefore, this qualifies as an AI Hazard because it describes a credible risk of harm that could plausibly arise from the development and use of such AI systems.
Signal President Meredith Whittaker calls out agentic AI as having 'profound' security and privacy issues - RocketNews

2025-03-07
RocketNews
Why's our monitor labelling this an incident or hazard?
The article discusses the potential risks associated with the use of agentic AI systems that act autonomously on behalf of users, requiring deep access to personal and sensitive information. Although no actual incident of harm is reported, the described scenario plausibly could lead to violations of privacy and security, which are significant harms. Therefore, this qualifies as an AI Hazard because it outlines credible risks that could lead to an AI Incident in the future if such systems are widely adopted without adequate safeguards.
Signal president warns the hyped agentic AI bots threaten user privacy

2025-03-08
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems described as agentic AI bots capable of autonomous task completion requiring broad data access. The warnings focus on potential privacy and security harms that could plausibly arise from their use, such as unauthorized data access and undermining encrypted communications. Since no actual harm or incident is reported but credible risks are emphasized, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Signal president warns the hyped agentic AI bots threaten user privacy

2025-03-08
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article centers on expert warnings about the plausible future harms that agentic AI systems could cause, particularly regarding user privacy and security. It describes how these AI agents would require broad access to sensitive personal data and likely process it off-device, raising serious risks. However, it does not report any actual incidents of harm or breaches caused by such AI systems at this time. Therefore, the event qualifies as an AI Hazard, reflecting credible potential risks rather than realized harm or incidents.
Agentic AI has "profound" issues with security and privacy, Signal President says

2025-03-10
TechRadar
Why's our monitor labelling this an incident or hazard?
The article centers on expert warnings about the potential dangers of AI agents accessing sensitive user data, which could plausibly lead to privacy violations or security breaches. Since no actual harm or incident is described, but a credible risk is articulated, this fits the definition of an AI Hazard. The discussion about the need for root-level access and the compromise of end-to-end encryption underscores the plausible future harm these AI systems could cause if deployed without adequate safeguards.
Signal Chief Has Major Security And Privacy Concerns About Agentic AI: What She Said - News18

2025-03-11
News18
Why's our monitor labelling this an incident or hazard?
The article discusses the potential risks of agentic AI systems that can independently perform tasks requiring deep access to personal data and applications, such as messaging apps and payment systems. The Signal chief's warnings focus on plausible future privacy and security harms if such AI agents are widely adopted. No actual harm or incident has occurred yet, but the concerns are credible. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Signal president issues warning on the risks of agentic AI

2025-03-08
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses agentic AI systems that perform tasks on behalf of users, requiring broad access to personal data and system permissions, which fits the definition of an AI system. The Signal president's statements focus on the potential privacy and security risks that could arise from such use, indicating plausible future harm rather than an actual incident. No direct or indirect harm has occurred yet, but the risks are credible and significant. Because the article centers on these AI risks, it qualifies as an AI Hazard rather than an AI Incident, Complementary Information, or unrelated content.