AI Large Language Models Enable Mass Online Deanonymization, Threatening User Privacy


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Recent research by Anthropic and ETH Zurich demonstrates that large language models (LLMs) can deanonymize online users with up to 90% accuracy by analyzing unstructured text across platforms. This AI-driven capability undermines online anonymity, enabling large-scale privacy violations and exposing users to tracking and profiling at minimal cost.[AI generated]
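Reports on the study describe the core mechanism as prompting LLMs to read unstructured posts from different platforms and judge whether they were written by the same person. As a rough illustration only, the sketch below implements a much simpler stand-in for that idea: a character n-gram stylometric baseline that ranks candidate accounts by similarity to a known author's writing. The function names and sample data are hypothetical, and this is not the method evaluated in the research.

```python
# A minimal, hypothetical sketch of the general idea behind text-based
# account linkage. It is NOT the Anthropic / ETH Zurich method, which
# reportedly relies on LLMs reading unstructured posts; this toy baseline
# only compares character n-gram "style fingerprints" between accounts.
from collections import Counter
from math import sqrt


def ngram_profile(posts, n=3):
    """Build a normalized character n-gram frequency profile for a set of posts."""
    counts = Counter()
    for text in posts:
        text = text.lower()
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values()) or 1
    return {gram: c / total for gram, c in counts.items()}


def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(v * q.get(k, 0.0) for k, v in p.items())
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0


def rank_candidates(known_posts, candidate_accounts):
    """Rank anonymous accounts by stylistic similarity to a known author's posts."""
    target = ngram_profile(known_posts)
    scored = [
        (handle, cosine_similarity(target, ngram_profile(posts)))
        for handle, posts in candidate_accounts.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical example data for illustration only.
    known = ["Honestly, the build pipeline keeps flaking on me again today."]
    candidates = {
        "@anon_dev": ["Honestly, the CI keeps flaking on me again, so annoying."],
        "@hiking_fan": ["Great weather today, went for a long hike by the coast."],
    }
    for handle, score in rank_candidates(known, candidates):
        print(f"{handle}: similarity {score:.3f}")
```

Even this crude baseline shows why writing style alone can act as an identifier; an LLM that also extracts biographical microdata from posts would link accounts far more reliably, which is the capability the study reportedly demonstrates.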

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (large language models) to analyze user-generated text and identify real identities behind anonymous online accounts. This use of AI directly leads to violations of privacy and potentially breaches fundamental rights, which qualifies as harm under the framework. The article reports actual research results showing high accuracy in de-anonymization, implying that harm is occurring or imminent, not just a theoretical risk. Therefore, this constitutes an AI Incident due to realized harm to privacy and rights caused by AI use.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Digital security; Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Research and development

AI system task
Forecasting/prediction


Articles about this incident or hazard


The End of the Anonymity Era: Large Models Need Only a Few Posts to See Through Your Sockpuppet

2026-03-04
煎蛋
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to analyze user-generated text and identify real identities behind anonymous online accounts. This use of AI directly leads to violations of privacy and potentially breaches fundamental rights, which qualifies as harm under the framework. The article reports actual research results showing high accuracy in de-anonymization, implying that harm is occurring or imminent, not just a theoretical risk. Therefore, this constitutes an AI Incident due to realized harm to privacy and rights caused by AI use.

Solidot | Using Large Language Models for Large-Scale Deanonymization

2026-03-04
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (large language models) being used for de-anonymization, which violates privacy rights and can be considered a breach of fundamental rights. The use of AI here directly leads to harm by exposing individuals' identities without consent, thus constituting an AI Incident under the framework's definition of violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized as the AI system is actively used to identify anonymous users at scale with high precision, not merely a potential risk.

Spring Festival AI Red Packets Are Essentially a Large-Scale Microdata Harvesting Operation - TMTPost Official Website

2026-03-03
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to perform deanonymization attacks that directly violate user privacy and anonymity, which are fundamental rights. The research demonstrates that these AI systems can extract and link microdata from unstructured text to real identities with high accuracy and low cost, enabling large-scale privacy breaches. This constitutes a violation of human rights and privacy protections, fulfilling the criteria for an AI Incident. The article details realized harm (privacy violations) caused by the AI system's use, not just potential harm, and thus it is not merely a hazard or complementary information. The AI system's role is pivotal in enabling this large-scale deanonymization and data harvesting.

Spring Festival AI Red Packets Are Essentially a Large-Scale Microdata Harvesting Operation

2026-03-03
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (large language models) to perform deanonymization attacks that directly lead to harm by violating users' privacy and anonymity online. The harm is realized, not hypothetical, as the research demonstrates high accuracy and economic feasibility of such attacks. This constitutes a violation of fundamental rights to privacy and anonymity, fitting the definition of an AI Incident. The article does not merely warn about potential harm but reports on demonstrated capabilities and ongoing risks, thus it is not an AI Hazard or Complementary Information. It is not unrelated because the core focus is on AI-enabled deanonymization and its harmful consequences.

Large Language Models Can Identify Anonymous Users at Scale, with Astonishing Accuracy

2026-03-04
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to analyze social media content and successfully identify anonymous users, which constitutes a violation of privacy rights and poses significant harm to individuals and communities. The harm is realized as the research demonstrates actual de-anonymization with high accuracy, not just a theoretical risk. The AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident. The article also discusses mitigation measures, but the primary focus is on the demonstrated harm and risks, not just responses or general AI ecosystem context, so it is not Complementary Information. It is not an AI Hazard because harm is already occurring or demonstrated. It is not Unrelated because the event clearly involves AI systems and their impact.

LLMs may help fact-checkers track who's behind pseudonymous accounts: Study

2026-03-09
Business Standard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (LLMs) to analyze and link online identities. The use of these AI systems directly enables the deanonymization of users, which can be beneficial for combating misinformation but also poses a risk of harm to individuals' privacy and rights. Although no specific harm incident is reported as having occurred yet, the article clearly outlines plausible future harms such as harassment and surveillance resulting from misuse of the AI technology. Therefore, this event fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harms related to privacy violations and misuse.

Anthropic research says AI can mass-expose anonymous internet accounts

2026-03-07
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used to analyze online data and identify real identities behind anonymous accounts. Although no actual harm has been reported yet, the research shows that AI could plausibly lead to significant privacy violations and harm to individuals and communities by exposing identities that rely on anonymity for protection. This constitutes a plausible future harm scenario, fitting the definition of an AI Hazard. The event does not describe a realized harm incident but highlights a credible risk arising from AI capabilities, thus it is best classified as an AI Hazard.

Study finds AI can expose hidden identities online

2026-03-08
The News International
Why's our monitor labelling this an incident or hazard?
The study explicitly involves AI systems (LLMs) used to analyze and link online identities, which is a clear AI system involvement. The use of AI to deanonymize individuals can lead to violations of privacy and human rights, which are recognized harms under the framework. Although the article reports on research findings rather than an actual incident of harm occurring, the demonstrated capability and its implications constitute a plausible risk of harm. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of rights and harm to individuals if the technology is deployed maliciously or irresponsibly.

AI can now unmask anonymous accounts for under $4 each

2026-03-07
Rolling Out
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to deanonymize online users, which is a clear AI system involvement. The use of these AI systems has directly led to privacy harms, which constitute violations of human rights and fundamental rights to privacy. The harms are realized or ongoing, as the technology can identify individuals behind anonymous accounts, enabling doxxing and harassment. Therefore, this qualifies as an AI Incident due to direct harm caused by AI use in deanonymization and privacy breaches.

Anthropic Research Shows AI Can Unmask Anonymous Internet Users at Scale

2026-03-08
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to perform mass deanonymization by analyzing behavioral and linguistic patterns in public posts. This use of AI directly leads to harm by violating individuals' rights to privacy and anonymity, which are fundamental human rights. The harm is not speculative; the research demonstrates the capability and its implications, which can already impact vulnerable groups relying on anonymity. Therefore, this qualifies as an AI Incident due to realized harm to human rights through AI-enabled deanonymization.