AI Large Language Models Enable Mass Online Deanonymization, Threatening User Privacy


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Recent research by Anthropic and ETH Zurich demonstrates that large language models (LLMs) can deanonymize online users with up to 90% accuracy by analyzing unstructured text across platforms. This AI-driven capability undermines online anonymity, enabling large-scale privacy violations and exposing users to tracking and profiling at minimal cost.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (large language models) to analyze user-generated text and identify real identities behind anonymous online accounts. This use of AI directly leads to violations of privacy and potentially breaches fundamental rights, which qualifies as harm under the framework. The article reports actual research results showing high accuracy in de-anonymization, implying that harm is occurring or imminent, not just a theoretical risk. Therefore, this constitutes an AI Incident due to realized harm to privacy and rights caused by AI use.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Digital security
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function:
Research and development

AI system task:
Forecasting/prediction


Articles about this incident or hazard


The End of the Age of Anonymity: Large Models Can Unmask Your Alt Account From Just a Few Posts

2026-03-04
煎蛋

奇客Solidot | Large-Scale De-anonymization Using Large Models

2026-03-04
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used for de-anonymization, which is a violation of privacy rights and can be considered a breach of fundamental rights. The use of AI here directly leads to harm by exposing individuals' identities without consent, thus constituting an AI Incident under the framework's definition of violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized as the AI system is actively used to identify anonymous users at scale with high precision, not merely a potential risk.

Spring Festival AI Red Packets Are Essentially a Large-Scale Microdata Harvesting Operation

2026-03-03
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to perform deanonymization attacks that directly violate user privacy and anonymity, which are fundamental rights. The research demonstrates that these AI systems can extract microdata from unstructured text and link it to real identities with high accuracy and at low cost, enabling large-scale privacy breaches. This constitutes a violation of human rights and privacy protections, fulfilling the criteria for an AI Incident. The article details realized harm (privacy violations) caused by the AI system's use, not just potential harm, so it is not merely a hazard or complementary information. The AI system's role is pivotal in enabling this large-scale deanonymization and data harvesting.

Spring Festival AI Red Packets Are Essentially a Large-Scale Microdata Harvesting Operation

2026-03-03
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (large language models) to perform deanonymization attacks that directly lead to harm by violating users' privacy and anonymity online. The harm is realized, not hypothetical, as the research demonstrates high accuracy and economic feasibility of such attacks. This constitutes a violation of fundamental rights to privacy and anonymity, fitting the definition of an AI Incident. The article does not merely warn about potential harm but reports on demonstrated capabilities and ongoing risks, thus it is not an AI Hazard or Complementary Information. It is not unrelated because the core focus is on AI-enabled deanonymization and its harmful consequences.

Large Language Models Can Identify Anonymous Users at Scale, With Startling Accuracy

2026-03-04
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) to analyze social media content and successfully identify anonymous users, which violates privacy rights and poses significant harm to individuals and communities. The harm is realized: the research demonstrates actual de-anonymization with high accuracy, not just a theoretical risk, and the AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident. The article also discusses mitigation measures, but its primary focus is the demonstrated harm and risks rather than responses or general AI ecosystem context, so it is not Complementary Information. It is not an AI Hazard because harm has already been demonstrated, and it is not Unrelated because the event clearly involves AI systems and their impact.