Israeli AI Spyware 'Graphite' Exposed After LinkedIn Leak


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A legal advisor at Israeli firm Paragon Solutions accidentally leaked a screenshot on LinkedIn revealing the control panel of its AI-powered spyware 'Graphite.' The tool exploits zero-click vulnerabilities to remotely access encrypted communications, targeting journalists and civil society. WhatsApp accused Paragon of targeting 90 individuals, raising concerns over AI-driven surveillance abuses.[AI generated]

Why's our monitor labelling this an incident or hazard?

The spyware tool 'Graphite' qualifies as an AI system, or at least an advanced algorithmic system, capable of sophisticated remote surveillance and exploitation of zero-click vulnerabilities. Its use has directly led to violations of privacy and human rights, as evidenced by the targeting of journalists and civil society members. The inadvertent exposure of the control panel reveals operational details of this AI-enabled spyware. Therefore, this event constitutes an AI incident due to the direct harm caused by the use of the AI system in surveillance and rights violations.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Digital security
Government, security, and defence

Affected stakeholders
Civil society

Harm types
Human or fundamental rights
Psychological

Severity
AI incident

AI system task:
Other


Articles about this incident or hazard


With a Photo on LinkedIn: An Employee Exposes an Israeli Spyware Company

2026-02-13
Sky News Arabia

A Slip on LinkedIn Reveals the Secrets of an Israeli Spyware Company

2026-02-13
Al-Dostor
Why's our monitor labelling this an incident or hazard?
The spyware tool 'Graphite' is an AI-enabled system used for sophisticated surveillance and remote access to mobile devices, which fits the definition of an AI system. Its use has directly led to violations of human rights and privacy by targeting journalists and civil society members, fulfilling the criteria for harm under (c) violations of human rights. The inadvertent exposure of the control panel is part of the incident narrative but does not change the classification. The event involves the use and development of the AI system leading to realized harm, so it is an AI incident.

A Photo on LinkedIn Exposes Israeli Spying Methods

2026-02-13
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The spyware 'Graphite', developed by Paragon, is an AI system designed for remote surveillance and data extraction without user interaction, which fits the definition of an AI system. Its use to hack into phones and extract private communications constitutes a violation of human rights and privacy, fulfilling the criteria for harm under (c). The exposure of this spyware and its targeting of journalists and civil society demonstrates realized harm caused by the AI system's use. Therefore, this event qualifies as an AI incident due to direct harm caused by the AI system's deployment and misuse.

A Secret Tool: An Employee Exposes an Israeli Spyware Company - Step News Agency

2026-02-13
Step News Agency
Why's our monitor labelling this an incident or hazard?
The spyware tool 'Graphite' is an AI-enabled system capable of exploiting zero-click vulnerabilities to access encrypted communications, a sophisticated application of AI to surveillance. The accidental leak of the control panel reveals the operational use of this AI system in spying on individuals, including journalists and civil society members, which constitutes a violation of human rights and privacy. The harm is realized and ongoing, as evidenced by WhatsApp's accusations and the targeting of specific individuals. Hence, this is an AI incident involving direct harm through misuse of AI surveillance technology.