UN Experts Call for Suspension of Pegasus Spyware Sales After Human Rights Violations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pegasus spyware, developed by the Israeli firm NSO Group, was used by various governments to surveil activists, journalists, and politicians, violating their human rights. UN experts condemned the unregulated use of such AI surveillance technology and urged a global suspension of its sale and transfer until robust regulatory frameworks are established.[AI generated]

Why's our monitor labelling this an incident or hazard?

The spyware Pegasus is an AI system capable of sophisticated surveillance functions, including activating cameras and microphones and collecting data, which fits the definition of an AI system. Its use by governments to monitor and potentially suppress activists, journalists, and political figures constitutes a violation of human rights, fulfilling the criteria for harm under the AI Incident definition. The event describes realized harm through misuse of the AI system, not just potential harm. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in causing violations of fundamental rights.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy, Safety

Industries
Government, security, and defence

Affected stakeholders
Civil society, Workers, Government

Harm types
Human or fundamental rights, Public interest, Psychological

Severity
AI incident

Business function:
Other

AI system task:
Other


Articles about this incident or hazard


Israeli spyware scandal erupts: UN experts urge halt to sales of surveillance technology

2021-08-12
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The spyware Pegasus is an AI system capable of sophisticated surveillance functions, including activating cameras and microphones and collecting data, which fits the definition of an AI system. Its use by governments to monitor and potentially suppress activists, journalists, and political figures constitutes a violation of human rights, fulfilling the criteria for harm under the AI Incident definition. The event describes realized harm through misuse of the AI system, not just potential harm. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in causing violations of fundamental rights.

Reading The Economist for you: China cracks down on companies to safeguard data

2021-08-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article centers on China's regulatory policies affecting AI-related companies and data governance, but it does not report any realized harm or specific incident caused by AI systems. There is no mention of an AI system malfunctioning or causing harm, nor a plausible future harm event directly linked to AI system use or development. The focus is on policy and market responses, making this a case of Complementary Information that provides context and understanding of AI ecosystem governance and its economic implications.

Israeli spyware scandal erupts: UN experts urge halt to sales of surveillance technology

2021-08-12
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
Pegasus spyware is an AI system capable of sophisticated surveillance and data collection, which has been used to monitor and infringe upon the rights of numerous individuals, including political figures and journalists. This constitutes a violation of human rights as defined in the framework. The involvement of the AI system in causing these harms is direct, as the spyware's capabilities enable intrusive surveillance leading to breaches of privacy and fundamental rights. The UN experts' call for regulatory action and suspension of sales underscores the severity of the harm already realized. Therefore, this event qualifies as an AI Incident due to the realized harm to human rights caused by the AI system's use.

Investors worried! China's State Council signals tighter regulation within five years

2021-08-12
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article does not describe any AI system causing harm or malfunction, nor does it report an event where AI use has led or could plausibly lead to harm. Instead, it discusses China's government plans to enhance regulation and legal frameworks related to AI and other technologies over the next five years. This fits the definition of Complementary Information as it provides context on governance and regulatory responses to AI and related technologies, without reporting a new AI Incident or AI Hazard.

Israeli spyware scandal erupts: UN experts urge halt to sales of surveillance technology

2021-08-12
蘋果新聞網
Why's our monitor labelling this an incident or hazard?
Pegasus spyware is an AI system capable of sophisticated surveillance, including activating cameras and microphones and extracting data from phones. Its use has directly caused violations of human rights, as evidenced by the monitoring of numerous political and civil society figures. The involvement of the AI system in causing harm to fundamental rights classifies this event as an AI Incident. The UN's call for regulatory frameworks and sales suspension further underscores the severity of the harm already realized.

[Regulatory crackdown] Pony.ai reportedly shelves US listing to seek private funding, still hopeful of regulatory approval

2021-08-12
ET Net
Why's our monitor labelling this an incident or hazard?
Pony.ai's autonomous driving technology involves AI systems, and the article centers on regulatory and financial developments affecting the company's US listing plans. No actual harm or incident related to the AI system is described, nor does the article detail a credible risk of imminent harm. The focus is on regulatory scrutiny and strategic business decisions, which align with Complementary Information as per the framework. This classification helps track ecosystem developments without misclassifying regulatory or market news as incidents or hazards.

Israeli spyware scandal erupts: UN experts urge halt to sales of surveillance technology

2021-08-12
RFI
Why's our monitor labelling this an incident or hazard?
The spyware 'Pegasus' is an AI system capable of sophisticated surveillance and data collection. Its use by governments to monitor and infringe upon the rights of individuals constitutes a violation of human rights, fulfilling the criteria for harm under the AI Incident definition (specifically, violation of human rights). The involvement of the AI system in causing these harms is direct, as the spyware's capabilities enable the intrusive surveillance. The UN experts' call for regulatory frameworks and sales suspension further underscores the recognized harm. Therefore, this event qualifies as an AI Incident.

Israeli spyware scandal erupts: UN experts urge halt to sales of surveillance technology

2021-08-12
中時新聞網
Why's our monitor labelling this an incident or hazard?
Pegasus spyware is an AI system capable of sophisticated surveillance and data collection, which has been used to violate human rights by unauthorized monitoring of individuals. The article reports actual harm caused by the use of this AI system, including breaches of privacy and suppression of political and journalistic freedoms. The involvement of the AI system in causing these harms is direct and significant. Therefore, this event qualifies as an AI Incident due to the realized violations of human rights stemming from the AI system's use.

Israeli spyware scandal erupts: UN experts urge halt to sales of surveillance technology

2021-08-12
Central News Agency
Why's our monitor labelling this an incident or hazard?
Pegasus spyware is an AI system used for surveillance that has directly led to violations of human rights by enabling unauthorized monitoring of individuals, including activists and journalists. The article describes realized harm caused by the use of this AI system, fulfilling the criteria for an AI Incident under the OECD framework. The involvement of the AI system in causing harm is explicit and direct, and the call for regulatory action underscores the severity of the incident.

Reuters: Amid China's tech crackdown, Pony.ai shelves US listing plan

2021-08-12
美國之音
Why's our monitor labelling this an incident or hazard?
Pony.ai is an AI system developer (autonomous driving), so AI system involvement is clear. However, the article does not describe any harm caused by the AI system's development, use, or malfunction. Instead, it discusses regulatory actions and market listing suspensions, which are governance and business environment issues. There is no direct or indirect harm or plausible future harm described from the AI system itself. Thus, the article provides complementary information about regulatory impacts on AI companies rather than reporting an AI Incident or AI Hazard.