US AI Surveillance Firm Fusus Trials Predictive Policing Tech with UK Authorities



US company Fusus has lobbied UK councils and police forces to adopt its AI-powered surveillance platform, previously used against Black Lives Matter protesters. At least one London council is trialling the technology, prompting concerns from civil liberties groups about potential mass surveillance and human rights violations, though no direct harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as an AI-powered real-time crime center platform that automates surveillance and predictive policing. The use of such technology in public surveillance and policing has a credible risk of leading to violations of human rights and harm to communities, as noted by human rights organizations cited in the article. Although no direct harm is reported at this stage, the trial and lobbying efforts indicate a plausible future risk of harm from the AI system's deployment. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Transparency & explainability; Fairness; Accountability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public; Civil society

Harm types
Human or fundamental rights; Public interest; Psychological

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection; Event/anomaly detection; Forecasting/prediction; Organisation/recommenders


Articles about this incident or hazard


US surveillance firm targets UK police forces and councils

2023-09-25
openDemocracy

Fusus’ AI-Powered Surveillance Solutions Stir Controversy in the UK

2023-09-26
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article details the use and trial of AI surveillance systems that could plausibly lead to violations of human rights and harm to communities through mass surveillance and privacy erosion. However, it does not document any actual harm or incident resulting from the AI system's use so far. Therefore, this situation fits the definition of an AI Hazard, as the AI system's deployment could plausibly lead to an AI Incident involving rights violations and harm to communities, but no such incident has yet materialized according to the article.

Council trials 'controversial' surveillance technology from US company

2023-09-26
South London News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-powered surveillance system (Fusus RTCC) that integrates and analyzes CCTV footage for predictive policing, indicating AI system involvement. Although no direct harm has occurred yet, the concerns raised by privacy advocates about mass surveillance and biometric tracking highlight plausible future harms such as violations of privacy and civil liberties (human rights). The trial and demonstrations indicate use and development stages that could lead to an AI Incident if harms materialize, but currently, the situation is a credible potential risk rather than an actualized harm. Hence, this qualifies as an AI Hazard.

US Surveillance Firm's Charm Offensive to UK Councils and Police Forces

2023-09-25
ZNetwork
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as automating surveillance and predictive policing, which can impact fundamental rights and liberties. The use of this system by public authorities for mass surveillance could plausibly lead to violations of human rights and harm to communities, as noted by human rights groups warning against a surveillance state. Although no direct harm has been reported yet, the credible risk of such harm from the system's use qualifies this as an AI Hazard rather than an Incident or Complementary Information.