AI Surveillance Systems Spark Privacy Violations and Misuse Across U.S. Cities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Flock's AI-powered surveillance systems have led to privacy violations in multiple U.S. cities. Incidents include unauthorized access to sensitive camera footage in Dunwoody, Georgia, wrongful police stops in Colorado due to license plate misreads, and widespread tracking and profiling in Shreveport and Sarasota, raising significant privacy and civil rights concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Flock's ALPRs) used by law enforcement for surveillance. The misuse of this AI system has directly led to violations of privacy and potential breaches of fundamental rights, constituting harm to individuals and communities. The article provides multiple examples of realized harm, including stalking and invasive investigations without proper legal oversight. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Police are using surveillance tech to stalk love interests. Dystopia, here we come | Arwa Mahdawi

2026-05-02
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Flock's ALPRs) used by law enforcement for surveillance. The misuse of this AI system has directly led to violations of privacy and potential breaches of fundamental rights, constituting harm to individuals and communities. The article provides multiple examples of realized harm, including stalking and invasive investigations without proper legal oversight. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.
Proposal calls for Troy license plate reader data to be erased in two days

2026-05-04
Times Union
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Flock Safety's AI-powered automated license plate readers) and discusses concerns about privacy and data misuse, which are potential harms related to AI surveillance. However, no actual harm or incident resulting from the AI system's use is reported. The main focus is on proposed legislation and political debate aimed at mitigating risks and protecting privacy. Therefore, this event is best classified as Complementary Information, as it provides governance and societal response context to AI-related privacy concerns without describing a specific AI Incident or AI Hazard.
A dystopian society is here! Police are now using surveillance tech to stalk love interests

2026-05-03
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The automated license plate reader technology is an AI system that processes vehicle data to track movements. Its misuse by police officers to stalk individuals and track activists constitutes a violation of human rights and privacy, fulfilling the criteria for harm under the AI Incident definition. The article documents multiple cases of abuse, criminal charges, and job losses, indicating realized harm rather than potential harm. Therefore, this event is best classified as an AI Incident.
The City of Sarasota Is Experimenting With Surveillance Technology

2026-05-05
Sarasota Magazine
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear from the use of ALPRs and software platforms that analyze and aggregate surveillance data in real time. The harms described include misuse by law enforcement officials leading to stalking and harassment, wrongful detentions, and broader privacy violations affecting communities. These constitute violations of human rights and breaches of privacy protections, fitting the definition of an AI Incident. The article reports actual harms that have occurred, not just potential risks, so the classification as an AI Incident is appropriate.
Surveillance Cameras Raise Privacy Concerns Across Shreveport Neighborhoods

2026-05-04
96.5 KVKI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI databases used to process surveillance data, indicating AI system involvement. The harms include violations of privacy rights, unauthorized surveillance, stalking, and wrongful police actions, which are direct harms to individuals and communities. The misuse and malfunction of the AI surveillance system have caused these harms. Therefore, this event qualifies as an AI Incident due to the realized violations and harms stemming from the AI system's use and abuse.
City Learns Flock Accessed Cameras in Children's Gymnastics Room as a Sales Pitch Demo, Renews Contract Anyway

2026-05-01
404 Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI surveillance system (Flock's technology) used to access camera footage in sensitive locations, including a children's gymnastics room. The access was part of sales demonstrations but was authorized by the city, leading to public outcry over privacy violations. This constitutes a violation of human rights (privacy rights) and harm to communities (loss of trust, exposure of sensitive footage). The AI system's use directly led to these harms, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized incident involving harm from AI system use.
Colorado Grandma Keeps Getting Pulled Over Because Police Cameras Cannot Tell the Difference Between a Zero and the Letter O

2026-05-03
Guessing Headlights
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—automated license plate readers using AI to identify vehicles and alert police. The system's malfunction is not in reading plates but in matching them against an incorrect database entry, leading to repeated wrongful stops. This causes harm to the individual (unjustified police stops, potential rights violations) and reflects systemic issues in AI deployment and data governance. The harm is realized and ongoing, not merely potential, thus classifying it as an AI Incident rather than a hazard or complementary information.
Surveillance Cameras Raise Privacy Concerns Across Shreveport Neighborhoods

2026-05-04
Talk Radio 960am
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (surveillance cameras feeding data into AI databases for facial recognition and tracking). The use of these AI systems has directly led to violations of privacy rights and misuse by employees and law enforcement, causing harm to individuals and communities. The article documents realized harms such as stalking, unauthorized surveillance, and privacy breaches, meeting the criteria for an AI Incident. The systemic nature of these harms and the direct link to AI-enabled surveillance justify classification as an AI Incident rather than a hazard or complementary information.
The Creepy Reality Behind the License Plate Cameras in Your Town

2026-05-05
PC Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI and machine learning to capture and analyze license plate and vehicle data, as well as to identify and track people. The misuse and vulnerabilities of this AI system have directly led to realized harms, including privacy violations, unauthorized surveillance, and potential breaches of fundamental rights. The article documents actual incidents of harm, such as unauthorized employee access to sensitive video feeds, law enforcement misuse, and public protests against the surveillance. These factors meet the criteria for an AI Incident, as the AI system's use has directly and indirectly caused significant harm to individuals and communities.
El Cerrito city council to vote on future of license plate readers

2026-05-05
KRON4
Why's our monitor labelling this an incident or hazard?
The Flock license plate readers are AI systems that analyze license plate data to track suspects. The unauthorized access by federal agencies to this data represents a breach of privacy rights, which falls under violations of human rights or legal obligations. The event describes realized harm in terms of privacy violations, not just potential harm. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.
Troy mayor condemns City Council proposal to limit police use of Flock cameras

2026-05-05
WRGB
Why's our monitor labelling this an incident or hazard?
The Flock cameras are AI systems (automatic license plate readers with image recognition and data analysis). The event involves their use by police and a political debate about restricting their use and data retention. No actual harm or incident is reported; rather, the article discusses policy proposals and differing views on privacy and policing. This fits the definition of Complementary Information, as it relates to governance and societal responses to AI systems without describing a specific AI Incident or AI Hazard.
Flock's Sales Pitch Included Recordings Of Kids' Gymnastic Classes

2026-05-05
Techdirt
Why's our monitor labelling this an incident or hazard?
Flock Safety's system is an AI system: it performs automated surveillance and data processing, providing live and recorded video feeds to law enforcement and other agencies. The event describes the use and misuse of this AI system leading to direct harm: unauthorized access to sensitive footage, privacy violations, and potential breaches of rights. The sharing of footage with hundreds of external agencies without proper controls constitutes a violation of human rights and privacy laws. The harm is realized and ongoing, not merely potential, making this an AI Incident under the framework.
The Creepy Reality Behind the License Plate Cameras in Your Town

2026-05-05
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Flock Safety's ALPR cameras and AI-based video analysis) whose use and misuse have directly led to harms including privacy violations, unauthorized surveillance, and potential breaches of fundamental rights. The unauthorized access by employees and external parties to sensitive video feeds, the tracking of individuals without consent, and the use of AI to identify and monitor people constitute direct harm to individuals and communities. The article documents realized harms rather than just potential risks, meeting the criteria for an AI Incident rather than a hazard or complementary information.