Flock Safety License Plate Reader Data Sharing Sparks Privacy and Rights Concerns in California

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Flock Safety's AI-powered license plate readers, used by law enforcement in California, have come under scrutiny after data was shared with federal agencies, including ICE and Border Patrol, without proper oversight. This has led to privacy violations, public backlash, and contract terminations by cities and Amazon's Ring, highlighting risks of AI surveillance misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes an AI system (Flock's license plate reader) actively used by police and communities, which has led to widespread public concern about privacy and civil liberties. The system's use has indirectly caused harm by eroding trust and raising fears of surveillance misuse, which aligns with violations of human rights and harm to communities. The termination of contracts by cities is a direct consequence of these harms. Although no physical injury or legal ruling is mentioned, the societal and rights-based harms are clear and materialized, meeting the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance; Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Cities join Amazon in ending their partnership with license-plate reader Flock following Super Bowl Ad. 'Your privacy is totally fine,' says Ring CEO | Fortune

2026-03-03
Fortune
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Flock's license plate reader) actively used by police and communities, which has led to widespread public concern about privacy and civil liberties. The system's use has indirectly caused harm by eroding trust and raising fears of surveillance misuse, which aligns with violations of human rights and harm to communities. The termination of contracts by cities is a direct consequence of these harms. Although no physical injury or legal ruling is mentioned, the societal and rights-based harms are clear and materialized, meeting the criteria for an AI Incident rather than a hazard or complementary information.

LAPD's relationship with Flock Safety under scrutiny from oversight body

2026-03-03
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The license plate reader system is an AI system as it involves automated recognition and processing of license plate data. The article highlights a 'configuration error' that allowed unauthorized access to sensitive data, raising concerns about misuse and potential harm to individuals, especially vulnerable groups. While the harm is not confirmed as realized, the plausible risk of privacy violations and misuse of surveillance data by federal authorities and others is credible. Hence, the event fits the definition of an AI Hazard, as the AI system's use and malfunction could plausibly lead to violations of human rights and harm to communities.

Privacy concerns surround Richmond's Flock camera system ahead of vote

2026-03-03
KRON4
Why's our monitor labelling this an incident or hazard?
The Flock camera system qualifies as an AI system because it involves automated license plate reading, which requires AI-based image recognition and data processing. The concerns raised are about potential misuse of data and privacy violations, which could lead to harm such as violations of rights or harm to communities if data is shared improperly. However, the article does not report any realized harm or incident; it focuses on the debate and concerns prior to a council vote. Hence, this is best classified as an AI Hazard, reflecting the plausible future risk of harm from the AI system's use or misuse.

Residents rally to demand Richmond keep Flock camera on and extend contract

2026-03-04
ABC7 News
Why's our monitor labelling this an incident or hazard?
The event centers on the use and management of an AI system (automated license plate readers) that has been involved in law enforcement activities with tangible outcomes (arrests, suspect identification). The unauthorized data access due to a software error constitutes a breach of privacy and a violation of legal obligations, which fall under violations of human rights or breach of applicable law. However, the article does not report direct harm such as injury or explicit rights violations resulting from this breach, but rather the discovery of the breach and the ensuing policy debate. Therefore, this event is best classified as Complementary Information, as it provides an update on the governance, policy considerations, and responses related to a previously deployed AI system, rather than reporting a new AI Incident or AI Hazard.

LA Historic Park cameras

2026-03-03
LAist
Why's our monitor labelling this an incident or hazard?
The Flock Safety license plate readers are AI systems used for surveillance. Their deployment in a public park and the sharing of collected data with law enforcement, including instances of illegal sharing with federal agencies, directly implicate violations of privacy and potentially human rights. The harms are realized, not just potential, as the article documents actual data sharing and community pushback due to surveillance fears. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities caused by the AI system's use and data handling practices.

License plate readers spark privacy concerns at LA State Historic Park in Chinatown

2026-03-03
LAist
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered license plate readers used for surveillance, which is an AI system. The use of these systems has led to privacy concerns and documented instances of law enforcement agencies sharing data in violation of state laws, indicating a breach of rights. The AI system's deployment and data sharing practices have directly contributed to these harms. The concerns about mass surveillance and data misuse align with violations of human rights as defined in the framework. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.

LAPD's relationship with Flock Safety under scrutiny from oversight body

2026-03-04
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The license plate reader system is an AI system as it automates recognition and data processing. The event centers on oversight and concerns about data use and sharing, which could implicate human rights or privacy violations. However, no direct harm or incident is reported yet, only a request for more information and scrutiny. Therefore, this is a plausible risk scenario but without confirmed harm at this stage, fitting the definition of Complementary Information as it provides context and governance response to potential AI-related issues.

Everett shuts down Flock camera network after judge rules footage is public record

2026-03-02
wltx.com
Why's our monitor labelling this an incident or hazard?
The Flock camera system is an AI system as it involves automated license plate recognition technology. The event stems from the use of this AI system and the legal ruling that its data is public record. The concerns raised relate to potential misuse of the data that could harm individuals' privacy and safety, which fits the definition of plausible future harm. However, the article does not report any actual harm or incident caused by the AI system's use so far. The shutdown is a precautionary response to the ruling and ongoing legislative debate. Thus, the event is best classified as an AI Hazard, reflecting the credible risk of harm from the AI system's data being publicly accessible, but without evidence of realized harm at this time.

Cities join Amazon in ending their partnership with license-plate reader Flock following public outcry. 'Your privacy is totally fine,' says Ring CEO

2026-03-03
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Flock Safety's license plate reader) used by law enforcement and communities for surveillance purposes. The system's development and intended use raised significant public concern about privacy and potential misuse, leading to contract cancellations. However, the article does not report any actual harm or violation of rights that has occurred due to the AI system's malfunction or misuse. The concerns and contract terminations are preventive and reflect plausible future harm rather than realized harm. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

LAPD's Relationship With Flock Safety Under Scrutiny From Oversight Body

2026-03-03
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
Flock Safety's license plate reader system is an AI system that processes surveillance data. The sharing of this data with national authorities without proper consent or oversight constitutes a violation of privacy and potentially human rights, fulfilling the criteria for harm under (c) violations of human rights or breach of obligations under applicable law. The LAPD's investigation and Flock's acknowledgment indicate that harm has occurred or is ongoing. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the use and misuse of the AI system.

Debate continues over Richmond's use of Flock cameras

2026-03-04
San Jose Mercury News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Flock's license plate readers, CCTV, and drones) used in surveillance and public safety. The harms discussed (increased crime, potential failure to find a trafficking victim) are indirect consequences of disabling the AI system, not harms caused by the AI system's malfunction or misuse. There is no evidence of actual unauthorized data access or direct violation of rights occurring, only concerns and preventive measures. The main focus is on the community and governance debate about balancing public safety and privacy, including contract decisions and policy considerations. This fits the definition of Complementary Information, as it updates on societal and governance responses to AI use and its implications without reporting a new AI Incident or AI Hazard.

Richmond city council delays Flock camera contract vote

2026-03-04
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
The Flock Safety cameras involve AI systems for license plate recognition. The concern about data sharing and privacy implicates potential violations of rights, but no actual harm or incident has occurred or been reported. The event is about a pending decision and public debate, not a realized incident or a direct hazard. Therefore, it is best classified as Complementary Information, as it provides context and updates on governance and societal responses to AI surveillance technology.

Cleveland Puts Flock Expansion on Hold Amid Local Pushback

2026-03-04
Cleveland Scene
Why's our monitor labelling this an incident or hazard?
Flock's license plate reader cameras use AI to identify and track vehicles. The reports of data being shared with ICE without official contracts indicate misuse or unauthorized use of AI system outputs, leading to potential violations of privacy and human rights. The harms are realized or ongoing, as evidenced by public backlash and city council actions. Therefore, this event meets the criteria for an AI Incident due to violations of rights and harm to communities caused directly or indirectly by the AI system's use.

Out-of-state police access Silicon Valley license plate readers

2026-03-04
San José Spotlight
Why's our monitor labelling this an incident or hazard?
The automated license plate reading system is an AI system as it automatically processes and interprets license plate data. The incident arises from the use and configuration of this AI system, which directly led to unauthorized data sharing with out-of-state law enforcement agencies, violating California's legal restrictions. This constitutes a breach of obligations under applicable law intended to protect privacy rights, fitting the definition of harm (c). The event is not merely a potential risk but involves realized unauthorized access and legal violations. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Village Sets New ALPR Privacy Fines

2026-03-05
Journal Online
Why's our monitor labelling this an incident or hazard?
The ALPR system qualifies as an AI system due to its automated data capture and analysis capabilities. The article references past unauthorized data sharing that violated privacy rights, constituting a breach of legal protections and human rights. However, the current article primarily discusses the village's response to these issues through contract penalties and audits, aiming to prevent future harm. Therefore, the event is best classified as Complementary Information because it provides updates on mitigation and governance responses to a previously reported AI Incident rather than describing a new incident or hazard.

Flock Off! How Flock Safety Is Turning Roads Into Surveillance Networks - And What You Can Do About It

2026-03-04
SGT Report
Why's our monitor labelling this an incident or hazard?
Flock Safety's ALPRs are AI systems that automatically process images to extract license plate and vehicle data, building a searchable surveillance database. The article details how this system is actively used to track individuals without warrants or oversight, directly infringing on privacy and civil liberties, which are human rights. The harm is realized and ongoing, not merely potential. Hence, this event meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through mass surveillance.

Pensacola used opioid settlement for Flock Cameras. Was it legal?

2026-03-04
Pensacola News Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Flock's license plate readers with AI-based vehicle fingerprinting). The event concerns the use of opioid settlement funds to pay for these AI systems, raising legal and ethical questions and public debate. There is no report of actual harm caused by the AI system's malfunction or misuse, nor a credible imminent risk of harm. The main focus is on governance, legality, and public trust issues, which aligns with the definition of Complementary Information. It updates understanding of societal and governance responses to AI deployment rather than reporting a new AI Incident or Hazard.

Flock Off! How Flock Safety Is Turning Roads Into Surveillance Networks...

2026-03-04
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
Flock Safety's ALPRs are AI systems that automatically extract and process license plate and vehicle data, creating a searchable surveillance network. The article highlights that this network enables warrantless tracking and mass surveillance without transparency or oversight, directly impacting civil liberties and privacy rights. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities through intrusive surveillance practices.

Colorado's Controversial Flock Cameras: Safety or Surveillance? - Bucket List Community Cafe

2026-03-04
Bucket List Community Cafe
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (automated license plate readers with AI capabilities for scanning and data processing) used by law enforcement. The article details ongoing use and public concerns about privacy and surveillance, as well as policy responses to mitigate risks. However, it does not report any realized harm such as violations of rights or other harms directly caused by the AI system. The concerns and legislative efforts indicate plausible future harm if misuse or overreach occurs. Therefore, this event is best classified as Complementary Information, as it provides context on societal and governance responses to AI surveillance technology and its implications, rather than reporting a concrete AI Incident or an imminent AI Hazard.

Out-Of-State Police Access Silicon Valley License Plate Readers

2026-03-05
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The automated license plate reading system is an AI system that processes and shares data. The event describes unauthorized data sharing and access by out-of-state police, violating California law and raising civil rights concerns. This misuse of the AI system has directly led to violations of legal protections and privacy rights, constituting harm under the framework's category of violations of human rights or breach of legal obligations. The incident is not merely a potential risk but involves actual unauthorized queries and data sharing, thus qualifying as an AI Incident rather than a hazard or complementary information.

More questions raised about Flock Safety in Cleveland, data access and who's watching

2026-03-05
WEWS
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Flock Safety's license plate readers and surveillance cameras with AI capabilities). The concerns raised relate to who can access the data and how it might be used, highlighting vulnerabilities and potential misuse. Although no direct harm has been reported, the plausible risk of privacy violations and misuse of surveillance data constitutes a credible potential for harm. The event does not describe a realized harm or incident but rather a credible risk and ongoing debate about the implications and governance of the AI system's use. Hence, it is best classified as an AI Hazard.

Data centers

2026-03-05
LAist
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (automated license plate readers with AI-powered recognition and data processing). The misuse and unauthorized sharing of data collected by this AI system have directly led to violations of state laws designed to protect privacy and limit data sharing, which constitutes a breach of legal obligations and human rights protections. The harm is realized, as communities are experiencing erosion of trust and potential privacy violations. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the resulting harms and legal violations.

Some South Pasadena residents want the city's Flock license plate readers gone -- they're not alone

2026-03-05
LAist
Why's our monitor labelling this an incident or hazard?
The AI system (Flock's automated license plate readers) is explicitly mentioned and is central to the event. The misuse of data collected by these AI systems has directly led to violations of legal frameworks protecting privacy and fundamental rights, fulfilling the criteria for harm under (c) violations of human rights or breach of legal obligations. The event details actual incidents of data misuse and community harm, not just potential risks, thus qualifying as an AI Incident rather than a hazard or complementary information.

City of Ithaca ends contract with Flock camera system following community backlash

2026-03-05
WBNG
Why's our monitor labelling this an incident or hazard?
The Flock Safety system is an AI system as it uses cameras and license plate readers with AI capabilities for crime detection. However, the article does not report any actual harm or incident caused by the AI system's use, malfunction, or development. Instead, it focuses on community backlash and concerns about potential misuse and privacy, leading to contract termination. There is no indication of realized harm or a credible imminent risk of harm from the system's use. Therefore, this event is best classified as Complementary Information, as it provides context on societal and governance responses to AI surveillance technology rather than describing an AI Incident or AI Hazard.

Northtowns residents push back on license plate reader cameras

2026-03-05
WKBW
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ALPR cameras) used in law enforcement, which have directly contributed to solving crimes and tracking stolen vehicles, indicating realized impacts. The concerns about privacy and surveillance relate to potential violations of rights, a recognized harm category. The AI system's use is central to the event, and the harms (privacy concerns, surveillance, potential rights violations) are occurring or have occurred. Therefore, this is an AI Incident rather than a hazard or complementary information. The presence of public opposition and official responses further supports the classification as an incident involving AI-related harm or rights concerns.

Does ICE have access to Flock data?

2026-03-05
Isthmus | Madison, Wisconsin
Why's our monitor labelling this an incident or hazard?
The Flock system is an AI system (license plate recognition with automated data processing and a nationwide database). The article discusses its use and data sharing, including potential misuse by ICE and other agencies, raising concerns about privacy and rights violations. While no concrete incident of harm is reported (e.g., a specific case where ICE used the data to deport someone), the sharing of data with ICE and the lack of control over third-party use could plausibly lead to violations of rights and harm to individuals. The article also mentions an officer's misuse of the system for personal reasons, a realized misuse, though no resulting harm beyond misconduct charges is specified. Given the absence of a clearly articulated, realized harm event directly linked to the AI system's use, but the presence of credible concerns and potential for harm, this situation aligns best with an AI Hazard classification.

The Crowdsourced Resistance: Mapping the Invisible Grid of License Plate Readers

2026-03-05
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of ALPRs with algorithms that track vehicles and aggregate data for law enforcement use. The concerns raised relate to potential violations of constitutional rights (Fourth Amendment) and privacy harms, which are recognized as significant harms under the framework. Although no specific realized harm or incident is described, the article details credible legal challenges and the plausible future risk of mass surveillance harms. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving rights violations and harm to communities. It is not an AI Incident because no actual harm event is reported, nor is it merely Complementary Information or Unrelated.

County license plate readers mined for data

2026-03-05
Thousand Oaks Acorn
Why's our monitor labelling this an incident or hazard?
The automated license plate readers are AI systems that process and analyze license plate data to assist law enforcement. The breach was caused by a vendor error that reactivated a feature allowing unauthorized access, leading to violations of California privacy laws. This directly caused harm by breaching legal protections and privacy rights, fulfilling the criteria for an AI Incident. The event involves the AI system's malfunction and misuse of its data, resulting in realized harm, not just potential harm or complementary information.

Sanctuary or Surveillance? Bay Area Cities Face Federal Tech Expansion

2026-03-06
lapost.us
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (facial recognition, license plate readers with AI capabilities) whose use and misuse have directly led to violations of privacy rights and constitutional protections, including illegal data sharing with federal immigration authorities. These harms fall under violations of human rights and breach of legal obligations. The presence of lawsuits and documented illegal access confirms realized harm. Thus, this qualifies as an AI Incident rather than a hazard or complementary information.

Modesto license plate readers shared data with federal agencies, police say

2026-03-17
CBS News
Why's our monitor labelling this an incident or hazard?
The automated license plate reader system qualifies as an AI system due to its automated data collection and analysis capabilities. The improper connections to federal agencies represent a misuse or failure in the use of the AI system, potentially leading to violations of privacy and legal rights. Since the article does not confirm actual unauthorized access or harm but highlights a breach of legal data sharing restrictions, this situation is best classified as an AI Hazard, reflecting a credible risk of harm that could plausibly lead to an AI Incident if data misuse occurred or occurs in the future.

MN bill seeks to regulate automated license plate readers and data use

2026-03-17
FOX 9 Minneapolis-St. Paul
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system, specifically automated license plate readers, which infer and generate data outputs (license plate information linked to locations). The misuse of this data has led to violations of privacy rights and potential intimidation, which can be considered a breach of fundamental rights. However, the article describes a legislative response to past misuse and ongoing concerns rather than a new incident of harm occurring at the time of reporting. Therefore, this is best classified as Complementary Information, as it provides context and governance response to previously reported or ongoing issues related to AI system use and data privacy.

South Pasadena to remove Flock Safety cameras over privacy concerns

2026-03-20
FOX 11 Los Angeles
Why's our monitor labelling this an incident or hazard?
The ALPR system is an AI system used for surveillance. The event centers on the decision to remove some cameras due to privacy concerns and data misuse involving federal agencies, indicating indirect involvement of AI in potential rights violations. However, the article does not document a concrete AI Incident with realized harm but rather a governance response to concerns and potential risks. This fits the definition of Complementary Information, as it updates on societal and governance responses to AI-related privacy issues without describing a new incident or hazard.

South Pasadena cancels contract with Flock Safety, citing privacy concerns

2026-03-19
LAist
Why's our monitor labelling this an incident or hazard?
The AI system in question is the automated license plate readers powered by AI, which collect and share sensitive data. The misuse of this data by law enforcement agencies, including illegal sharing with federal immigration agents, constitutes a violation of rights and legal obligations. The harm is realized and has led to the cancellation of contracts and policy changes, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by the use of an AI system.

Welch: Public safety and privacy can coexist in the Flock camera debate - HeraldNet.com

2026-03-17
Herald net
Why's our monitor labelling this an incident or hazard?
The article centers on the ongoing societal and policy debate about the use and regulation of AI-powered automated license plate reader systems. It does not describe any realized harm or incident resulting from the AI system's use, nor does it report a near miss or credible future harm event. Instead, it offers analysis and recommendations regarding privacy and public safety concerns, legal rulings, and policy responses. Therefore, it fits the definition of Complementary Information, as it provides supporting context and governance-related discussion about AI systems without reporting a new AI Incident or AI Hazard.

Residents Raise Privacy, Equity Concerns Over Police Use of Flock Cameras

2026-03-20
Pasadena Now
Why's our monitor labelling this an incident or hazard?
The automated license plate reader cameras constitute an AI system due to their automated data capture, processing, and real-time alert generation capabilities. The concerns raised by residents about privacy, data sharing, and disproportionate surveillance indicate plausible risks of harm, including violations of privacy rights and inequitable treatment of communities. However, the article does not describe any actual harm or incident caused by the AI system's malfunction or misuse. The event is therefore best classified as an AI Hazard, reflecting credible potential for harm without evidence of realized harm at this time.

Public Safety Committee Raises Privacy Concerns Over Police Use of Flock Cameras

2026-03-20
Pasadena Now
Why's our monitor labelling this an incident or hazard?
The automated license plate reader system qualifies as an AI system because it automates data collection and creates a searchable database for law enforcement use. The concerns focus on potential misuse, privacy violations, and indirect access by other agencies, which could plausibly lead to violations of privacy rights and misuse of personal data. Since no actual misuse or harm has been reported, but credible concerns about future misuse exist, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential for harm, not on responses or updates to past incidents.