Flock AI Surveillance Data Leak Exposes Millions, Triggers Privacy Backlash

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Police departments across the U.S. inadvertently released unredacted Flock Safety AI surveillance logs, exposing over 2.3 million license plates and sensitive law enforcement queries. The leak led to privacy violations, public outcry, and Santa Cruz terminating its contract with Flock due to concerns over unauthorized data access and misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

Flock is an AI-enabled surveillance system that processes license plate data to support law enforcement. The incident arises from the use and mishandling of data generated by this AI system, leading to the exposure of millions of surveillance targets and active investigations. This exposure constitutes a violation of privacy rights and harms communities by revealing sensitive information. The AI system's role is pivotal as it aggregates and enables searching of this data, and the failure to properly redact information during public records releases directly led to the harm. Hence, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security
Accountability
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public
Business

Harm types
Human or fundamental rights
Reputational

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

Police Unmask Millions of Surveillance Targets Because of Flock Redaction Error

2026-01-13
democraticunderground.com
Why's our monitor labelling this an incident or hazard?
Flock is an AI-enabled surveillance system that processes license plate data to support law enforcement. The incident arises from the use and mishandling of data generated by this AI system, leading to the exposure of millions of surveillance targets and active investigations. This exposure constitutes a violation of privacy rights and harms communities by revealing sensitive information. The AI system's role is pivotal as it aggregates and enables searching of this data, and the failure to properly redact information during public records releases directly led to the harm. Hence, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Flock threatens website hosting license plate data accidentally leaked by cops

2026-01-13
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Flock Safety's license plate reading cameras—that collects and processes surveillance data. The accidental leak of audit logs containing millions of license plates and search queries directly leads to harm in the form of privacy violations and potential misuse of sensitive information. The AI system's development and use are central to the incident, as the data collected by the AI system was improperly redacted and released, enabling public access to sensitive surveillance information. This meets the criteria for an AI Incident because the AI system's use has directly led to harm related to violations of rights and harm to communities through surveillance exposure.

Flock threatens website hosting license plate data accidentally leaked by cops - Muvi TV

2026-01-13
Muvi TV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (license plate reading cameras with data processing capabilities) whose use by law enforcement has led to the accidental release of sensitive personal data. This exposure constitutes a violation of privacy rights and harms communities by revealing surveillance targets and sensitive law enforcement queries. The AI system's development and use directly contributed to this harm. Although the leak was accidental, the AI system's role in collecting and managing this data is pivotal. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Santa Cruz votes to terminate its contract with Flock Safety

2026-01-14
Santa Cruz Sentinel
Why's our monitor labelling this an incident or hazard?
The automated license plate reader system provided by Flock Safety is an AI system as it automatically captures and processes license plate data using AI technologies. The controversy and opposition arose because the system's data was accessed by out-of-state agencies without clear consent, raising privacy and human rights concerns. The City Council's decision to terminate the contract reflects recognition of these harms. The event involves the use of an AI system whose deployment has directly led to violations of privacy rights and community trust, which are harms under the AI Incident definition. The concerns about data misuse and surveillance constitute violations of human rights and harm to communities. Hence, this event is best classified as an AI Incident.

Bridgeport Police Department's drone proposal faces opposition from some residents

2026-01-14
Connecticut Public
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous drones with surveillance capabilities) whose deployment is under consideration. The concerns raised relate to plausible future harms such as privacy violations and misuse of data by law enforcement or federal agencies, which could lead to violations of human rights. Since no actual harm has been reported yet and the event is about the proposal and public opposition, this constitutes an AI Hazard rather than an AI Incident. The event highlights credible risks that could plausibly lead to harm if the drones are deployed without adequate safeguards.

Federal judge casts doubts on challenge to Norfolk's Flock Camera system

2026-01-15
Daily Press
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Flock Safety's license plate recognition cameras) used by law enforcement, which is being legally challenged for potential constitutional rights violations. However, the article does not report any actual harm or incident caused by the AI system; rather, it discusses the legal challenge, judicial skepticism, and regulatory measures. This fits the definition of Complementary Information, as it provides context, legal proceedings, and governance responses related to AI surveillance technology without describing a new AI Incident or AI Hazard. The focus is on the evolving understanding and societal/legal response to the AI system's use, not on a realized or imminent harm.

Thornton town hall brings community together to talk use of Flock cameras across city

2026-01-15
Denver 7 Colorado News (KMGH)
Why's our monitor labelling this an incident or hazard?
The Flock cameras are AI-enabled systems (automated license plate readers) used by police, so an AI system is involved. However, the article only reports a community meeting discussing the technology's use and concerns, without any reported harm or incident caused by the AI system. There is no indication of realized harm or a credible imminent risk of harm described. The main focus is on community engagement and governance discussion, which fits the definition of Complementary Information rather than an Incident or Hazard.

Judge pushes back on arguments challenging Norfolk's Flock camera system

2026-01-15
WAVY.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI-enabled system (Flock Safety cameras with license plate recognition and vehicle attribute detection) that collects and processes data to aid law enforcement. The legal challenge concerns potential violations of constitutional rights due to the system's surveillance capabilities. However, no actual harm has been reported as having occurred yet; the case is still in litigation and the judge's ruling is pending. Therefore, this situation represents a plausible risk of harm related to the AI system's use, but no realized harm or incident has been established. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving rights violations if the court finds the use unconstitutional or if misuse occurs.

Privacy fears push Flock cameras into the spotlight in Arizona

2026-01-16
https://www.azfamily.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (automatic license plate readers with AI capabilities) and discusses their use and potential for privacy harm. However, no direct or indirect harm has been reported as having occurred yet. The focus is on legislative proposals to regulate these systems to prevent misuse and protect privacy, which constitutes a plausible risk of harm in the future. Therefore, this event fits the definition of an AI Hazard, as it concerns circumstances where AI system use could plausibly lead to harm (privacy violations, mass surveillance) if unregulated, but no incident has yet materialized.

OPINION: Get the Flock off of campus

2026-01-16
The Louisville Cardinal
Why's our monitor labelling this an incident or hazard?
Flock cameras employ AI for automatic license plate recognition and tracking, which is an AI system. The article documents actual misuse by law enforcement agencies, including racial profiling and targeting of protestors, which are violations of human rights and harm to communities. The security flaws leading to potential unauthorized access further exacerbate these harms. Since these harms have occurred and are ongoing, this qualifies as an AI Incident rather than a hazard or complementary information.

Yakima City Council hears updates and concerns about Flock cameras

2026-01-15
Curated - BLOX Digital Content Exchange
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Flock Safety's license plate recognition cameras) used by law enforcement. However, it does not report any realized harm such as injury, rights violations, or property/community harm directly caused by the AI system. Instead, it presents an update on the system's use, community concerns about privacy and data access, and ongoing policy considerations. This fits the definition of Complementary Information, as it provides supporting context and governance-related updates about an AI system and its societal implications without describing a new AI Incident or AI Hazard.

Yakima City Council hears updates and concerns about Flock cameras

2026-01-15
Yakima Herald-Republic
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Flock Safety's cameras and software that use AI for license plate recognition and vehicle identification. The system is actively used by law enforcement, which constitutes AI system use. However, the article does not describe any direct or indirect harm caused by the AI system, nor does it report a specific event where harm was realized or a near miss occurred. Instead, it details community concerns, legal debates, and policy discussions about privacy, data access, and potential misuse, which are ongoing societal and governance responses. This fits the definition of Complementary Information, as it enhances understanding of the AI system's impact and the ecosystem's response without describing a new AI Incident or AI Hazard.

Washington Scrambles to Regulate License-Plate Cameras That Could Aid Stalkers

2026-01-15
US News & World Report
Why's our monitor labelling this an incident or hazard?
The Flock Safety cameras are AI systems that automatically capture and analyze license plate and vehicle data. The article details actual harms caused by misuse of these AI systems, including stalking by law enforcement officers and potential privacy violations through public records access. These harms fall under violations of human rights and harm to individuals. The article also discusses legislative responses, but the primary focus is on realized harms from the AI system's use, not just potential future risks or general policy discussion. Hence, this is an AI Incident rather than a hazard or complementary information.

Washington scrambles to regulate license-plate cameras that could aid stalkers

2026-01-15
whas11.com
Why's our monitor labelling this an incident or hazard?
The article discusses the deployment and capabilities of AI-powered license plate reader cameras and the privacy concerns they raise, but it does not describe any realized harm or incident resulting from their use. The concerns about aiding stalkers and privacy invasion represent plausible future harms, but no specific event or harm is reported. Therefore, this situation fits the definition of an AI Hazard, as the technology's use could plausibly lead to harm, but no harm has yet been documented in this report.

Arizona License Plate Reader Bill

2026-01-15
townhall.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-powered ALPR cameras) used by law enforcement. The bill's provisions enable extensive surveillance and data collection on the public while blocking public scrutiny, which plausibly could lead to violations of human rights, specifically privacy and freedom from unwarranted surveillance. Although no specific incident of harm is reported, the described circumstances create a credible risk of future harm due to potential misuse or abuse of the surveillance data without accountability. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Proposed state legislation could affect Snohomish County Flock cameras

2026-01-15
HeraldNet.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Flock Safety cameras with AI capabilities analyzing vehicle images) and their use by law enforcement. However, the article centers on a legislative proposal to regulate these systems to mitigate privacy risks and misuse, rather than reporting an actual incident or harm caused by the AI systems. There is no direct or indirect harm described as having occurred; rather, the article highlights potential future risks and the policy response to them. Therefore, this qualifies as Complementary Information, providing governance and societal response context to AI system use and associated concerns.

Santa Cruz votes to terminate its contract with Flock Safety

2026-01-15
The Mercury News
Why's our monitor labelling this an incident or hazard?
The automated license plate reader system provided by Flock Safety is an AI system as it involves automated data capture and processing with sharing capabilities. The article reports that the system's data was accessed by out-of-state agencies without proper authorization, violating state law and community trust and breaching obligations intended to protect fundamental rights (privacy and sanctuary city protections). The community's fears and opposition, along with the council's decision to terminate the contract, reflect realized harm related to human rights and harm to the community. The police department's discontinuation of the system and removal of cameras further confirm the incident's materialization. Hence, this is an AI Incident rather than a hazard or complementary information.

Santa Cruz votes to terminate its contract with Flock Safety

2026-01-15
East Bay Times
Why's our monitor labelling this an incident or hazard?
The automated license plate reader system provided by Flock Safety is an AI system as it involves automated image capture and data processing to identify license plates. The article reports that the system's data was accessed by out-of-state agencies without proper authorization, violating privacy and potentially leading to misuse against community members, including immigrants. This constitutes a violation of rights and harm to the community, fulfilling the criteria for an AI Incident. The decision to terminate the contract is a response to these harms. The involvement of the AI system's use directly led to realized harm, not just potential harm, so it is not merely a hazard or complementary information.

Will Bridgeport City Council revive rejected Flock contract for police drones?

2026-01-15
Connecticut Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of drones intended for emergency response, which likely incorporate AI for navigation and operational tasks. However, the article does not describe any actual or imminent harm resulting from their use or malfunction. The focus is on the political decision-making process, community concerns, and potential future deployment. Since no harm has occurred and no credible imminent risk is described, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides complementary information about governance and societal responses to AI technology in public safety.

KyCIR Report: U of L, UK use Flock license plate readers to monitor drivers. Here's where their cameras are.

2026-01-16
The Louisville Cardinal
Why's our monitor labelling this an incident or hazard?
The license plate readers are AI systems that collect and analyze vehicle data. The article reports on past misuse of these systems for immigration-related searches, which constitutes a violation of rights and thus an AI Incident. However, the main focus of the article is on the investigation outcomes, disciplinary measures, policy changes, and transparency efforts following that incident. There is no new harm occurring or imminent risk described. The article also covers the legal and public debate about camera location disclosure, which is a governance and societal response. Hence, the article serves as Complementary Information, updating on responses to a previously reported AI Incident and ongoing governance issues, rather than reporting a new incident or hazard.

Flock CEO Goes Ballistic on Critics as More Americans Question Mass Driver Surveillance | ACLU

2026-01-16
American Civil Liberties Union
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Flock's automated license plate reader network—that processes data to surveil individuals. The concerns raised relate to the potential misuse of this AI system leading to violations of privacy and human rights, which are harms under the AI Incident definition. However, the article does not report a specific realized harm or incident but rather ongoing societal debate and potential risks. Thus, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm. The article also includes responses from officials and the company's defensive stance, but these do not constitute a new incident or complementary information about mitigation. Hence, the classification is AI Hazard.

Drivers In This City Thought These Were Just Traffic Cameras. They Were Wrong | Carscoops

2026-01-16
Carscoops
Why's our monitor labelling this an incident or hazard?
The automated license plate readers are AI systems that analyze vehicle data and location to create searchable histories. Their use by law enforcement without sufficient transparency or oversight has directly led to violations of privacy and potential breaches of rights, fulfilling the criteria for harm under human rights violations. The documented misuse by officers and security vulnerabilities further support the classification as an AI Incident. The event is not merely a potential risk but describes ongoing data collection and misuse, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system's use is central to the described harms.

RI police cite crime-solving wins in plate-reading camera expansion

2026-01-16
The Providence Journal
Why's our monitor labelling this an incident or hazard?
The license plate reading system is an AI system used by police to identify vehicles linked to crimes. Its use has directly contributed to solving crimes, which is a positive impact rather than harm. There is mention of privacy concerns and opposition by civil liberties groups, but no realized harm or violation is reported. The article mainly discusses the deployment, approvals, and benefits of the system, with some societal debate. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI system deployment and societal responses without describing an AI Incident or AI Hazard.

York-Poquoson Sheriff's Office now using 'Flock Transparency Portal' to increase transparency around Flock camera usage

2026-01-16
WAVY.com
Why's our monitor labelling this an incident or hazard?
The Flock cameras are AI systems performing license plate recognition and real-time alerts, so AI system involvement is clear. However, the article does not report any actual harm or incident resulting from their use, only public concerns and legal challenges. The transparency portal is a response to these concerns, aiming to increase public understanding and trust. This fits the definition of Complementary Information, as it provides context and governance-related updates without describing a new incident or hazard.

Washington scrambles to regulate license-plate cameras that could aid stalkers

2026-01-17
The Columbian
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (automatic license plate readers with advanced recognition capabilities) and discusses their use and potential misuse. However, no direct or indirect harm has been reported or described as having occurred. The concerns raised are about plausible future misuse and privacy risks, which could lead to harm if not regulated properly. Therefore, this qualifies as an AI Hazard because the technology's use could plausibly lead to harms such as stalking or privacy violations, but no incident has yet materialized.