Flock Safety AI Cameras Exposed, Leading to Widespread Privacy Breaches


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Flock Safety left at least 60 of its AI-powered Condor PTZ surveillance cameras unsecured and exposed to the public internet across the U.S. This allowed anyone to access live and archived footage, including images of children, and control the cameras, resulting in significant privacy violations and unauthorized surveillance.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI-enabled system used for surveillance that directly aided police in locating a suspect, which is a clear example of AI system use leading to a significant public safety outcome. While the system's use has raised privacy and human rights concerns, these concerns are ongoing debates rather than realized harms in this specific case. The AI system's role in the investigation is direct and pivotal. Hence, this qualifies as an AI Incident due to the direct involvement of AI in a law enforcement context with implications for human rights and privacy, which are fundamental rights. The article also discusses regulatory and ethical considerations, but the primary focus is on the AI system's use and its impact, not just complementary information or general AI news.[AI generated]
AI principles
Privacy & data governance · Robustness & digital security · Respect of human rights · Accountability

Industries
Digital security

Affected stakeholders
Children · General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


The CEO of Flock downloads on his surveillance cameras

2025-12-22
POLITICO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-enabled system used for surveillance that directly aided police in locating a suspect, which is a clear example of AI system use leading to a significant public safety outcome. While the system's use has raised privacy and human rights concerns, these concerns are ongoing debates rather than realized harms in this specific case. The AI system's role in the investigation is direct and pivotal. Hence, this qualifies as an AI Incident due to the direct involvement of AI in a law enforcement context with implications for human rights and privacy, which are fundamental rights. The article also discusses regulatory and ethical considerations, but the primary focus is on the AI system's use and its impact, not just complementary information or general AI news.

Flock Exposed Its AI-Powered Cameras to the Internet. We Tracked Ourselves.

2025-12-22
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI-enabled Condor PTZ cameras) used for tracking people. The misuse or malfunction here is the failure to secure these AI systems, leading to unauthorized access to live streams and recorded footage. This has directly led to harm by violating individuals' privacy rights and exposing them to surveillance without consent, which is a breach of fundamental rights. The harm is realized, not just potential, as unauthorized parties could watch and download footage of people in public and semi-public spaces. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Flock Safety AI Cameras Exposed: Privacy Breaches and Surveillance Fears

2025-12-22
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Flock Safety's AI cameras with capabilities such as object and person identification and tracking). The misuse and malfunction (security misconfiguration) of these AI systems directly led to privacy breaches and unauthorized surveillance, which constitute violations of human rights and harm to communities. The exposure of live feeds and control interfaces to the public internet is a clear failure in the AI system's deployment and security, resulting in realized harm. The article also details real-world implications, including legal challenges and public backlash, confirming that harm has occurred rather than being merely potential. Hence, the classification as an AI Incident is appropriate.

Flock camera captured kids on a playground. A security failure exposed them online

2025-12-22
Muvi Television
Why's our monitor labelling this an incident or hazard?
The event involves AI-powered surveillance cameras that automatically track people, indicating AI system involvement. The security failure led to the exposure of live and archived footage, including images of unattended children, which directly harms privacy and potentially safety, fulfilling the criteria for harm to persons and communities. The exposure was real and ongoing at the time of reporting, not just a potential risk, thus constituting an AI Incident rather than a hazard or complementary information. The company's response does not negate the fact that harm occurred due to the AI system's use and security failure.

Flock Exposed Its AI-Powered Cameras to the Internet. We Tracked Ourselves

2025-12-22
404 Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI-enabled Condor PTZ cameras with people-tracking capabilities). The misuse or malfunction here is the exposure of live streams and control panels to the open internet without password protection, allowing unauthorized access to sensitive surveillance footage. This has directly led to violations of privacy and human rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as people are being watched and tracked without consent, including vulnerable individuals such as children at playgrounds.

Flock camera captured kids on a playground. A security failure exposed them online

2025-12-22
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI-powered surveillance cameras with automatic tracking features). The security failure allowed unauthorized access to live and archived footage, directly exposing individuals, including children, to privacy violations and potential safety risks. This exposure constitutes harm to individuals' rights and safety, fulfilling the criteria for an AI Incident. The AI system's malfunction (misconfiguration and lack of encryption) directly led to this harm. Although the company claims to have remedied the issue, the harm from the exposure has already occurred.

Dozens of Flock AI camera feeds were just out there

2025-12-23
The Verge
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as AI-powered surveillance cameras with automatic tracking capabilities. The event describes a security misconfiguration that allowed unauthorized access to live feeds and control panels, directly leading to privacy violations and potential breaches of rights. This fits the definition of an AI Incident because the AI system's malfunction (misconfiguration) directly led to harm (privacy violations and potential human rights breaches). The harm is realized, not just potential, as unauthorized viewing and control occurred.

Douglas County's Flock camera compromised as company leaves it exposed on internet

2025-12-24
9NEWS
Why's our monitor labelling this an incident or hazard?
The incident directly involves an AI system (the AI-powered Flock camera) whose misuse (insecure exposure of livestream and recorded footage) has led to a violation of privacy rights, a form of harm to individuals and communities. The exposure of sensitive surveillance data without consent or protection is a breach of obligations under applicable laws protecting fundamental rights. Therefore, this qualifies as an AI Incident because the AI system's use and the resulting security failure have directly led to harm.

Answer woman: Does APD share license plate camera data with ICE?

2025-12-24
The Asheville Citizen Times
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Flock Safety's ALPR) used by law enforcement, which fits the definition of an AI system due to its automated license plate reading and data sharing capabilities. However, the article does not report any realized harm or incident caused by the AI system's development, use, or malfunction. Instead, it addresses public concerns, clarifies data sharing policies, and describes governance and transparency measures. There is no direct or indirect harm reported, nor a plausible future harm scenario presented. The focus is on explaining the system's operation and data sharing practices, making it a case of Complementary Information rather than an AI Incident or AI Hazard.

Mass Surveillance & Immigration Laws: Hidden Double Standard

2025-12-26
Shift Frequency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (automated license plate reader platforms) that continuously collects and analyzes data on individuals' movements, which is a clear AI system under the definitions. The harms described include violations of privacy rights, chilling effects on lawful behavior, and misuse of surveillance data, which constitute violations of human rights and harm to communities. The harms are realized and ongoing, not merely potential. The article documents direct consequences of the AI system's deployment and use, including data sharing with immigration enforcement leading to policy backlash and suspension of systems. Hence, this is an AI Incident rather than a hazard or complementary information.

As number of license-plate readers surge, more Hoosiers are pushing back

2025-12-26
CNHI News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—automated license-plate readers using AI for vehicle recognition and tracking. The use of these systems has directly aided law enforcement in solving crimes, which is a positive impact. However, the article does not report any specific realized harm such as unlawful surveillance, data misuse, or rights violations that have already occurred due to the AI system. Instead, it focuses on public concerns, advocacy efforts, legal challenges, and calls for regulation to prevent potential future harms related to privacy and government overreach. This fits the definition of Complementary Information, as it provides supporting data and societal/governance responses related to AI systems and their impacts, without describing a new AI Incident or AI Hazard.

Metro security camera breach included Cedar Rapids

2025-12-26
The Mighty 1630 KCJJ
Why's our monitor labelling this an incident or hazard?
The event describes a security breach involving AI-enabled surveillance cameras, where unauthorized access to live and recorded footage occurred. The AI system's use in surveillance and video feed management is central to the incident. The breach led to a violation of privacy rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's misuse or malfunction (security vulnerability).