AI Weapon Scanners in NYC Subway Yield False Positives



AI-powered weapon scanners deployed in New York City's subway system under Mayor Eric Adams produced 118 false positives and detected no firearms, flagging only 12 knives. The system, manufactured by Evolv, raised privacy concerns and led to unwarranted searches, prompting criticism from civil liberties groups and legal experts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Evolv AI scanning pilot was actively in use and its malfunction led to real-world harm—false alarms causing invasive searches and privacy violations—making this a concrete AI Incident under the framework.[AI generated]
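
The per-article rationales below all apply the same triage rubric: was an AI system in use, did harm actually occur, and if not, is harm plausible? As a rough illustration only, here is a minimal sketch of that decision logic; the function name, parameters, and category strings are hypothetical and not the monitor's actual implementation:

```python
# Hypothetical sketch of the incident/hazard triage logic the monitor's
# rationales describe; names and structure are illustrative only.

def classify_event(ai_system_in_use: bool,
                   harm_realized: bool,
                   harm_plausible: bool) -> str:
    """Triage an event under a simplified reading of the framework.

    - AI Incident: an AI system was in use and harm actually occurred
      (e.g., unwarranted searches triggered by false positives).
    - AI Hazard: the system was in use and harm is plausible but has
      not yet materialized.
    - Complementary Information: reporting on results, reactions, or
      context without realized or clearly plausible harm.
    """
    if not ai_system_in_use:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"
    if harm_plausible:
        return "AI Hazard"
    return "Complementary Information"

# The Evolv pilot: scanners were deployed and false alarms led to
# invasive searches, so the event triages as an AI Incident.
print(classify_event(ai_system_in_use=True,
                     harm_realized=True,
                     harm_plausible=True))  # -> "AI Incident"
```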
AI principles
Accountability, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy, Human wellbeing

Industries
Government, security, and defence; Mobility and autonomous vehicles

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest

Severity
AI incident

Business function:
Monitoring and quality control

AI system task:
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard


NYC Attempt to Scan Subway for Weapons With AI Fails Miserably as System Flooded by False Positives While Detecting Zero Actual Guns

2024-10-31
Futurism

Law Enforcement Today

2024-11-01
Law Enforcement Today
Why's our monitor labelling this an incident or hazard?
The weapons scanner is an AI system whose malfunction (high false positive rate) directly caused harm—unwarranted invasive searches of 118 passengers—constituting a breach of human rights (privacy) under the AI Incident definition.

NYC Weapon Sensing Tech Fails, Investigations into Misconduct

2024-11-01
AmmoLand.com
Why's our monitor labelling this an incident or hazard?
The AI system (weapon detection) was explicitly used and malfunctioned, producing false positives and failing to detect concealed weapons, which led to unnecessary searches and potential violations of constitutional rights (Fourth Amendment). This constitutes a violation of human rights and a breach of legal protections, fitting the definition of an AI Incident. The investigations into the company's misconduct and the city's handling of the contract further contextualize the incident but do not change the classification. The harm is realized, not just potential, so this is not merely a hazard or complementary information.

Gun Detection Tech The Gun Detection Tech Firm Said Wouldn't Work In NYC Subways Doesn't Work In NYC Subways

2024-10-30
Techdirt
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Evolv's AI-powered gun detection scanners) used in a public safety context. The system's deployment and malfunction (high false positive rate and failure to detect guns) directly led to ineffective security screening, which can harm community safety and trust, a form of harm to communities. The CEO's admission that the technology is not suited for subway environments further confirms the malfunction aspect. The event meets the criteria for an AI Incident as the AI system's use has directly led to harm (ineffective security and potential public safety risks).

NYC's Subway AI Weapons Scanners Fail to Find a Single Gun

2024-10-30
The Truth About Guns
Why's our monitor labelling this an incident or hazard?
The AI system (Evolv scanners) was actively used, but the pilot resulted only in false positives without any detected firearms or harm. The article discusses the system's reliability issues, public debate, and legal scrutiny, which are responses and contextual information rather than direct or indirect harm caused by the AI system. No injury, rights violation, or other harms occurred, and no plausible future harm is clearly indicated. Hence, the event does not meet the criteria for AI Incident or AI Hazard but fits the definition of Complementary Information.

Metro is testing out an AI weapons scanner that faced criticism in NYC

2024-10-30
LAist
Why's our monitor labelling this an incident or hazard?
The AI system (weapons scanners using AI) is explicitly mentioned and is being used in a real-world setting. The event involves the use of AI systems for security screening, which could plausibly lead to harm such as wrongful police interactions or rights violations due to false positives. However, the article does not report any actual injury, rights violation, or other harm occurring so far. The concerns and criticisms are about potential harms and effectiveness, making this a plausible risk rather than a realized incident. Therefore, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and criticisms of the AI system's deployment, not on responses or ecosystem context. It is not unrelated because AI systems are central to the event.

Metro tests AI scanners

2024-10-30
LAist
Why's our monitor labelling this an incident or hazard?
The AI system (weapons detection scanners using AI) is explicitly mentioned and is in use. The event involves the use of the AI system in a real-world setting (transit system). While there are many false positives that could plausibly lead to harm (e.g., confrontations with police, rights violations), the article does not report any actual injuries, rights violations, or other harms occurring so far. The concerns and criticisms suggest potential future harms, making this an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the pilot's outcomes and potential risks, not just updates or responses to past incidents.

AI Weapons Scanners in NYC Subway Found Zero Guns in Month Test

2024-10-30
Insurance Journal
Why's our monitor labelling this an incident or hazard?
The AI-powered weapons scanners are clearly AI systems involved in the event. The event concerns their use and performance (including false positives) but does not report any direct or indirect harm resulting from their deployment. There is no indication that the AI system caused injury, rights violations, or other harms, nor that it plausibly could have led to such harms in this pilot. The event mainly provides results of the pilot test and public/legal reactions, which fits the definition of Complementary Information. Hence, the classification is Complementary Information.