Legislative Action Against AI-Driven Surveillance Pricing in Grocery Stores

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

New York and New Jersey lawmakers are advancing bills to ban surveillance pricing in grocery stores, the use of AI algorithms to set individualized prices based on personal data. The practice has led to discriminatory pricing, disproportionately impacting vulnerable populations and prompting regulatory scrutiny and consumer protection efforts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article focuses on the legislative push to ban algorithmic pricing based on personal data, which involves AI systems that set prices dynamically and individually. While harms such as exploitative pricing and discrimination are described as occurring in practice, the article does not report a specific incident or harm caused by AI systems that has been legally or officially recognized. Instead, it highlights the potential for such harms and the need for regulation to prevent them. This aligns with the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm, but no concrete incident is documented in the article. The presence of AI systems is explicit, the nature of involvement is use of AI for pricing, and the plausible future harm is exploitative and discriminatory pricing practices. Therefore, the classification is AI Hazard.[AI generated]
AI principles
Fairness, Privacy & data governance

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Economic/Property, Human or fundamental rights

Severity
AI hazard

Business function
Marketing and advertisement

AI system task
Goal-driven organisation


Articles about this incident or hazard

AG James, Dems want to ban prices based on personal algorithms

2026-03-16
Gothamist

NY lawmakers push bills to ban algorithmic pricing

2026-03-16
WRGB
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI-driven algorithmic pricing that could lead to discriminatory pricing and harm to consumers and workers. However, it does not report any realized harm or incident but rather the potential for such harm, prompting legislative action. Therefore, this qualifies as an AI Hazard because the development and use of AI systems for surveillance pricing could plausibly lead to harms such as discrimination and economic harm to communities and workers.

'One Fair Price': How Letitia James, NY pols are looking to stop algorithmic pricing known to inflate prices on what you like to buy | amNewYork

2026-03-16
amNewYork
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI-based algorithmic pricing systems that adjust prices based on personal data, which fits the definition of an AI system. The harms described (unfair price hikes, discriminatory pricing) are recognized harms related to AI use. However, the article does not report a specific incident where harm has already occurred or a near miss; instead, it focuses on legislative proposals to prevent such harms. This aligns with Complementary Information, as it details governance responses and societal efforts to address AI-related harms, rather than reporting a new AI Incident or AI Hazard.

NY Attorney General Letitia James pushes bills to ban 'surveillance pricing' by retailers

2026-03-16
WBNG
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithms analyzing personal data to set different prices, which qualifies as AI system involvement. The harms described (predatory pricing, economic harm to consumers) fall under harm to communities or individuals. Since the bills are proposed to prevent these harms and no actual harm or incident is reported, this is a plausible future harm scenario. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

NY Dems say 'predatory' algorithms should charge more based on private data

2026-03-16
CNYhomepage
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses algorithms that use personal data to set individualized prices, which fits the definition of an AI system. The harms described include discriminatory pricing that disproportionately impacts vulnerable populations, constituting harm to communities and violations of rights. Since these harms are occurring and the legislation is a response to these harms, this is an AI Incident. The article is not merely about potential future harm or a general policy discussion but about ongoing harm caused by AI systems in pricing. Hence, the classification is AI Incident.

NJ Senate panel advances bill targeting surveillance pricing at grocery stores

2026-03-16
Curated - BLOX Digital Content Exchange
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithms (AI systems) to adjust prices for individual shoppers based on personal data, which constitutes use of an AI system. Although no specific harm has yet occurred in New Jersey grocery stores, the practice has been observed in online grocery delivery services and is feared to extend to physical stores. The legislation aims to prevent this potential harm, which includes unfair pricing and consumer fraud. Therefore, this event describes a credible risk of harm from AI use that could plausibly lead to an AI Incident if unregulated, making it an AI Hazard rather than an Incident or Complementary Information.

You could be paying more for groceries than your neighbor. Here's what Jersey lawmakers are doing about it.

2026-03-18
The Philadelphia Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-driven algorithms to adjust prices based on personal data, leading to consumers paying different prices for the same goods without their knowledge. This constitutes a direct harm to consumers (harm to communities and violation of rights). The legislative response aims to ban this practice, confirming the recognition of harm. The AI system's role is pivotal in enabling this discriminatory pricing. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

You could be paying more for groceries than your neighbor. Here's what New Jersey lawmakers are doing about it

2026-03-18
ArcaMax
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (algorithms analyzing personal data to set individualized prices) that have directly led to harm in the form of unfair economic treatment of consumers, which can be considered harm to communities and a violation of consumer rights. The article details ongoing legislative efforts to address this harm, indicating the harm is occurring and recognized. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (unfair pricing) to people.

You could be paying more for groceries than your neighbor. Here's what New Jersey lawmakers are doing about it

2026-03-18
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The article implies the use of AI or algorithmic systems in pricing decisions based on personal data, which can lead to discriminatory pricing practices. However, there is no indication that harm has already occurred or that a specific incident has taken place. Instead, the lawmakers' actions are preventive, aiming to address potential unfairness and privacy concerns before widespread harm occurs. Therefore, this situation represents a plausible risk of harm due to AI-driven pricing practices, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information.

Grocers face state and federal lawmakers' scrutiny over 'surveillance' pricing

2026-03-17
Grocery Dive
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of dynamic pricing algorithms and surveillance technologies used by grocers. The article does not report a realized harm incident but focuses on the potential for these AI systems to cause harm by enabling predatory pricing and privacy violations. The legislative and union responses indicate a credible risk of harm, making this an AI Hazard. It is not Complementary Information because the main focus is not on updates or responses to a past incident but on the emerging risk and regulatory efforts. It is not an AI Incident because no direct or indirect harm has yet materialized according to the article.

New York officials push bill banning surveillance pricing | Fingerlakes1.com

2026-03-17
Fingerlakes1.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of algorithms that analyze consumer data to set individualized prices, which involves AI systems. The harms include violations of consumer rights and potential discriminatory pricing practices that affect fairness and trust in the marketplace. However, the event is about proposed legislation to prevent such harms rather than an actual incident of harm occurring. Therefore, this is a governance response to a potential AI-related harm, making it Complementary Information rather than an AI Incident or AI Hazard.

Critics take aim at companies that charge different prices based on personal data

2026-03-19
Newsday
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for algorithmic pricing based on personal data, which has directly led to harm by charging different prices to consumers unfairly, constituting exploitation and a violation of consumer rights. The harm is realized and ongoing, as evidenced by consumer complaints, legislative responses, and the ending of such programs by companies like Instacart. Therefore, this is an AI Incident due to the direct link between AI system use and harm to consumers.