AI-Powered Necklace Launch Suspended in EU Over Privacy Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US startup Friend has postponed the launch of its AI-powered necklace in France and the wider EU over privacy concerns and potential GDPR violations. The device, which listens to and analyzes nearby conversations, raised data-protection fears, prompting the company to review compliance before marketing it in Europe.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system is explicitly described as listening and analyzing conversations, which involves AI processing. The concerns raised relate to privacy and data protection under GDPR, which are legal rights protecting individuals' personal data. Since the product launch is suspended to address these concerns before deployment, no direct harm or violation has yet occurred. Thus, the event is best classified as an AI Hazard because it plausibly could lead to violations of personal data rights (a form of harm under the framework) if the AI system is deployed without proper safeguards. It is not an AI Incident because harm has not materialized, nor is it Complementary Information or Unrelated as the focus is on the AI system's potential to cause harm and the regulatory response.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Consumer products

Affected stakeholders
Consumers
General public

Harm types
Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard

Here is why this AI necklace, though already available, will not go on sale in Europe right away

2026-04-05
DH.be
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI necklace) but does not describe any incident or hazard involving harm or plausible harm. The delay is a precautionary measure to comply with legal frameworks, not a response to an AI-related harm or risk. Therefore, this is best classified as Complementary Information, providing context about governance and compliance efforts related to AI deployment.
AI start-up "Friend" suspends its launch while it addresses European personal-data requirements

2026-04-05
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as listening and analyzing conversations, which involves AI processing. The concerns raised relate to privacy and data protection under GDPR, which are legal rights protecting individuals' personal data. Since the product launch is suspended to address these concerns before deployment, no direct harm or violation has yet occurred. Thus, the event is best classified as an AI Hazard because it plausibly could lead to violations of personal data rights (a form of harm under the framework) if the AI system is deployed without proper safeguards. It is not an AI Incident because harm has not materialized, nor is it Complementary Information or Unrelated as the focus is on the AI system's potential to cause harm and the regulatory response.
Friend abandons selling its AI-equipped necklace in France and the European Union

2026-04-05
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The AI system (the connected necklace with continuous audio listening powered by AI) is explicitly mentioned and is central to the event. The concerns raised relate to violations of fundamental rights (privacy and data protection) and potential misuse of collected data, which could lead to harm. However, the product has been withdrawn from the EU market pending compliance, and no actual harm or incident is reported as having occurred yet. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the system were used without proper safeguards. The involvement of the data protection authority and the suspension of sales support this classification.
Its ad campaign sparked controversy: the AI necklace meant to become consumers' friend sees its launch postponed in the EU

2026-04-06
BFMTV
Why's our monitor labelling this an incident or hazard?
The product is an AI system as it uses AI (Google's Gemini) to process recorded conversations and interact with users. The controversy and regulatory scrutiny arise from the product's capability to record conversations without consent, which could lead to violations of privacy rights and data protection laws (a form of harm to rights). However, the product has not yet been sold in the EU, and no actual harm or incident has been reported. The company's decision to delay sales to comply with GDPR indicates recognition of potential harm. Thus, the event describes a plausible future risk of harm due to AI use, qualifying it as an AI Hazard.
Friend abandons selling its AI-equipped necklace in France

2026-04-05
20minutes
Why's our monitor labelling this an incident or hazard?
The connected necklace is an AI system capable of continuous audio monitoring, which involves AI processing of environmental data. The suspension of sales and regulatory scrutiny stem from concerns about potential violations of data protection laws and privacy rights, which are human rights. Since no actual harm has occurred yet but there is a credible risk of such harm if the device were to be used or sold without compliance, this situation qualifies as an AI Hazard rather than an AI Incident. The event focuses on plausible future harm and the regulatory response rather than a realized harm.
It was meant to record everything happening around you: start-up Friend abandons selling its AI-equipped necklace in the European Union

2026-04-05
Franceinfo
Why's our monitor labelling this an incident or hazard?
The necklace is an AI system that continuously records and processes audio data, including potentially sensitive information about people around the wearer. The event involves the use of AI and its development and deployment. While no direct harm has yet occurred in the EU (the product is not currently sold there), the potential for violations of privacy rights and data protection laws is high, as noted by regulatory authorities and political figures. The startup's decision to delay sales in the EU due to GDPR compliance issues underscores the plausible risk of harm. Therefore, this event is best classified as an AI Hazard, reflecting the credible potential for harm if the AI system were used without adequate safeguards.
"Friend": sale of a controversial AI necklace that records conversations postponed in the European Union

2026-04-05
Le Parisien
Why's our monitor labelling this an incident or hazard?
The necklace is an AI system that records conversations and uses AI to respond, implicating privacy and data protection rights. The company has not yet sold the product in the EU, and no direct harm is reported there, but the potential for harm through unauthorized recording and data misuse is credible. The event focuses on the postponement to comply with regulations, indicating a recognized risk rather than an incident of realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
AI: Friend's spy necklace that wants to be your friend will have to wait before going on sale in Europe

2026-04-06
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The Friend necklace is an AI system that listens to conversations and transmits data without consent, which constitutes a violation of privacy rights and data protection laws. The article mentions regulatory investigations and public backlash due to these privacy breaches. Even though the product is not yet sold in the EU, the system's deployment and data practices have already caused harm or rights violations. The AI system's role is pivotal in enabling these privacy infringements. Hence, this is an AI Incident involving violations of human rights and legal obligations related to privacy and data protection.
"Friend": sale of the AI necklace suspended in France and Europe

2026-04-05
CNEWS
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the smart necklace with an AI virtual companion) whose use (listening to conversations without consent) raises significant privacy concerns and potential violations of data protection laws (GDPR). However, since the product's sale is currently suspended in Europe and no realized harm is reported, the situation represents a plausible risk of harm rather than an actual incident. Therefore, it qualifies as an AI Hazard due to the credible potential for privacy and legal harms if the product were to be marketed without compliance.
Sale of an AI necklace suspended

2026-04-05
Le Journal de Montreal
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved (the AI-powered necklace using Google's Gemini AI). The event stems from the use of the AI system and its potential to record conversations without consent, which could lead to privacy violations (a breach of data protection laws). However, the article does not report any actual harm or incident occurring, only the suspension of sales to ensure compliance with GDPR. This is a governance and societal response to potential AI-related privacy risks, enhancing understanding of the ecosystem and regulatory environment. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
This controversial AI necklace postpones its arrival on the French market after all

2026-04-05
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The connected necklace is an AI system that listens and analyzes conversations, raising serious privacy and data protection concerns. The event focuses on regulatory and societal pushback preventing the product's launch to avoid potential violations of privacy rights under GDPR. Since no actual harm has occurred but there is a plausible risk of harm if the product were released as is, this qualifies as an AI Hazard. It is not an AI Incident because no realized harm has taken place, nor is it Complementary Information or Unrelated, as the AI system and its potential for harm are central to the event.
Friend, the AI necklace everyone is talking about, but suspended in France

2026-04-04
LesEchos.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Friend necklace with listening and interactive capabilities). However, the article does not report any realized harm or incident caused by the AI system. Instead, it highlights regulatory concerns and potential legal violations related to data privacy and consent under GDPR, which could plausibly lead to harm if the product were marketed without compliance. Since no harm has occurred yet and the product launch is suspended to avoid such harm, this situation fits the definition of an AI Hazard. It is not Complementary Information because the article's main focus is on the potential risks and regulatory challenges, not on updates or responses to a past incident. It is not an AI Incident because no direct or indirect harm has materialized.
Sale of an AI necklace that records conversations postponed in the European Union

2026-04-05
Le Soir
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it records conversations and uses AI to respond, implicating privacy and data protection rights. The event stems from the use and potential misuse of the AI system. Since the product is not yet marketed in the EU, no direct or indirect harm has occurred there yet. The concerns about GDPR compliance and privacy violations indicate a plausible risk of harm if the product were sold without safeguards. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to violations of rights if commercialization proceeds without compliance. It is not an AI Incident because no harm has materialized in the EU, nor is it Complementary Information or Unrelated.
Sale of an AI necklace that records conversations suspended in the EU

2026-04-05
DH.be
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as an AI-powered necklace that records conversations without consent, implicating privacy and data protection rights. The suspension of marketing is due to concerns about compliance with the GDPR, which protects fundamental rights. Since no actual harm or violation has been reported yet, but the potential for harm exists if the product were marketed and used without consent, this fits the definition of an AI Hazard. The event is not an AI Incident because no realized harm has occurred, nor is it Complementary Information or Unrelated.
Release of the controversial AI necklace suspended in the European Union

2026-04-06
L'Obs
Why's our monitor labelling this an incident or hazard?
The necklace is an AI system that listens continuously and uses AI to respond and interact, which is explicitly mentioned. The event concerns the postponement of its sale in the EU due to potential non-compliance with GDPR, indicating a risk of privacy violations (a breach of fundamental rights). No actual harm or incident has occurred yet, but the potential for harm is credible and recognized by authorities (e.g., CNIL investigation). Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
The party is over for "Friend", the start-up that caused controversy in the Paris metro

2026-04-06
Presse-citron
Why's our monitor labelling this an incident or hazard?
The AI system is the AI-powered companion necklace designed to replace human friends. Its development and intended use involve processing personal data, which raises privacy concerns. The EU's refusal to allow the product on the market due to GDPR non-compliance indicates a credible risk of harm to privacy rights if the product were sold. No actual harm or incident is reported; the product was not marketed in the EU, and no direct injury or rights violation has occurred there. The public backlash and advertising defacement are reactions to the product but do not constitute harm caused by the AI system itself. Hence, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use if unregulated or non-compliant with data protection laws.
Controversial AI necklace Friend sees its release postponed in the EU

2026-04-06
Challenges
Why's our monitor labelling this an incident or hazard?
The necklace is an AI system as it uses AI (Google's Gemini) to analyze conversations and generate responses. The postponement is due to concerns about compliance with privacy laws (GDPR), indicating potential for violations of fundamental rights if launched without proper safeguards. Since no actual harm has occurred but there is a credible risk of privacy violations, this qualifies as an AI Hazard rather than an Incident. The involvement is in the use phase, with potential misuse or non-compliance leading to harm.
Sale of a controversial AI necklace postponed in the EU

2026-04-05
L'essentiel
Why's our monitor labelling this an incident or hazard?
The AI system (the AI-powered necklace with conversational listening capabilities) is explicitly mentioned and is central to the event. The postponement of sales is due to concerns about compliance with GDPR, indicating potential legal and rights-related harms if the product were marketed without safeguards. No actual harm or incident has been reported in the EU yet, only the potential for harm. Thus, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the potential risks and regulatory response rather than updates on past incidents or governance responses unrelated to a specific harm. It is not unrelated because the AI system and its potential impact on privacy rights are clearly central.
Data protection: AI necklace that records conversations withdrawn from sale in the European Union

2026-04-05
Tribune de Genève
Why's our monitor labelling this an incident or hazard?
The AI system (the necklace with AI-enabled listening capabilities) is explicitly mentioned. Its use (recording conversations without consent) implicates potential violations of data protection laws and privacy rights, which fall under violations of human rights or legal obligations. Since the product is withdrawn before any reported harm or incident, the event represents a plausible risk of harm rather than an actual incident. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
"I will always be up for having a coffee with you": the necklace that uses AI will not go on sale in Europe just yet

2026-04-06
parismatch.be
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly involved in the product's operation. The postponement is due to concerns about compliance with data protection laws, indicating a plausible risk of privacy-related harm (a violation of rights) if launched prematurely. No realized harm or incident is described, only potential future harm. Therefore, this event fits the definition of an AI Hazard, as the development and intended use of the AI system could plausibly lead to harm if not properly regulated.
"Friend": the AI-boosted necklace not going on sale in France after all (for now)

2026-04-05
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The AI system (the pendant using AI to listen and respond) is explicitly mentioned and is central to the event. The event stems from the intended use of the AI system and concerns about its compliance with privacy laws (GDPR). Although the product has been sold in the US, the article focuses on the EU context where sales are postponed due to privacy concerns. No direct or indirect harm has yet occurred in the EU, but the potential for privacy violations and breaches of data protection laws is credible and significant. Hence, this is an AI Hazard, as the AI system's use could plausibly lead to violations of fundamental rights (privacy) if marketed without proper safeguards.