
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The US startup Friend has postponed the launch of its AI-powered necklace in France and the EU over privacy concerns and potential GDPR violations. The device, which listens to and analyzes conversations, raised fears about data protection, prompting the company to review compliance before marketing it in Europe.[AI generated]
Why is our monitor labelling this an incident or hazard?
The AI system is explicitly described as listening to and analyzing conversations, which involves AI processing of personal data. The concerns raised relate to privacy and data protection under the GDPR, legal rights that protect individuals' personal data. Because the product launch has been suspended to address these concerns before deployment, no direct harm or violation has yet occurred. The event is therefore best classified as an AI Hazard: it could plausibly lead to violations of personal data rights (a form of harm under the framework) if the AI system is deployed without proper safeguards. It is not an AI Incident, because harm has not materialized, nor is it Complementary Information or Unrelated, since the focus is on the AI system's potential to cause harm and the regulatory response.[AI generated]