Cox Media Group's AI Software Spies on User Conversations for Targeted Ads

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cox Media Group admitted using AI-powered 'Active Listening' software to eavesdrop on smartphone conversations, targeting ads based on captured data. This practice, involving major clients like Meta, Google, and Amazon, raises significant privacy concerns due to unauthorized surveillance and data collection without user consent.[AI generated]

Why's our monitor labelling this an incident or hazard?

CMG’s Active-Listening software is an AI system in use (not just a theoretical risk) that allegedly records and analyzes sensitive voice data, without clear user consent, in order to deliver targeted ads. This directly breaches users’ privacy, a fundamental human right, qualifying as an AI Incident under violations of human rights/privacy obligations.[AI generated]
AI principles
Privacy & data governance; Transparency & explainability; Respect of human rights; Accountability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights; Reputational; Psychological

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Recognition/object detection; Organisation/recommenders


Articles about this incident or hazard

Leak from Advertising Giant Suggests Your Phone Really Is Spying on You

2024-09-03
Breitbart
Why's our monitor labelling this an incident or hazard?
CMG’s Active-Listening software is an AI system in use (not just a theoretical risk) that allegedly records and analyzes sensitive voice data, without clear user consent, in order to deliver targeted ads. This directly breaches users’ privacy, a fundamental human right, qualifying as an AI Incident under violations of human rights/privacy obligations.

Shock leak: Facebook, Google 'ARE listening into your conversations'

2024-09-02
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event describes the active use of an AI system to record and analyze private conversations without explicit consent, directly harming user privacy—a fundamental human right. The AI’s development and use have led to real, ongoing privacy violations by Facebook, Google, Amazon, and other purported clients, fitting the definition of an AI Incident.

Phone listening to target ads - suggests company leak

2024-09-03
Inquirer
Why's our monitor labelling this an incident or hazard?
The article does not document concrete deployed harms but uncovers an AI system whose design and leaked marketing materials indicate it could surreptitiously listen to private speech and enable intrusive ad targeting. The harm (unauthorized data collection and privacy breaches) is not yet confirmed in operation but is a credible and serious potential outcome. Therefore, it represents an AI Hazard.

Marketing firm used by Facebook, Google spies on you using your...

2024-09-03
New York Post
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (CMG’s Active Listening software) that is deployed in operation to capture real-time voice data from users’ devices and pair it with behavioral data for ad targeting, resulting in unauthorized surveillance and infringement of users’ privacy rights. This is a realized harm caused by the AI system’s use, fitting the definition of an AI Incident.

Media Giant CMG Bragged About Eavesdropping On Phone, Laptop Or 'Smart Home' Microphones

2024-09-03
ZeroHedge
Why's our monitor labelling this an incident or hazard?
This is an AI Incident because the development and use of an AI system has directly led to a violation of individuals’ privacy—a core human right—by secretly capturing and analyzing real-time speech for commercial purposes. The harm has already occurred, with users being unknowingly monitored and served ads based on their private conversations.

Instagram listens to every user - media

2024-09-03
Tengrinews.kz
Why's our monitor labelling this an incident or hazard?
The article describes deployment of an AI system that actively listens to users via smartphone microphones and uses those recordings to generate targeted advertising. This constitutes a direct violation of users’ privacy—a fundamental human right—caused by the development and use of the AI system. Therefore, it qualifies as an AI Incident under the framework.

Facebook Listens to Your Microphone Confirms Social Media Partner Company

2024-09-04
ProPakistani
Why's our monitor labelling this an incident or hazard?
The reported software employs AI to analyze users’ microphone inputs in real-time and pair voice data with behavioral profiles for ad targeting. This constitutes a direct violation of personal privacy and data protection rights and is an actual, not hypothetical, misuse of AI technology. Therefore it meets the criteria for an AI Incident.

Smartphones spying on Americans for major company

2024-09-05
American Military News
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly used (“Active Listening” software leveraging AI to capture real-time intent data). The firm’s covert recording and analysis of personal conversations without consent has directly harmed users’ privacy and violates fundamental rights. This is a realized harm from the AI system’s use, so it qualifies as an AI Incident.

Facebook and Google partner admits to eavesdropping through smartphone microphones for targeted ads

2024-09-04
OpIndia
Why's our monitor labelling this an incident or hazard?
The software’s use of AI to listen to private conversations and target ads constitutes a direct violation of users’ privacy—a fundamental human right—thus representing realized harm caused by an AI system.

Cox Media Group Reveals Its 'Active Listening' Software Spies on User Convos, Clients Include Meta, Google

2024-09-04
Tech Times
Why's our monitor labelling this an incident or hazard?
Cox Media Group admitted that its AI system actively listens to and processes real-time audio from users’ smartphones without consent to deliver tailored ads. This direct misuse of an AI system has led to a breach of privacy—a violation of fundamental rights—qualifying as an AI Incident under the framework.

Your phone is listening to your conversations. Firm working for FB, Google confirms

2024-09-04
WION
Why's our monitor labelling this an incident or hazard?
This event describes the active use of an AI system to capture private voice and behavioural data without consent, constituting a violation of users’ fundamental privacy rights. The AI’s deployment has directly caused harm through unauthorized surveillance and profiling, meeting the criteria for an AI Incident.

Marketing firm admits using your phone to listen to conversations: Report

2024-09-04
Sky News Australia
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and use of an AI system to record private conversations without explicit user understanding or consent, directly implicating violations of privacy and human rights. This harm has occurred rather than being a hypothetical risk, making it an AI Incident.

Is my phone listening to me? New report leads to worry that devices are snooping

2024-09-03
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article involves an AI-related technology concept ('Active Listening' using microphones for targeted ads), which implies AI system involvement in data processing and advertising targeting. However, the article explicitly states there is no indication that this technology is actually used, and major companies deny such practices. No direct or indirect harm has been reported or confirmed. The content mainly provides context, public concerns, and company responses about privacy and AI surveillance fears. Therefore, it does not describe an AI Incident or AI Hazard but rather provides complementary information about societal and governance responses and public discourse on AI privacy issues.

Next time you talk on your phone be careful! Facebook and Google are listening to your conversations

2024-09-04
Economic Times
Why's our monitor labelling this an incident or hazard?
An AI system (the "Active Listening" program) is explicitly mentioned as recording and analyzing conversations in real time to target ads, which involves AI-driven data processing. The use of this AI system has directly led to harm in the form of violations of privacy rights and potentially breaches of applicable laws protecting fundamental rights. The event describes realized harm through unauthorized surveillance and data use, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Are smartphones listening to your conversations? What Google, Facebook, and Amazon have to say - Times of India

2024-09-04
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Active Listening technology) that collects and analyzes voice data for targeted advertising, which is a plausible privacy and rights concern. However, the article does not provide evidence that this AI system has directly or indirectly caused harm yet. The major companies deny involvement, and no concrete incident of harm is reported. Therefore, this situation represents a plausible risk of harm (privacy violation, potential breach of rights) but no confirmed harm has occurred. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Yes, it sounds like a conspiracy theory. But maybe our phones really are listening to us | Arwa Mahdawi

2024-09-04
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Active Listening' software) that processes voice data from device microphones to target advertisements. This use of AI directly leads to violations of privacy rights, which fall under human rights and legal protections. The article indicates that this technology is actively used or at least marketed, implying realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through privacy violations.

New evidence claims Google, Microsoft, Meta, and Amazon could be listening to you on your devices

2024-09-04
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI-related technology (voice data processing and targeted advertising potentially using AI), but there is no confirmed use or misuse of AI systems causing harm or a credible risk of harm described. The major companies deny involvement, and the article focuses on the marketing pitch and privacy concerns rather than an actual incident or a credible hazard. Thus, it fits best as Complementary Information, providing context and updates on societal and governance responses to AI-related privacy issues without reporting a new incident or hazard.

After years of gaslighting users, marketing firm finally acknowledges that phones listen in on conversations

2024-09-04
India Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using microphones and AI to monitor and analyze real-time conversations to infer user intentions and target advertising. This use of AI has directly led to violations of privacy and potentially breaches of legal obligations, which fall under violations of human rights and legal protections. The harm is realized, not just potential, as users' conversations are being monitored without explicit consent, constituting an AI Incident. The involvement of AI in the development and use of this Active Listening technology is central to the harm described.

New evidence claims Google, Microsoft, Meta, and Amazon could be listening to you on your devices

2024-09-05
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of processing voice data from smart devices to target advertising, which implicates potential violations of privacy and human rights. Although the companies deny involvement and no direct harm is reported, the promotion of such a service indicates a credible risk that AI-enabled eavesdropping could lead to violations of rights and harm to individuals' privacy. Since no harm has materialized or been confirmed, but plausible future harm exists, this qualifies as an AI Hazard under the framework.

Leaked slideshow shows that YES, your phones are eavesdropping on you!

2024-09-03
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for active listening and predictive audience targeting, which process voice data captured from smart devices. The AI system's use has directly led to privacy violations and unauthorized data collection, which are breaches of fundamental rights. The harm is realized, as the technology is actively used to eavesdrop and target consumers without their informed consent. The involvement of AI in analyzing and combining voice and behavioral data to influence advertising demonstrates a direct link to harm. Hence, this is an AI Incident rather than a hazard or complementary information.

'Is my phone listening to me?': Fears spark after marketing company claims to access mic data

2024-09-04
LaptopMag
Why's our monitor labelling this an incident or hazard?
The article involves AI-related technology in the form of data analysis and marketing tools, but no direct evidence or confirmation of an AI system actively listening to conversations and causing harm is presented. The claims are largely speculative or promotional and are later retracted or clarified by the company. No direct or indirect harm has occurred, nor is there a clear plausible future harm established beyond general privacy concerns. The article mainly provides background, public reaction, and clarifications, fitting the definition of Complementary Information rather than an Incident or Hazard.

Do social media sites listen to your phone's mic? Facebook partner says Yes!

2024-09-04
Gizmochina
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is explicit: 'Active Listening' uses an AI model to analyze conversations in real-time. The system's use (development and deployment) has directly led to harm by infringing on privacy and potentially violating human rights related to consent and data protection. The article reports actual use and marketing of this technology, not just potential or hypothetical risks. The harms are significant and clearly articulated, fitting the definition of an AI Incident under violations of human rights. Denials by companies do not negate the evidence from leaked documents and marketing materials indicating the system's deployment and impact.

Microsoft, Google, Facebook, Amazon partner admits your phone could listen to everything

2024-09-04
Neowin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system that aggregates and analyzes voice data captured from smart devices' microphones to deliver personalized ads. This use of AI directly leads to violations of privacy and consumer rights, which are breaches of fundamental rights under applicable law. The harm is realized as the AI system's use results in unauthorized surveillance and exploitation of personal conversations for commercial gain. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy breaches caused by the AI system's use.

Report: Facebook And Google Are Listening To Conversations Through Your Phone

2024-09-03
One America News Network
Why's our monitor labelling this an incident or hazard?
The event explicitly describes an AI system ('Active Listening' software) that listens to conversations via smartphone microphones and uses AI to analyze this data for targeted advertising. This use of AI directly leads to violations of privacy rights, a form of human rights violation, which is a recognized harm under the AI Incident definition. The involvement of major companies and the admission by the marketing firm that this practice is legal but potentially invasive confirms that harm is occurring. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Is your phone listening to you for ads? This Facebook partner says yes

2024-09-04
MSPoweruser
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Active Listening technology) that processes voice data to influence advertising targeting. The use of this AI system has directly led to privacy violations and potential breaches of user rights, as it secretly listens to conversations without explicit informed consent. This constitutes a violation of fundamental rights related to privacy and data protection, which falls under the category of violations of human rights or breach of obligations under applicable law. Therefore, this event qualifies as an AI Incident due to the realized harm to user privacy and rights caused by the AI system's use.

Law Enforcement Today

2024-09-06
Law Enforcement Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Active-Listening' software) that listens to users' conversations via microphones on phones, laptops, and home assistants to collect and analyze data for targeted advertising. This use of AI directly leads to violations of privacy rights and potentially breaches legal obligations regarding user consent and data protection. The harm is realized as users' conversations are being surveilled and exploited without clear consent, constituting a violation of human rights and applicable laws. Hence, this event meets the criteria for an AI Incident due to the direct involvement of AI in causing harm through privacy violations.

Facebook ads partner admits eavesdropping on people's phones to serve ads

2024-09-04
Celebitchy
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by Cox Media Group that listens to users' conversations through their smartphones to target ads. This use of AI directly leads to violations of privacy rights, a recognized human rights violation. The harm is realized, not just potential, as the system is actively used to eavesdrop and target consumers. The involvement of AI in capturing and analyzing voice data for advertising purposes fits the definition of an AI system, and the resulting privacy breach constitutes an AI Incident under the framework. The event is not merely a hazard or complementary information but a clear case of harm caused by AI use.

Facebook partner has software that listens to your conversations

2024-09-03
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Active Listening' software using an AI model) that listens to user conversations via internet-connected devices and uses that data for targeted advertising. This constitutes a violation of privacy rights, a form of human rights violation under applicable law. The harm is realized as users' conversations are captured and exploited without clear, informed consent, leading to privacy breaches. The involvement of AI in processing and analyzing the audio data is central to the harm. Hence, this event meets the criteria for an AI Incident due to violations of human rights (privacy) caused by the AI system's use.

Cox Media Group Listens to People via Their Phone Microphone

2024-09-03
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to process voice data captured from phone microphones to target ads, which qualifies as an AI system. The use of this AI system directly leads to violations of privacy rights and potentially breaches legal obligations regarding user consent and data protection. The harm is occurring as the system is actively listening and analyzing conversations without clear user awareness or consent, constituting an AI Incident under the framework's definition of violations of human rights or breach of applicable law protecting fundamental rights.

Shocking leaks reveal our mobile phones and home devices are spying on us

2024-09-03
The Gulf Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system ('Active-Listening' program) that uses microphones on personal devices to collect and analyze voice data for targeted advertising. This constitutes a direct use of AI in a manner that infringes on individuals' privacy rights, a recognized human rights violation. The harm is realized as the program is actively collecting and using personal data without clear consent, leading to privacy breaches. The involvement of major tech companies and the leak of internal presentations confirm the system's deployment and impact. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Cox Media Admits To Monitoring Conversations Using Its Software To Serve Better Ads

2024-09-03
RTTNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system that monitors conversations and online behavior to serve targeted ads, which directly implicates privacy and human rights concerns. The system's use has led to public backlash and actions by major platforms like Google, indicating realized harm rather than just potential risk. The AI system's development and use have directly contributed to violations of privacy rights, fitting the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights.

Cox Media Group's Pitch Deck Confirms Smartphone Surveillance for Targeted Ads - WinBuzzer

2024-09-05
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or algorithmic systems to analyze voice data captured from smart devices for targeted advertising without clear user consent, which constitutes a violation of privacy rights and potentially other legal protections. The direct use of AI in processing personal conversations for commercial gain without consent is a breach of fundamental rights and has caused harm to individuals' privacy. The involvement of AI in this surveillance and data processing, combined with the realized harm and ethical concerns, fits the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights.

Your Phones May Be "Active Listening" to Your Conversations, Claims Facebook Alleged Ad Partner ~ My Mobile India

2024-09-05
My Mobile
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as 'Active Listening' software that listens to real-time conversations and uses AI to pair voice data with behavioral data for targeted ads. This use of AI directly leads to a violation of human rights, specifically privacy rights, as it collects and processes personal voice data without explicit user consent. The article indicates that this practice is ongoing or has occurred, constituting realized harm. Despite denials from major companies, the leaked pitch deck and the description of the technology provide sufficient basis to classify this as an AI Incident under the framework's criteria for violations of rights caused by AI use.

Did a Leaked Memo Confirm Advertisers Are Listening in on Our Conversations?

2024-09-05
DRGNews
Why's our monitor labelling this an incident or hazard?
The leaked memo reveals that an AI system is being used to listen to conversations and analyze them for advertising purposes without clear informed consent, which constitutes a violation of privacy rights and potentially applicable laws protecting fundamental rights. This use of AI has directly led to harm in terms of unauthorized surveillance and exploitation of personal data, fitting the definition of an AI Incident involving violations of human rights and legal obligations.

Media Giant CMG Bragged About Eavesdropping On Phone, Laptop Or 'Smart Home' Microphones

2024-09-04
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as using artificial intelligence to capture and analyze audio data from devices such as phones, laptops, and smart home assistants. This involves AI-based processing of natural language to infer user intent. The use of such AI to eavesdrop on private conversations without consent constitutes a violation of privacy and potentially human rights, as it breaches fundamental rights to privacy and data protection. Therefore, this event involves the use of an AI system leading to a violation of rights, qualifying it as an AI Incident.

Here's the Pitch Deck for 'Active Listening' Ad Targeting

2024-09-03
404 Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system ('Active Listening') that processes audio data from microphones to target ads, which is a sophisticated AI application. The use of such technology without clear user consent or transparency can be reasonably inferred to violate privacy rights, a form of human rights violation under the framework. The harm is realized or ongoing, as the service was actively marketed and used, leading to Google's punitive action. Hence, this is an AI Incident involving violations of rights caused by the AI system's use.

Leak from Advertising Giant Suggests Your Phone Really Is Spying on You - The Jewish Voice

2024-09-03
The Jewish Voice
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to eavesdrop on users' conversations through microphones on personal devices, analyzing voice data combined with online behavior to target ads. This constitutes a violation of privacy rights, a fundamental human right, and the harm is realized as users' private conversations are being surveilled without clear consent. The involvement of AI in analyzing and inferring intent from conversations directly leads to this harm. Hence, this is an AI Incident under the framework's definition of violations of human rights caused by AI system use.

Google, Meta, Microsoft And Amazon Advertising Partner Listens To Phone Microphones - GEARRICE

2024-09-05
Gearrice
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to analyze audio data collected from device microphones to target ads, which directly involves AI systems in processing personal data. The unauthorized or non-transparent use of such data for advertising purposes constitutes a violation of privacy rights, a form of human rights violation under the framework. The harm is realized as the data is actively used to influence advertising targeting, impacting individuals' privacy and autonomy. Although some companies deny involvement, the media company admits to this practice, and Google has severed ties, indicating acknowledgment of harm. Hence, this is an AI Incident due to direct harm caused by AI use in analyzing voice data for targeted advertising without proper consent.

Is Facebook Listening? New Report Uncovers Alarming Use of Smartphone Mics for Ads

2024-09-03
News9live
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Active Listening' software) that uses smartphone microphones to capture conversations and generate targeted ads. This use of AI directly leads to violations of privacy rights and possibly breaches legal obligations regarding user consent and data protection. The harm is realized, as users are being surveilled without clear, informed consent, and the companies involved are facing repercussions. Hence, this qualifies as an AI Incident due to the direct harm to human rights and privacy caused by the AI system's use.

This Company Says It Uses Your Phone's Mic to Serve Ads for Facebook, Google, and More

2024-09-02
It's FOSS News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Active Listening' technology) that processes voice data from users' smartphones to serve targeted ads. This use of AI directly leads to violations of privacy rights and unauthorized data collection, which are breaches of fundamental rights. The article indicates that these practices have been ongoing and have caused harm to users' privacy. The involvement of major companies and their responses further confirm the seriousness of the issue. Hence, this is an AI Incident as the AI system's use has directly led to harm in the form of privacy violations and unauthorized surveillance.

Disturbing revelation indicates that your phone may actually be eavesdropping on your chats - Internewscast Journal

2024-09-02
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system ('Active-Listening' software) that processes voice data to target ads, which is a clear use of AI for real-time data analysis and decision-making. The AI system's use has directly led to violations of privacy and potentially breaches of legal protections related to wiretapping and consent, constituting harm to human rights. The leak and subsequent admissions confirm the AI system's role in causing these harms. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use in eavesdropping and targeted advertising without proper consent.

Shocking Leak Suggests Facebook, Google And Amazon Really ARE Listening Into Your Conversations To Serve You Targeted Ads On Your Phone - Ny Breaking News

2024-09-02
NY Breaking News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Active Listening' software) that listens to and analyzes users' conversations to target ads, directly leading to violations of privacy and human rights. The harm is realized, not just potential, as the software is actively used to collect and exploit voice data. The involvement of major companies like Facebook, Google, and Amazon (even if some deny direct involvement) and the detailed description of the AI system's operation confirm the presence of an AI system causing harm. This fits the definition of an AI Incident because it involves violations of human rights (privacy) caused by the use of AI systems.

Bombshell leak confirms your phone may be listening to you - and telling Google

2024-09-03
Mirror
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Active Listening software) that listens to user conversations via device microphones and uses AI to analyze the data and target ads. The harm is realized, as it violates users' privacy rights and potentially breaches legal frameworks on consent and data protection. The leak confirms the system's active use, not just a potential risk, so it is an AI Incident rather than a hazard. Although partly mediated by advertising platforms, the harm flows directly from the AI system's use, which leads to privacy violations and user harm. Hence, the classification is AI Incident.

Meta responds to allegations smartphone microphones listen to conversations to serve ads

2024-09-05
TweakTown
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the "Active Listening" software) that uses AI to process microphone input to infer user intent for targeted advertising. The alleged use of this system to listen to private conversations without consent constitutes a violation of privacy and potentially human rights. However, the article does not confirm that Meta or other companies have actually deployed this technology or that harm has occurred; it reports on the existence of the technology and denials by Meta. Since no direct or indirect harm is confirmed as having occurred, but the technology's use could plausibly lead to privacy violations and harm, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response to a past incident, so it is not Complementary Information. Therefore, the classification is AI Hazard.

Facebook Allegedly Said to Have Eavesdropped by Using Users' Microphones

2024-09-08
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The report describes an AI-enabled software that captures real-time voice data without user consent and pairs it with behavioral data for advertising. This constitutes misuse of an AI system leading to a violation of users’ privacy rights, fitting the definition of an AI Incident.

Is your phone really listening to you? Here's what we know

2024-09-07
Newsweek
Why's our monitor labelling this an incident or hazard?
The piece does not document any realized harm—CMG denies the system was ever deployed and it has been discontinued—so it is not an AI Incident. However, the detailed method for AI-based voice eavesdropping presents a plausible pathway to privacy and human-rights violations, fitting the definition of an AI Hazard.

CMG Leak Unveils Controversial 'Active Listening' Ad Technology

2024-09-08
Tasnim News Agency
Why's our monitor labelling this an incident or hazard?
The article details an AI-powered “active listening” advertising tool whose deployment would infringe on user privacy by inferring purchase intent from voice data. No actual listening incident occurred—CMG says it was never live—so there is no realized harm yet. However, the described technology represents a credible potential for privacy violation and unwanted surveillance, classifying it as an AI Hazard.

Cox Media Admits To Monitoring Conversations Using Its Software To Serve Better Ads

2024-09-10
RTTNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to monitor conversations and online behavior to serve targeted ads, which directly impacts individuals' privacy and rights. The system's operation has led to public concern and platform-level sanctions, indicating realized harm related to violations of rights. The AI system's development and use have directly contributed to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is not speculative but has materialized through the monitoring practices and subsequent reactions.

Big Brother is pitching you: Marketing company's pitch proves your phone is spying on you

2024-09-11
gorgenewscenter.com
Why's our monitor labelling this an incident or hazard?
The article describes an AI system developed and used by Cox Media Group that listens to users' conversations via device microphones to target ads, which is a clear violation of privacy and potentially human rights. The AI system's use has directly led to harm by spying on users without consent, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The involvement of AI in processing voice data for targeted advertising and the resulting privacy harm justifies this classification.

A Google and Facebook Partner Sells Targeted Advertising by Secretly Listening In on Conversations

2024-09-03
01net
Why's our monitor labelling this an incident or hazard?
Cox Media Group’s system explicitly uses AI to analyze audio captured from consumers’ devices without clear consent and then targets ads based on illicitly obtained intent data. The misuse of an AI system in this way directly breaches privacy and fundamental rights, meeting the definition of an AI Incident.

A Facebook Partner Spies on Users' Conversations via Their Phones' Microphones to Serve Targeted Ads Based on Keywords from Their Discussions

2024-09-03
Developpez.com
Why's our monitor labelling this an incident or hazard?
An AI system (“Active-Listening”) is directly used to surreptitiously record and analyze user speech, causing actual harm by violating users’ rights to privacy. The incident involves development and deployment of AI for unauthorized audio surveillance and targeted advertising. This is a concrete case of AI leading to a breach of human rights.

This Company Claims It Can Listen to Your Conversations to Send You Targeted Ads

2024-09-03
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that listens to conversations via smart devices to collect data for targeted advertising, which directly implicates privacy violations and breaches of fundamental rights. The use of AI to process audio data for advertising without user consent constitutes a violation of human rights under applicable laws protecting privacy. The involvement of AI in the development and use of this system is clear, and the harm (privacy violation) is realized, not just potential. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Do Our Phones Listen to Us? The Answer at Last...

2024-09-05
Tunisie Numerique
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as listening to and analyzing user conversations in real time to target advertisements, which is a direct use of AI. The harm is realized in the form of privacy violations and potential breaches of legal protections related to personal data and consent, which fall under violations of human rights and legal obligations. The involvement of AI is central to the harm, as it enables the scale and precision of surveillance and targeting. The article also mentions reactions from major companies and legal concerns, confirming the significance of the issue. Hence, this is classified as an AI Incident.

Watch What You Say on Your Smartphones: These Well-Known Companies Are Spying on You

2024-09-03
Tom's Guide (France)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that listens to real-time conversations via device microphones and analyzes them to target advertisements, which is a clear use of AI for surveillance and data processing. The system's operation leads to privacy violations and potential breaches of legal consent requirements, which are harms to human rights and legal obligations. The harm is realized, not just potential, as the system is reportedly in use or at least developed with the intent to be used. Hence, this qualifies as an AI Incident under the definitions provided, as the AI system's use directly leads to violations of rights and ethical concerns.

Does Our Phone Really Listen to Us? "There Are Two Big Problems!"

2024-09-04
Sudinfo.be
Why's our monitor labelling this an incident or hazard?
The article does mention AI systems potentially used for real-time audio processing to target ads, which implies AI system involvement. However, it does not report any realized harm or incident caused by such AI use. Instead, it focuses on the plausibility and technical feasibility, with expert opinion suggesting it is unlikely. There is no indication that an AI system has caused or is causing harm, nor that harm is imminent or plausible beyond general speculation. Therefore, this is best classified as Complementary Information, providing context and expert analysis about AI-related advertising practices and privacy concerns, without describing a specific AI Incident or AI Hazard.

"Active Listening": Troubling Revelations About a Technology That Listens to Our Conversations for Targeted Mobile Ads

2024-09-04
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system that processes audio data from users' devices to infer intentions and behaviors for targeted ads. This use without explicit consent constitutes a violation of data protection and privacy rights, which are fundamental rights under applicable law. The harm is realized as users' conversations are surveilled and exploited without proper consent, constituting a breach of rights. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to privacy and data protection.

An Advertising Specialist Claims to Listen to Your Conversations and Says It Works with Google, Amazon, and Facebook

2024-09-03
MacGeneration
Why's our monitor labelling this an incident or hazard?
The event involves an AI system claimed to be used for aggregating conversational and behavioral data for targeted advertising, which could implicate privacy violations (a form of harm to rights). However, the article emphasizes the lack of direct evidence and denials from major companies, making the actual use of such AI technology uncertain. Since no confirmed harm has occurred and the claims remain unproven, the event fits the definition of an AI Hazard, as it plausibly could lead to privacy harms if true, but no incident is confirmed. It is not Complementary Information because the main focus is on the claim and its plausibility, not on updates or responses to a known incident. It is not Unrelated because AI systems and potential misuse are central to the discussion.

"Active Listening": Revelations About a Technology That Listens to Our Conversations for Targeted Mobile Ads

2024-09-05
L'Actualité du Burkina Faso 24h/24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Active Listening') that listens to user conversations via smart devices to analyze data and deliver targeted ads. This use without explicit consent likely breaches privacy laws and fundamental rights, constituting harm. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident. The involvement of AI in processing and analyzing conversations for advertising is clear, and the harm to users' rights is realized, not just potential.

Are Our Smartphones Spying on Us? The Truth Behind Targeted Ads

2024-09-02
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system that intercepts and analyzes personal conversations via smartphone microphones to deliver targeted ads. This use of AI directly infringes on individuals' privacy rights, constituting a violation of human rights under applicable law. The harm is realized, as the AI system's deployment has already occurred and is causing privacy breaches. The event involves the use of AI and its direct role in causing harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

This Is How Your Phones Are Turned into Spy Bugs!

2024-09-06
Sportdog.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions software that continuously records conversations via microphones on mobile devices and smart home assistants, then uses this data for targeted advertising. This involves AI systems capable of audio processing and inference. The harm is a violation of privacy and personal data rights, which are fundamental rights protected by law. The harm is realized, not just potential, as the recordings are used by advertising companies. The users' consent is often given without full understanding, but the AI system's role in enabling this surveillance and targeted advertising is pivotal. Hence, this is an AI Incident involving violation of rights due to AI-enabled surveillance and data exploitation.

Active-Listening: What Is the "Active Listening" Software That Turns Mobile Phones into Bugs, and How to Protect Yourself

2024-09-08
omegalive.com.cy
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that continuously listens to and records private conversations via device microphones, analyzing them with AI to generate targeted ads. This use of AI directly leads to harm by violating privacy and potentially breaching legal and human rights protections. The harm is realized, not hypothetical, as the system is actively used and has caused privacy intrusions. Hence, it meets the criteria for an AI Incident involving violations of human rights and privacy through AI-enabled surveillance and data misuse.

This Is the Software That Turns Mobile Phones into Bugs

2024-09-08
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system that listens to mobile phone conversations and combines this with behavioral data to target ads. This use of AI directly infringes on privacy rights and can be considered a violation of human rights and legal protections related to privacy. The harm is realized as users' private conversations are monitored and exploited without consent, constituting an AI Incident under the framework's definition of violations of human rights or breach of applicable law protecting fundamental rights.

So Phones Really Can Listen to Us After All: The Case of the Active Listening Feature

2024-09-03
Fanpage
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Active Listening) that listen to and analyze private conversations to target advertising, which constitutes a violation of privacy rights and potentially breaches applicable laws protecting fundamental rights. The AI system's development and use have directly led to harm in terms of privacy violations and unauthorized data collection. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm to human rights through the AI system's use.

Did You Know Your Phone Listens to What You Say? Here's What You Need to Know

2024-09-05
Money.it
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Active Listening') that processes voice data from users' devices to generate targeted advertising, which directly implicates privacy violations and potential breaches of fundamental rights. The AI system's development and use have led to realized harm in terms of privacy infringement and unauthorized data collection. Therefore, this qualifies as an AI Incident under the framework, as it involves violations of human rights and privacy due to AI system use.

Shock Revelation: Facebook Spies on Users to Target Ads

2024-09-05
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system that listens to and analyzes conversations in real time for marketing purposes, which qualifies as an AI system under the definitions. The use of this AI system has directly led to violations of privacy and potentially breaches legal obligations regarding user consent and data protection, constituting harm to human rights. Therefore, this event meets the criteria for an AI Incident due to realized harm from the AI system's use.

This Is Confirmation That Our Smartphones Listen to What We Say

2024-09-04
TuttoAndroid
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Active Listening) that uses AI to analyze audio and behavioral data from smartphones to target ads. This use of AI has directly led to violations of privacy rights, a breach of obligations intended to protect fundamental rights, which qualifies as harm under the AI Incident definition. The harm is realized, not just potential, as the system is actively collecting and using personal conversation data. Therefore, this is classified as an AI Incident.

Smartphone Microphones Listen to Private Conversations: It Happens, and It's Not Science Fiction

2024-09-05
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Active Listening) that processes private conversations for advertising purposes, which could lead to violations of privacy rights and potentially other harms. However, the article does not provide evidence that such harms have already materialized; rather, it raises concerns about possible misuse and legal issues. Therefore, this situation represents a plausible risk of harm stemming from the use of AI technology, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the core subject is the potential for harm from the AI system's use, not just a response or update to a past event.

Smartphones Listen to Users: Confirmation from a Meta Partner

2024-09-05
HTML.it
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system that processes voice data collected from smartphones to generate targeted ads. The use of AI to analyze conversations without explicit user consent constitutes a breach of privacy rights, a violation of human rights and legal protections. The harm is realized as users are being monitored and their data exploited without proper consent, which is a direct harm caused by the AI system's use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Watch Out, They're Spying on Us (But We Already Knew That...)!

2024-09-06
ComeDonChisciotte Forum
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Voice Data') that actively listens to users' conversations via microphones on devices, processes this data using AI, and uses it for targeted advertising. This use of AI directly leads to violations of privacy and potentially other human rights, as it occurs without clear consent and may influence political opinions. The harm is realized and ongoing, fitting the definition of an AI Incident due to violations of human rights and harm to communities. Therefore, this event is classified as an AI Incident.

Talking About Something with Friends, Then Ads for It Start Popping Up on Social Networks? A Marketing Agency Claims Our Devices Are Eavesdropping; Facebook and Google Respond

2024-09-04
www.dubrovackidnevnik.net.hr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Active Listening software) that allegedly listens to user conversations and analyzes data for targeted advertising, which could violate privacy rights and human rights. Although major companies deny involvement and no confirmed harm is reported, the potential for privacy violations and unauthorized surveillance is credible and plausible. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to violations of human rights and privacy harm, but there is no confirmed direct or indirect harm yet.

Have You Ever Seen Facebook Ads for Products You Talked About Right After a Phone Call?

2024-09-04
Zimo.co
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Active Listening' software) that allegedly listens to users' conversations to collect data for targeted advertising without explicit consent, which constitutes a violation of privacy and potentially breaches legal protections of fundamental rights. The involvement of AI in analyzing real-time audio data and linking it to user profiles for advertising purposes fits the definition of an AI system causing harm. The harm is realized as users' privacy rights are violated, making this an AI Incident. Although the implicated companies deny involvement, the marketing agency's claims and the described AI system's role in surveillance and data processing justify classification as an AI Incident due to direct or indirect harm to human rights.

Do Apps Eavesdrop on Us and Serve Ads Based on Our Conversations? A Marketing Agency Claims It Does This for Major Clients

2024-09-05
Dnevni list Danas
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Active Listening') that listens to and analyzes user conversations to deliver targeted ads, which directly implicates privacy violations and breaches of user rights. The harm is realized as users' conversations are surveilled without clear consent, leading to violations of fundamental rights. The involvement of AI in processing and analyzing real-time intent data for advertising purposes is explicit. Despite denials from major companies, the marketing agency's claims and the described software's function meet the criteria for an AI Incident due to direct harm to human rights (privacy).

Talked About Something and Ads for It Started Popping Up? Here's Why

2024-09-05
Akta.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-based system used for active listening and data analysis to target users with ads based on their conversations, which is a direct use of AI leading to harm in the form of privacy violations and unauthorized data collection. The harm is realized, not just potential, as users experience targeted ads linked to their private conversations. The involvement of major companies and their responses further confirm the AI system's role. Hence, this is an AI Incident involving violations of human rights and privacy.

Talked About Something and Ads for It Started Popping Up? Here's Why

2024-09-05
Zenit.ba
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Active Listening software) that listens to and analyzes user conversations to deliver targeted ads, which directly implicates privacy rights and data protection laws. The use of AI in this manner without clear user consent constitutes a violation of human rights and legal obligations protecting privacy. The harm is realized as users' privacy is infringed upon, and their data is exploited for commercial gain. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through unauthorized surveillance and data misuse.

Talked About Something and Ads Started Popping Up? Why?

2024-09-05
N1
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Active Listening software) that is claimed to be used for real-time audio surveillance and data analysis for targeted advertising. Such use implicates potential violations of privacy and fundamental rights, which fall under the category of harm to human rights. However, the article presents these claims as allegations without confirmed evidence of actual harm or misuse. The companies involved have denied participation, and the article focuses on the controversy and responses rather than confirmed incidents. Therefore, this situation represents a plausible risk of harm due to the AI system's use but does not document a realized harm incident. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Talked About Something and Ads Started Popping Up? Why?

2024-09-05
RTCG - Radio Televizija Crne Gore (national public broadcaster)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (active listening software) allegedly used to eavesdrop on users, which could lead to violations of privacy and human rights. Although the companies deny involvement and no confirmed harm is reported, the plausible risk of privacy violations and unauthorized data collection constitutes an AI Hazard. There is no confirmed direct or indirect harm yet, so it is not an AI Incident. The article is not merely complementary information because it focuses on the potential misuse of AI for surveillance and targeted advertising, not just updates or responses. Therefore, the classification is AI Hazard.

Independent Media Expose Scandalous Marketing Practice of Eavesdropping on Mobile Phones

2024-09-03
bug.hr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to analyze voice data from devices to target advertising, which is a clear AI system involvement. The use of this AI system has directly led to potential violations of privacy and wiretapping laws, which are breaches of fundamental rights. The controversy and corporate responses indicate that the harm is realized, not just potential. Hence, this is an AI Incident under the framework, as it involves AI use causing violations of human rights and legal obligations.

Independent Media Expose Scandalous Marketing Practice of Eavesdropping on Mobile Phones

2024-09-03
Monitor.hr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or AI-like technology to analyze voice data for targeted advertising, which can be reasonably inferred as involving AI systems for speech recognition and data analysis. The practice is alleged to be illegal and involves privacy violations, which constitute a breach of fundamental rights. Since the event describes ongoing or past use of this technology leading to violations of rights (privacy and consent), it qualifies as an AI Incident. The harm is realized (privacy violation and illegal data collection), not just potential.

Talking About Something, Then Seeing an Ad for It on Your Phone? A Scandalous Marketing Practice Finally Exposed

2024-09-04
Haber.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to analyze voice data from devices, which constitutes an AI system involved in the use phase. The practice has directly led to potential violations of privacy and wiretapping laws, which are legal protections related to fundamental rights. The lack of clear user consent and the covert nature of data collection imply a breach of rights. Although the article does not confirm legal rulings, the described practices have already caused public backlash and corporate responses, indicating realized harm. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to privacy and consent.

"Shocking": Our Mobile Phones Are Eavesdropping on Us

2024-09-03
BIGportal.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by CMG to analyze voice data from consumer devices, which is an AI system involved in the use phase. The practice allegedly involves recording and analyzing conversations without explicit consent, violating wiretapping laws and privacy rights, which are human rights and legal protections. This constitutes a breach of obligations under applicable law intended to protect fundamental rights. The controversy and actions by Google and Meta to distance themselves from CMG further indicate the seriousness of the harm. Therefore, this event qualifies as an AI Incident due to realized violations of rights caused directly or indirectly by the AI system's use.

Talking About Something, Then Seeing an Ad for It on Your Phone? A Scandalous Marketing Practice Finally Exposed

2024-09-04
Face.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the marketing company to analyze voice data from consumers' devices, which is an AI system involvement. The use of this AI system has directly led to potential violations of privacy laws and rights, which are considered breaches of applicable law protecting fundamental rights. The controversy and reactions, including removal from advertising programs, confirm the harm has materialized or is ongoing. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in causing legal and rights violations through unauthorized data collection and analysis.

Windows 11: The Recall Feature Cannot Be Uninstalled

2024-09-03
01net
Why's our monitor labelling this an incident or hazard?
The Recall feature involves AI-like activity monitoring capabilities that could impact user privacy, which relates to potential violations of rights. However, the article does not report any realized harm or incidents caused by the feature's use or malfunction. Instead, it focuses on the feature's development, deployment delays, and regulatory compliance considerations, which indicate plausible future risks but no current incident. Therefore, this event fits the definition of an AI Hazard, as the feature's use could plausibly lead to privacy harms, but no direct or indirect harm has yet occurred.

If You Use This App, Your Smartphone Is Listening to Your Conversations

2024-09-03
Presse-citron
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system that listens to and analyzes audio data from users' devices to target advertisements, which is a clear AI system involvement. The use of this system has directly led to privacy violations and potential breaches of legal frameworks protecting personal data and consent, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The harm is realized as users' intimate conversations are being recorded and analyzed without clear consent, impacting their privacy rights. The ethical and legal concerns raised confirm the direct harm caused by the AI system's use.

In San Francisco, Waymo's Driverless Taxis Go Haywire

2024-09-05
Presse-citron
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Waymo's autonomous taxis), whose malfunctioning parking behavior triggered a chain reaction of horn honking and traffic jams. This caused significant noise pollution, which is harm to the community (harm category d). The AI system's safety feature misinterpreted other vehicles, leading to the incident. Waymo's intervention and apology confirm the harm was materialized and recognized. Hence, this is an AI Incident rather than a hazard or complementary information.

The AI-Powered Robot War Is On: Elon Musk and Jeff Bezos Want to Win It

2024-09-03
Presse-citron
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and investment in AI-enhanced robots by prominent entrepreneurs, describing the technology and potential applications without reporting any actual harm, malfunction, or misuse. There is no indication that these AI systems have caused or are causing injury, rights violations, disruption, or other harms. While the technology could plausibly lead to future risks, the article does not emphasize or warn about such hazards explicitly. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about AI developments and investments, fitting the definition of Complementary Information.