Controversy Over Neon App Selling User Call Recordings for AI Training

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Neon app, rapidly rising in popularity in the US, pays users to record their phone calls and then sells these recordings to AI companies for model training. This practice has sparked major privacy, legal, and security concerns, including risks of identity theft and voice fraud arising from misuse of personal voice data by AI systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The application 'Neon' explicitly uses AI systems by selling recorded voice data to AI companies for training models, which directly involves AI development and use. The event reports realized harms including privacy violations, legal concerns about consent, and risks of identity theft and fraud stemming from AI misuse of voice data. These harms fall under violations of human rights and harm to communities. The use of the recordings in AI development and deployment is central to the incident. Hence, this is classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability

Industries
Consumer services

Affected stakeholders
Consumers, General public

Harm types
Human or fundamental rights, Economic/Property

Severity
AI incident

Business function:
Other

AI system task:
Other


Articles about this incident or hazard

A controversial app: it pays users to record their calls and sell them to AI companies

2025-09-25
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The application 'Neon' explicitly uses AI systems by selling recorded voice data to AI companies for training models, which directly involves AI development and use. The event reports realized harms including privacy violations, legal concerns about consent, and risks of identity theft and fraud stemming from AI misuse of voice data. These harms fall under violations of human rights and harm to communities. The involvement of AI in the development and use of the recordings is central to the incident. Hence, this is classified as an AI Incident.
A rising app sparks controversy: Neon gives you $30 for recording your calls - Youm7

2025-09-25
Youm7
Why's our monitor labelling this an incident or hazard?
The app explicitly uses AI systems to process and sell recorded voice calls for AI training purposes, which involves the development and use of AI systems. The harms include violations of privacy rights and potential security risks such as voice fraud and identity theft, which are realized or ongoing harms. The article details how the app's operation leads directly to these harms, fulfilling the criteria for an AI Incident. The legal and ethical concerns, as well as the potential for misuse of voice data, further support this classification. Hence, the event is not merely a potential hazard or complementary information but a concrete AI Incident.
An app that pays users dollars for recording their calls? Experts warn

2025-09-25
Masrawy.com
Why's our monitor labelling this an incident or hazard?
The app 'Neon' uses AI systems to develop and improve AI models by selling recorded voice data. The use of these recordings can plausibly lead to harms including identity theft, fraud, and creation of fake voice calls, which are significant harms to individuals and communities. The article highlights expert warnings about these risks, indicating a credible potential for harm. Since no actual harm is reported as having occurred yet, but the risk is credible and directly linked to the AI system's use, this event qualifies as an AI Hazard rather than an AI Incident.
The "Neon" app sparks wide controversy: cash rewards for recording calls

2025-09-25
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The application 'Neon' uses AI systems to process recorded voice calls for training and testing, which involves AI system use. The event highlights serious concerns about privacy violations and potential misuse of data for fraud, which could plausibly lead to harms such as violations of privacy rights and fraud-related harms. Since no actual harm is reported but the risks are credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response update, so it is not Complementary Information or Unrelated.
Controversy in America over the "Neon" app, which pays users to record their calls - Al-Watan

2025-09-26
Al-Watan
Why's our monitor labelling this an incident or hazard?
The application 'Neon' uses AI systems by selling recorded phone calls to AI companies for model training, which directly implicates AI system use. The event reports realized harm in terms of privacy violations and ethical concerns, as users' conversations are recorded and sold, potentially without full consent of all parties involved. This constitutes a violation of rights and privacy, fitting the definition of an AI Incident. The rapid rise in app popularity and the business model based on selling user data to AI companies further supports the classification as an incident rather than a mere hazard or complementary information.
Warnings over an app that misleads its users into believing they can earn money: it leaks private recordings and sells them to various companies | Al-Masry Al-Youm

2025-09-27
Al-Masry Al-Youm
Why's our monitor labelling this an incident or hazard?
The application 'Neon' uses AI-related technology by collecting voice data to train AI models. The unauthorized recording and selling of private calls constitute a violation of users' rights and privacy, which is a breach of applicable laws protecting fundamental rights. Since the harm (privacy violation) is occurring due to the AI system's use, this qualifies as an AI Incident under the framework's definition of violations of human rights or legal obligations.
Akhbarak Net | Warnings over an app that misleads its users into believing they can earn money: it leaks private recordings and sells them to various companies | Al-Masry Al-Youm

2025-09-27
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The application uses AI systems for training voice models, and the users' private phone call recordings are sold to third parties without clear informed consent, leading to violations of privacy and potentially legal rights. This constitutes a breach of obligations intended to protect fundamental rights, fitting the definition of an AI Incident. The harm is realized as users' private data is exploited, not just a potential risk.
"Neon"... an app that markets itself as a money-making tool

2025-09-28
Al-Araby Al-Jadeed
Why's our monitor labelling this an incident or hazard?
The application explicitly collects and sells user call recordings to AI companies for training AI models, which involves the use of AI systems. The broad license terms allow extensive use and distribution of personal data, leading to violations of privacy and potentially other legal rights. The harm is realized as users' private data is exploited, constituting a breach of fundamental rights. Hence, this is an AI Incident as the AI system's use has directly led to harm (privacy violations and rights breaches).
Neon, the famous app that pays in dollars to record your calls and sell them to AI companies

2025-09-25
infobae
Why's our monitor labelling this an incident or hazard?
The Neon app uses AI-related technology by collecting voice data to train AI models, which fits the definition of an AI system. The app's operation involves the use of AI systems for voice model training, and the extensive data collection and resale with broad licensing terms create a credible risk of privacy violations and misuse of personal data. Although no specific incident of harm is reported, the plausible future harms related to privacy breaches and unauthorized use of personal data justify classification as an AI Hazard rather than an AI Incident. The article focuses on the potential risks and ethical concerns rather than reporting a realized harm event, so it does not qualify as an AI Incident or Complementary Information.
Neon, the app that pays you to record your calls, is taken offline after security flaws emerge

2025-09-26
infobae
Why's our monitor labelling this an incident or hazard?
Neon is an AI-related system that collects and sells user call data to train AI models, involving AI system use. The security flaw allowed unauthorized access to sensitive personal data, directly harming users' privacy and violating data protection rights. This is a clear AI Incident because the AI system's malfunction (security failure) led to realized harm (exposure of sensitive data). The article details the incident, the harm caused, and the company's response, confirming the direct link between the AI system and the harm.
The iPhone app that pays you to record calls

2025-09-25
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system being trained on user call recordings collected through an app. The app's use involves development and use of AI systems relying on personal data. While the app claims to anonymize data and obtain user consent, the large-scale collection and sale of sensitive voice data for AI training poses a credible risk of privacy violations and misuse. No direct harm or incident is described, only the potential for harm due to the nature of the data and AI use. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
One of the most-downloaded iPhone apps pays you to record calls to train AI models. It is a security disaster

2025-09-26
Xataka
Why's our monitor labelling this an incident or hazard?
The app uses AI systems to record and process phone calls for training AI models, which qualifies as AI system involvement. The security failure exposed sensitive personal data, directly leading to harm in terms of privacy violations and potential identity theft, which are breaches of fundamental rights. The incident involves the AI system's use and malfunction, causing direct harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
What is Neon? The controversial app that pays you (quite well) to sell your calls to train AI models

2025-09-26
El Confidencial
Why's our monitor labelling this an incident or hazard?
Neon involves the use of AI systems trained on voice data collected from users' phone calls. The app's operation includes recording calls and selling anonymized voice data to third parties to improve AI models. Experts warn that voice is a sensitive identifier that could be used for fraudulent impersonation, which is a violation of personal security and privacy rights. The broad licensing terms and lack of transparency increase the risk of misuse. While no direct harm is reported, the potential for significant harm to individuals' rights and security is credible and plausible, making this an AI Hazard rather than an AI Incident at this stage.
The most-downloaded iPhone app of the moment is sheer madness: it pays you (quite well) to record your calls

2025-09-25
Hipertextual
Why's our monitor labelling this an incident or hazard?
The app Neon uses AI-related data (voice recordings) to train AI agents, which constitutes an AI system involvement. The app's use directly leads to harm by violating users' privacy and potentially breaching data protection laws, which are human rights violations. The harm is realized as users' calls are recorded and sold without full transparency or control, and the app continues recording even after uninstallation unless the account is closed. This fits the definition of an AI Incident because the AI system's use has directly led to violations of fundamental rights and harm to individuals' privacy.
Neon, the famous app that pays in dollars to record your calls and sell them to AI companies

2025-09-25
eju.tv
Why's our monitor labelling this an incident or hazard?
The Neon app uses AI systems that rely on large datasets of voice recordings to train AI models, which fits the definition of an AI system. The event involves the use of AI systems in the development and training phase, with the app collecting data from users. While there is a clear risk of harm related to privacy violations and potential misuse of personal data, the article does not document any actual harm or incident resulting from this use. The concerns are about plausible future harms due to the broad and unrestricted licensing of personal data and lack of transparency about data buyers and usage. Therefore, this event is best classified as an AI Hazard, as it plausibly could lead to violations of privacy rights and other harms but has not yet directly caused them.
Controversy over an app that pays to listen in on conversations to train AI

2025-09-28
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The application Neon Mobile uses AI-related data collection (recorded conversations) for training AI, which qualifies as an AI system involvement. The security flaw that exposed sensitive user data is a malfunction leading to harm (privacy breach), which fits the definition of an AI Incident under violations of rights and harm to individuals. Therefore, this event is classified as an AI Incident.
Controversy over an app that pays to listen in on conversations to train artificial intelligence

2025-09-28
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The app Neon Mobile is an AI system that collects voice data to train AI algorithms. The critical security flaw allowed unauthorized access to personal data, constituting a violation of privacy and potentially human rights. The harm has materialized as users' private conversations were exposed, and the risk of misuse (voice cloning, deepfakes) is present. The company's response to disconnect services and audit does not negate the incident. Hence, this is an AI Incident involving harm to rights and privacy due to the AI system's malfunction and use.
Controversial call-recording app disappears after exposing its users' phone numbers and personal information

2025-09-28
Semana.com
Why's our monitor labelling this an incident or hazard?
The application Neon is an AI system as it collects and uses call recordings and transcriptions to train AI models. The security flaw in the app's backend servers allowed unauthorized access to personal data, including phone numbers and call content, which is a violation of privacy and potentially human rights. This exposure of sensitive user data is a direct harm caused by the AI system's malfunction. The incident involves the use and malfunction of the AI system leading to realized harm, meeting the criteria for an AI Incident under the framework.
App that promised daily income from calls ended in scandal over leaked data

2025-09-28
Semana.com
Why's our monitor labelling this an incident or hazard?
The application used AI systems trained on user call data, which were supposed to be anonymized but were exposed due to a security flaw. The unauthorized access to sensitive personal data and conversations is a clear harm to users' rights and privacy. The AI system's use and the company's failure to secure the data directly caused this harm. Hence, it meets the criteria for an AI Incident involving violations of human rights and legal protections.