AI Bots and Deepfakes Deceive Dating App Users


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Surveys in Costa Rica, Colombia, and Guatemala reveal that 62–76% of dating app users suspect they have interacted with AI chatbots, fake profiles, or deepfakes posing as matches. These malicious AI-driven profiles have led to phishing attempts, romance scams, and emotional harm, prompting demands for identity-verification measures to ensure genuine human interaction.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (conversational bots, deepfakes) used in dating apps that have directly led to harms including emotional manipulation, deception, and phishing attacks on users. These constitute harm to people and communities, fulfilling the criteria for an AI Incident. The article describes realized harms, not just potential risks, and the AI's role is pivotal in causing these harms. Therefore, this is classified as an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Safety; Human wellbeing; Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing; Digital security; Consumer services; Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


Artificial intelligence challenges dating: 71% of Peruvians suspect they have talked to a bot

2025-02-11
infobae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (conversational bots, deepfakes) used in dating apps that have directly led to harms including emotional manipulation, deception, and phishing attacks on users. These constitute harm to people and communities, fulfilling the criteria for an AI Incident. The article describes realized harms, not just potential risks, and the AI's role is pivotal in causing these harms. Therefore, this is classified as an AI Incident.

64% of Colombians who use dating apps believe they are talking to a robot or a fake profile

2025-02-13
infobae
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (bots and AI-generated profiles) used in dating apps, which have led to user distrust and emotional harm, a form of harm to communities and individuals. However, the harm described is general and perceived rather than a specific incident of harm caused by AI. The article mainly presents survey results and discusses potential and ongoing social impacts, along with responses like verification tools. Therefore, it does not meet the threshold for an AI Incident, nor is it solely about potential future harm (AI Hazard). It is best classified as Complementary Information because it provides context, user sentiment, and developments related to AI's societal impact in dating apps without reporting a discrete harmful event.

Dating apps: fear of chatting with artificial intelligence bots is growing

2025-02-13
La Nacion
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots in dating apps that have directly led to harms such as emotional distress, anxiety, insecurity, phishing attempts, and deception of users. These harms fall under injury to health (mental/emotional), harm to communities (social disruption), and potential violations of data privacy rights. The AI systems are clearly involved in the use phase, causing these harms through their interaction with users. The article also reports survey data confirming that users have experienced these harms, not just potential risks. Hence, this is an AI Incident rather than a hazard or complementary information.

Love in the age of AI

2025-02-14
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of AI-generated profiles and bots used in dating apps. It describes harms such as user distrust, increased addiction, and heightened opportunities for scams, which are realized harms affecting communities and individuals. The AI systems' use has directly led to these harms, fulfilling the criteria for an AI Incident. The article also discusses responses like human verification technologies, but the primary focus is on the existing harms caused by AI in dating apps.

Valentine's Day: one in four Peruvians has flirted with a bot

2025-02-12
Gestión
Why's our monitor labelling this an incident or hazard?
The presence of AI systems (bots and possibly deepfake profiles) in dating apps is explicitly mentioned. The use of these AI systems has led to realized harms: emotional harm from deceptive interactions, phishing attempts risking personal data, and undermining trust in human connections. These harms fall under injury or harm to persons and harm to communities. The article describes these harms as occurring, not just potential. Hence, this is an AI Incident due to the direct and indirect harm caused by AI systems in the dating app context.

One in three dating app users admits to having flirted with a bot or AI

2025-02-13
Portafolio.co
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (bots, deepfakes) used in dating apps, which can cause harm such as deception and emotional distress. However, it does not document a specific event where such harm has been realized or a concrete incident caused by AI malfunction or misuse. The focus is on survey results about user experiences and concerns, and calls for improved verification measures. This fits the definition of Complementary Information, as it provides supporting data and context about AI's societal impact without reporting a new AI Incident or Hazard.

1 in 4 has flirted with a bot on dating apps

2025-02-14
Hoy Digital
Why's our monitor labelling this an incident or hazard?
The presence of AI systems (chatbots, bots, deepfakes) in dating apps is clearly established, and the survey indicates users have experienced interactions with these AI systems, including phishing attempts. While these interactions can cause harm (e.g., deception, privacy risks), the article does not document a specific AI Incident with realized harm or a particular AI Hazard event. Instead, it reports survey findings and discusses ongoing concerns and technological solutions, which fits the definition of Complementary Information. It enhances understanding of AI's societal impact in this domain without describing a new incident or hazard.

AI is playing Dr. Love, but it also brings fraud

2025-02-13
Expansión
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously to create fake profiles, generate persuasive messages, and produce deepfakes, which have led to a 72% increase in romantic scams and a high percentage of users being victimized. This constitutes direct harm to persons through fraud and deception, fulfilling the criteria for an AI Incident. The AI system's development and use have directly led to these harms, not just a plausible future risk. Hence, the event is classified as an AI Incident.

Flirted with a bot? Dating app users are suspicious

2025-02-14
www.vanguardia.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems like chatbots and deepfakes being used in dating apps, leading to users being deceived or emotionally harmed. The harms include emotional distress, deception, and phishing attempts, which fall under harm to persons and violation of rights. The AI systems' use is directly linked to these harms, fulfilling the criteria for an AI Incident. Although the article also discusses potential solutions and verification technologies, the main focus is on the realized harms caused by AI misuse in dating apps, not just potential risks or responses.

Dating apps: today users fear they are making a 'match' with a bot

2025-02-13
Colombia.com
Why's our monitor labelling this an incident or hazard?
The article centers on user perceptions and concerns about AI and bots in dating apps, supported by survey data, and discusses technological responses to these concerns. There is no description of realized harm or a specific event causing harm due to AI systems. The focus is on the broader ecosystem and responses, making it Complementary Information rather than an AI Incident or AI Hazard.

Dating app users in Guatemala distrust online profiles | Diario de Centro América

2025-02-14
DCA Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of bots and AI-generated profiles on dating apps, which have directly led to harms such as digital fraud (phishing), deception, and erosion of trust among users. These harms affect users' emotional well-being and potentially their privacy and security, constituting harm to persons and communities. Since the AI systems' use has already caused these harms, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not focus on responses or updates but on the current impact and user experiences with AI-generated profiles causing harm.

Love in the times of AI: 76% of dating app users suspect or have discovered they interacted with bots

2025-02-11
Revista Summa
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (chatbots, AI-generated profiles, deepfakes) used in dating apps that have directly caused harm by deceiving users, leading to phishing attempts and emotional or informational harm. The harms include violation of users' rights and harm to communities through deception and fraud. The presence of AI systems is clear, and the harms are realized, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information.