Bumble Deploys AI 'Deception Detector' to Reduce Fake Profiles and Scams


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Bumble launched an AI-powered tool, Deception Detector, to identify and block fake profiles, scams, and spam on its dating platform. Within two months, user reports of such deceptive accounts dropped by 45%, significantly reducing risks of financial and emotional harm for users.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Bumble AI system is explicitly described as a machine learning-based model used to detect and block fake profiles, which are linked to scams and user anxiety. The system's deployment has led to a significant reduction in reported fake profiles, indicating its active role in preventing harm. The article also references a real scam incident involving a Bumble user, illustrating the harm that the AI system aims to prevent. Since the AI system's use is directly connected to preventing and addressing harm to users (financial loss, emotional distress), this event meets the criteria for an AI Incident.[AI generated]
Industries
Consumer services, Digital security

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Event/anomaly detection


Articles about this incident or hazard


Bumble introduces new AI powered 'Deception Detector' to detect fake profiles

2024-02-07
The Indian Express

Bumble Adds Generative AI Scam and Catfish Detective to Dating App

2024-02-09
Voicebot.ai
Why's our monitor labelling this an incident or hazard?
The Bumble AI system is explicitly mentioned as being used to detect and block fake profiles and scams, which are known harms in online dating. However, the AI system's deployment has led to a reduction in these harms rather than causing them. There is no harm caused by, or plausible future harm indicated from, the AI system itself. The article also references regulatory actions against deepfake misuse, which is related context rather than a direct harm caused by Bumble's AI. Therefore, the event is not an AI Incident or AI Hazard but Complementary Information about AI safety features and governance responses in the AI ecosystem.

Bumble launch Deception Detector to combat scam, spam & fake profiles

2024-02-08
SHEmazing!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (Deception Detector™) to combat scams, spam, and fake profiles, which are harmful to users. The AI system is actively used to reduce these harms, with reported success in blocking harmful accounts. There is no indication that the AI system caused harm; rather, it mitigates existing harm. The event focuses on the introduction and impact of this AI tool as a positive development and a response to prior harms. This fits the definition of Complementary Information, as it details a governance and technical response to AI-related harms rather than describing a new AI Incident or AI Hazard.

Bumble Inc. launches Deception Detector, an AI-powered shield against spam, scam and fake profiles

2024-02-07
Gadgets Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Deception Detector) to identify and block harmful fake profiles, which is a direct application of AI to reduce harm. There is no indication that the AI system caused any injury, violation of rights, or other harms; rather, it is used to prevent such harms. The event focuses on the deployment and positive impact of the AI system, which aligns with Complementary Information as it details a governance and technical response to AI-related risks in online dating. It does not report an incident or hazard but rather an innovation aimed at harm reduction.

Bumble launches AI tool Deception Detector to counter fake profiles

2024-02-07
SecurityBrief Asia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a machine learning model) to detect and counteract fake profiles and scams, forms of deception that harm users by undermining trust and potentially exposing them to fraud. The system's deployment has directly reduced these harms. This qualifies as an AI Incident because the AI system's use has directly led to a reduction of harm related to fake profiles and scams, which affect users' safety and community trust on the platform.

Bumble Unveils A New "Deception Detector" With AI Capability To Identify Phony Profiles - AI Next

2024-02-08
Latest News on AI, Healthcare & Energy updates in India
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the "Deception Detector" uses machine learning to detect fake profiles. The use of this AI system has directly led to harm reduction by blocking scam and fake profiles, which are associated with financial and emotional harm to users (e.g., a user losing money to a scam). Therefore, the AI system's use has directly contributed to preventing harm to individuals, qualifying this as an AI Incident under the framework because harm to persons (financial loss, emotional harm) has occurred and the AI system played a pivotal role in mitigating it.

Goodbye to fake profiles? Bumble uses AI to filter "deception" in its dating app

2024-02-12
BioBioChile
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to detect and block harmful fake profiles and scams, which is a clear AI system involvement. However, the AI system is used to reduce harm rather than causing it. There is no report of harm caused by the AI system or plausible future harm from its use. Instead, the article highlights the positive impact and ongoing AI initiatives by Bumble, which fits the definition of Complementary Information as it provides context and updates on AI deployment and responses to AI-related risks in the dating app ecosystem.

Romance scammer detector: the potential of AI in dating apps to fight fake profiles

2024-02-14
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as being used to detect and block harmful fake profiles and scams, which are known to cause harm to users through fraud, privacy violations, and emotional distress. The system's deployment has already resulted in a measurable reduction in harmful incidents (45% fewer reports of fake accounts). Therefore, the AI system's use has directly led to harm mitigation, qualifying this event as an AI Incident. The article does not merely discuss potential risks or future harms, nor is it solely about governance or research updates, so it is not a hazard or complementary information. The presence and role of AI are clear and central to the harm addressed.

Bumble presents Deception Detector: an AI-based barrier against spam, scams, and fake profiles

2024-02-14
ebizLatam.com
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and use of an AI system that directly reduces harm by identifying and blocking fake profiles, spam, and scams on a dating platform. These harms relate to user safety, protection from fraud, and maintaining authentic social connections, which fall under harm to persons and communities. The AI system's role is pivotal in detecting and preventing these harms, making this an AI Incident. There is no indication that the event is merely potential harm or a governance update; rather, it reports realized harm reduction through AI use.