
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Bumble launched an AI-powered tool, Deception Detector, to identify and block fake profiles, scams, and spam on its dating platform. Within two months, user reports of such deceptive accounts dropped by 45%, significantly reducing risks of financial and emotional harm for users.[AI generated]
Why is our monitor labelling this an incident or hazard?
The Bumble AI system is explicitly described as a machine learning-based model used to detect and block fake profiles, which are linked to scams and user anxiety. Its deployment has led to a significant reduction in reported fake profiles, indicating an active role in preventing harm. The article also references a real scam incident involving a Bumble user, illustrating the kind of harm the AI system aims to prevent. Because the AI system's use is directly connected to preventing and addressing harm to users (financial loss, emotional distress), this event meets the criteria for an AI Incident.[AI generated]