
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Babylon Health's AI triage chatbot has been criticized by Dr. David Watkins for providing unsafe medical advice, potentially endangering patient health. In response, Babylon publicly attacked Watkins and posted his data online, raising additional concerns about privacy and the company's handling of safety issues.[AI generated]
Why's our monitor labelling this an incident or hazard?
The Babylon chatbot is an AI system used for medical triage. The criticism that it provides unsafe advice indicates direct or indirect harm to patient health (harm category a). The public posting of a doctor's data raises privacy and confidentiality concerns, implicating a violation of rights (harm category c). Because these harms have materialized rather than remaining merely potential, the event qualifies as an AI Incident. It also highlights broader concerns about data handling and patient safety, which are central to the incident's impact.[AI generated]