Babylon Health AI Chatbot Faces Criticism Over Unsafe Medical Advice and Data Privacy Breach



Babylon Health's AI triage chatbot has been criticized by Dr. David Watkins for providing unsafe medical advice, potentially endangering patient health. In response, Babylon publicly attacked Watkins and posted his data online, raising additional concerns about privacy and the company's handling of safety issues.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Babylon chatbot is an AI system used for medical triage. The criticism that it provides unsafe advice indicates direct or indirect harm to patient health (harm category a). The public posting of a doctor's data raises privacy and confidentiality issues, implicating violations of rights (harm category c). These harms have materialized rather than remaining merely potential, making this an AI Incident. The event also highlights concerns about data handling and patient safety, which are central to the incident's impact.[AI generated]
AI principles
Safety, Privacy & data governance, Accountability, Respect of human rights

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers, Other

Harm types
Physical (injury), Physical (death), Human or fundamental rights

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard


Virtual care firm Babylon fired back at a doctor critiquing its chatbot by publicly posting his data on Twitter

2020-02-27
Business Insider

AI chatbot startup Babylon Health attacks physician for '2,400 Twitter troll tests'

2020-02-26
Hospital Review
Why's our monitor labelling this an incident or hazard?
The event centers on an AI system used in healthcare that has allegedly failed to triage serious health conditions properly, which would constitute harm to the health of individuals (harm category a). The physician's documented cases and concerns indicate that the AI system's malfunction or erroneous outputs have led to potential or actual harm. Although Babylon Health disputes the scale of the errors, the presence of 'genuine errors' and ongoing safety concerns justify classification as an AI Incident. The event is not merely general AI news or company responses; it involves realized or ongoing harm linked to the AI system's use.

Babylon Health, physician tussle over triage chatbot's safety

2020-02-27
MedCity News
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (the triage chatbot) whose use has raised safety concerns. The oncologist's examples show the chatbot giving potentially dangerous advice, which could plausibly lead to harm if patients act on it. However, the article provides no evidence that actual harm has occurred or been reported. The company's response and partial corrections indicate ongoing development and risk management but do not confirm harm. The event therefore fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information, because the main focus is the safety concerns and potential risks rather than responses or ecosystem updates, and it is not Unrelated, because the AI system and its potential impact are central to the event.

AI chatbot maker Babylon Health attacks clinician in PR stunt after he goes public with safety concerns (Natasha Lomas/TechCrunch)

2020-02-26
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Babylon Health's symptom triage chatbot) and safety concerns raised by a clinician. The company's actions in pulling the clinician's app usage data and publicly attacking him relate to the use and handling of the AI system and its data. However, there is no direct evidence of realized harm caused by the AI system's malfunction or outputs; the concerns describe potential safety issues rather than a concrete incident of harm. This event is therefore best classified as Complementary Information, as it provides context on safety concerns and company responses rather than reporting a specific AI Incident or Hazard.

Babylon Health lashes out at doctor who raised AI chatbot safety concerns

2020-02-26
AI News
Why's our monitor labelling this an incident or hazard?
Babylon Health's AI chatbot is explicitly described as an AI system used for medical triage. The article details multiple instances where the chatbot gave unsafe advice, such as advising a patient with chest pains to delay emergency care, advice that could lead to serious injury or death. This constitutes direct harm to health caused by the AI system's malfunction or erroneous outputs. The ongoing nature of these safety issues and the company's defensive response further support classification as an AI Incident rather than a hazard or complementary information. The harm is realized and documented, not merely potential.