AI-Generated Fake Doctors Spread Harmful Medical Misinformation on Social Media


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated bots impersonating doctors have spread false medical advice, such as claims that chia seeds cure diabetes, to millions on social media. These widely viewed and shared videos endanger public health by exploiting trust in medical professionals to disseminate inaccurate health information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating fake doctor personas and false medical claims on social media, which directly leads to harm by spreading misinformation that can negatively affect people's health decisions. The AI-generated content impersonates authoritative medical figures, increasing the risk of harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and health (harms a and d).[AI generated]
AI principles
Safety
Transparency & explainability
Human wellbeing
Accountability
Robustness & digital security
Respect of human rights
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Healthcare, drugs, and biotechnology

Affected stakeholders
General public

Harm types
Physical (injury)
Public interest
Reputational
Psychological
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Fact check: AI doctors on social media spreading fake claims

2023-10-09
Hindustan Times

Fact check: AI doctors on social media spreading fake claims

2023-10-07
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake doctor images and videos that spread false medical claims, which can directly harm people's health by misleading them about treatments and cures. The AI's role in creating these deceptive personas and content is central to the harm caused. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people's health through misinformation. The article also discusses the risks of AI in medical diagnostics and of chatbots providing incorrect answers, reinforcing the potential for harm. It is therefore classified as an AI Incident.

Fact check: AI doctors on social media spreading fake claims

2023-10-11
Frontline
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic doctor personas and medical advice videos that spread false health claims. This misinformation can directly harm public health by misleading people about treatments and cures, constituting harm to communities and a violation of rights to truthful information. The AI's role is pivotal as it creates the deceptive content and impersonates medical professionals, amplifying the impact. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated misinformation and impersonation.

Fact check: AI doctors on social media spreading fake claims

2023-10-09
The Daily Star
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic doctor videos that spread false medical information and have already misled people about health treatments. The AI-generated content impersonates medical professionals, exploiting their authority to disseminate misinformation. This directly violates the right to accurate health information and can harm individuals' health (harm to persons). The article documents realized harm, with false claims widely viewed and shared, and describes the AI system's role in producing and disseminating this misinformation. It therefore meets the criteria for an AI Incident rather than a hazard or complementary information.

Fact check: AI doctors on social media spreading fake claims - The Street Journal

2023-10-07
Breaking News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake doctor personas that spread false medical advice on social media, which has already caused harm by misleading large audiences. The AI's role is pivotal in creating and disseminating this misinformation, which can lead to injury or harm to people's health (harm category a) and harm to communities (harm category d). Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.