AI-Generated Deepfake Doctors Spread Health Misinformation on Social Media


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos featuring fake doctors have flooded TikTok and other social media platforms, promoting unproven supplements and spreading false health information. These videos, based on manipulated real footage, mislead users, especially menopausal women, into buying products from a U.S. supplement company, prompting calls for stricter AI content regulation. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly through the creation of deepfake videos using artificial intelligence. The misuse of these AI-generated videos to spread false health information and promote supplements constitutes harm to individuals' health and communities by misleading them. The harm is realized, not just potential, as users are deceived and influenced by these videos. The platforms' delayed response and the widespread nature of the videos further confirm the incident's impact. Hence, this is classified as an AI Incident. [AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Human wellbeing

Industries
Healthcare, drugs, and biotechnology; Media, social platforms, and marketing

Affected stakeholders
Consumers; Women

Harm types
Economic/Property

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation

Articles about this incident or hazard


Fake-doctor scam on TikTok: Deepfake videos flood the social platform

2025-12-07
parapolitika.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation of deepfake videos using artificial intelligence. The misuse of these AI-generated videos to spread false health information and promote supplements constitutes harm to individuals' health and communities by misleading them. The harm is realized, not just potential, as users are deceived and influenced by these videos. The platforms' delayed response and the widespread nature of the videos further confirm the incident's impact. Hence, this is classified as an AI Incident.

Warning: Fake doctors flood social media, spreading erroneous medical advice

2025-12-07
The TOC
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation of deepfake videos using artificial intelligence. The misuse of these AI-generated videos to spread false medical advice constitutes a direct harm to communities and individuals' health, fulfilling the criteria for an AI Incident. The misinformation can lead to physical harm or health deterioration if people follow incorrect advice. Therefore, this is not merely a potential hazard or complementary information but a realized AI Incident involving harm caused by AI misuse.

"Fake doctors" flood social networks with deepfake videos

2025-12-07
HuffPost Greece
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation of deepfake videos using artificial intelligence. These videos impersonate trusted medical professionals to promote products falsely, which directly leads to misinformation and potential health harm to viewers. The harm is realized as the misinformation is actively disseminated and has caused reputational damage and public concern. The involvement of AI in generating deceptive content that misleads people about health matters fits the definition of an AI Incident, as it causes harm to communities and potentially to individuals' health.

TikTok: "Fake doctors" flood the platform spreading fake news

2025-12-07
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos that spread false health claims, which have been disseminated on major social media platforms. This misinformation can cause harm to individuals' health by promoting unproven supplements and misleading vulnerable populations. The AI system's role in generating these videos is central to the harm, fulfilling the criteria for an AI Incident involving harm to health and communities. The removal of videos by TikTok after complaints confirms the harm has materialized and is recognized by platforms.

"Fake doctors" flood social networks spreading misinformation

2025-12-07
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that misrepresent real health experts to promote unproven supplements, leading to misinformation that can harm people's health. The harm is direct and realized, as the videos are actively spreading false health claims and influencing consumer behavior. The involvement of AI in generating these videos is central to the incident, fulfilling the criteria for an AI Incident under the framework, specifically harm to health and communities through misinformation.

Fraudsters "clone" doctors and influencers using Artificial Intelligence

2025-12-09
Cretalive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated audiovisual outputs generated by AI. The use of these AI-generated deepfakes has directly led to harm by misleading consumers into purchasing potentially ineffective or harmful supplements, thus causing harm to communities and violating rights of the impersonated individuals. The presence of real harm (deception, misinformation, violation of personal rights) and the direct role of AI in generating the harmful content qualifies this as an AI Incident rather than a hazard or complementary information.

Fraudsters "clone" doctors and influencers using Artificial Intelligence to sell dietary supplements

2025-12-08
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos that manipulate real individuals' images and voices to promote products falsely. This misuse of AI leads to misinformation, deception, and potential health harm to viewers, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the videos are actively spreading false endorsements causing misleading influence on consumers. Therefore, the event is classified as an AI Incident.

Deepfake / "Fake doctors" flood social networks spreading misinformation

2025-12-08
TVXS - TV Χωρίς Σύνορα
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that misrepresent medical professionals and spread false health information, which has already occurred and caused harm by misleading the public. The misinformation about health products and fabricated endorsements can lead to health risks and exploitation, thus meeting the definition of an AI Incident due to direct harm to communities and potential injury to individuals' health.

Deepfake / "Fake doctors" flood social networks spreading misinformation

2025-12-08
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit through the creation of deepfake videos using artificial intelligence. The use of these AI-generated videos to spread false health claims constitutes indirect harm to the health of people and communities by promoting misinformation. Therefore, this event meets the criteria of an AI Incident as the AI system's use has directly led to harm through misinformation dissemination.