AI Deepfake Videos Impersonate Doctors to Promote Dangerous Treatments

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos are impersonating trusted doctors to promote unverified and dangerous medical treatments on social media platforms like Facebook and Instagram. These videos exploit the credibility of well-known medical professionals, potentially endangering public health. Legal actions are being taken against Meta by affected individuals, including doctors like Michel Cymes and Hilary Jones.[AI generated]

Why's our monitor labelling this an incident or hazard?

The core issue is that AI-powered social media feeds are promoting harmful health content without proper vetting, directly driving the spread of dangerous medical claims. This constitutes an AI system’s use causing or facilitating harm to people’s health and well-being, meeting the definition of an AI Incident.[AI generated]
AI principles
Accountability
Safety
Transparency & explainability
Privacy & data governance
Respect of human rights
Robustness & digital security
Human wellbeing

Industries
Media, social platforms, and marketing
Healthcare, drugs, and biotechnology

Affected stakeholders
Workers
General public

Harm types
Physical (injury)
Public interest
Reputational
Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement
Monitoring and quality control

AI system task
Content generation
Organisation/recommenders


Articles about this incident or hazard

Deepfake videos of famous doctors surge, peddling dangerous treatments

2024-09-15
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
This is a report of a government and public-health governance process around the use of AI in epidemic response (mentions meetings, platform engagement, research findings, and guidelines), with no new realized or imminent AI-driven harm. It therefore constitutes complementary information on AI policy and ecosystem developments.
Experts warn of a flood of deepfake doctor videos selling dangerous treatments

2024-09-14
Yahoo News
Why's our monitor labelling this an incident or hazard?
The core issue is that AI-powered social media feeds are promoting harmful health content without proper vetting, directly driving the spread of dangerous medical claims. This constitutes an AI system’s use causing or facilitating harm to people’s health and well-being, meeting the definition of an AI Incident.
Beware 'deepfakes' of famous doctors promoting scams: experts

2024-09-14
The Daily Star
Why's our monitor labelling this an incident or hazard?
This is a direct AI Incident: generative AI is being misused to create realistic deepfake videos that impersonate well-known medical professionals, spreading harmful medical misinformation and posing a clear threat to public health.
Beware 'deepfakes' of famous doctors promoting scams - experts

2024-09-14
The Guardian
Why's our monitor labelling this an incident or hazard?
This is a realized harm: AI-generated deepfake videos misuse doctors’ likenesses to push untested ‘miracle cures,’ misleading vulnerable patients and endangering lives (health harm). The deepfakes are an AI system malfunction or misuse causing direct harm, so it is classified as an AI Incident.
Beware 'deepfakes' of famous doctors promoting scams: experts

2024-09-17
Digital Journal
Why's our monitor labelling this an incident or hazard?
The article describes active misuse of AI systems to create realistic fake videos of doctors recommending untested or harmful cures. This misuse directly endangers patients by disseminating misleading medical advice, fulfilling the definition of an AI Incident due to harm to health.
Experts Warn Of Scammers Using 'Deepfakes' Of Famous Doctors On Social Media

2024-09-14
NDTV
Why's our monitor labelling this an incident or hazard?
The event describes the active misuse of AI deepfake technology to deceive audiences into buying harmful products, directly endangering lives and constituting a realized harm from the malicious use of an AI system.
Beware 'Deepfakes' Of Famous Doctors Promoting Scams: Experts

2024-09-14
UrduPoint
Why's our monitor labelling this an incident or hazard?
This is an AI incident because generative AI deepfake technology is already being deployed on social media to impersonate real medical experts, actively misleading and endangering users. The misuse of AI here directly contributes to misinformation scams that threaten people’s health and finances, satisfying the criteria for realized harm.
Beware 'deepfakes' of famous doctors promoting scams

2024-09-17
Hurriyet Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos impersonating trusted doctors to promote harmful and unproven medical treatments, which risks endangering lives. The AI system (generative AI creating deepfakes) is directly involved in producing misleading content that causes harm to public health and trust. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and health. The harm is realized, not just potential, as these scams are actively spreading and influencing people.
Beware 'deepfakes' of famous doctors promoting scams: Experts

2024-09-14
The Gulf Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos that impersonate trusted medical professionals to promote unproven and potentially harmful products. This misuse of AI has directly led to harm by deceiving vulnerable audiences, risking their health and safety. The article details realized harm through scams and misinformation, fulfilling the criteria for an AI Incident under harm to health and harm to communities. The AI system's role is pivotal as it enables the creation of convincing fake videos that cause the harm.
Beware 'deepfakes' of famous doctors promoting scams: experts

2024-09-14
Daily Journal
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fake content. The use of these AI-generated videos to promote dangerous and untested health cures directly leads to harm to people's health by spreading misinformation and potentially causing people to avoid effective treatments. Therefore, this constitutes an AI Incident due to harm to health caused by the use of AI systems.
Beware 'deepfakes' of famous doctors promoting scams: experts

2024-09-14
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos that impersonate trusted medical professionals to promote harmful scams. The use of these AI-generated videos has directly led to harm by spreading misinformation that risks endangering lives, especially among older audiences who trust these figures. This fits the definition of an AI Incident because the AI system's use has directly led to harm to health.
Beware 'deepfakes' of famous doctors promoting scams: experts

2024-09-14
Brattleboro Reformer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos (an AI system) impersonating trusted doctors to promote scams involving false medical cures. This misuse of AI has directly led to harm by endangering lives through misinformation about health treatments, fulfilling the criteria for an AI Incident under harm to health. The harm is realized and ongoing, not merely potential, as people are being misled and at risk of injury or death. Therefore, this is classified as an AI Incident.
Beware 'deepfakes' of famous doctors promoting scams: experts

2024-09-14
Clay Center Dispatch On-Line
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfake videos) used maliciously to produce false medical endorsements that have already caused harm by promoting dangerous scams. The harm is realized as these videos mislead vulnerable populations, risking injury or harm to health. Therefore, this is an AI Incident because the AI system's use has directly led to harm to people's health through misinformation and scams.