AI Deepfake of Eckart von Hirschhausen Endorses Fake Medications


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Eckart von Hirschhausen, a German physician and TV host, was impersonated in AI-generated deepfake videos that falsely showed him endorsing weight-loss, heart, and potency drugs. The deepfakes deceived consumers into buying ineffective or harmful medications, causing financial losses, health risks, and reputational damage, and prompted him to call for stricter AI regulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated videos (deepfakes) that impersonate a person to promote medications, causing deception and harm to individuals who trust and act on this misinformation. This constitutes harm to people (health-related harm) due to the AI system's use in creating misleading content. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and potential health risks.[AI generated]
AI principles
Transparency & explainability; Safety; Privacy & data governance; Accountability; Robustness & digital security; Respect of human rights; Human wellbeing

Industries
Healthcare, drugs, and biotechnology; Media, social platforms, and marketing

Affected stakeholders
Consumers; Other

Harm types
Economic/Property; Physical (injury); Reputational

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Hirschhausen wants to talk more about AI

2025-01-27
stern.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos (deepfakes) that impersonate a person to promote medications, causing deception and harm to individuals who trust and act on this misinformation. This constitutes harm to people (health-related harm) due to the AI system's use in creating misleading content. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and potential health risks.

Eckart von Hirschhausen became the victim of an AI deepfake

2025-01-27
DIGITAL FERNSEHEN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos that impersonate a public figure to promote fake medication recommendations. This has caused direct harm to people who were deceived into buying these products, as well as reputational harm to the individual and broader societal harm by undermining trust. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Hirschhausen wants to talk more about AI

2025-01-27
shz.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos causing harm by misleading people into purchasing ineffective or harmful medications, which constitutes direct harm to individuals' health and finances. Additionally, the erosion of trust in media and institutions is a harm to communities. Therefore, this qualifies as an AI Incident due to realized harms directly linked to the use of AI systems (deepfake generation).

TV News: Hirschhausen wants to talk more about AI

2025-01-27
News.de
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake videos used to deceive people into purchasing ineffective or harmful medications, causing financial harm and spreading misinformation. The AI system's use directly led to harm to people and communities, as well as reputational damage and erosion of trust, fitting the definition of an AI Incident. The article explicitly states that harm has occurred, not merely that it could occur, and the AI system's role in creating the fake videos is pivotal.

Hirschhausen wants to talk more about AI

2025-01-27
Volksstimme.de
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake videos impersonating a well-known person to promote fake medical products, causing direct harm to individuals who are deceived and lose money, as well as indirect harm to the person's reputation and to trust in media. The AI system's use has directly led to realized harms (financial and reputational), fulfilling the criteria for an AI Incident under the framework.