AI Deepfakes Used in Fraudulent Medical Product Scams in Germany

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminal networks have used AI-generated deepfake videos and audio to impersonate German doctor Eckart von Hirschhausen, promoting fake medical products online. This has led to widespread deception, financial loss, and potential health risks for victims. Despite legal action, many deepfake ads remain online, highlighting ongoing harm and regulatory challenges.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating deepfake videos and audio that impersonate a real person to promote fraudulent medical products. This misuse has directly caused harm by misleading millions, potentially endangering health (harm to persons) and causing emotional distress. The AI system's role is pivotal in creating realistic fake content that facilitates this harm. Hence, it meets the criteria for an AI Incident due to realized harm stemming from AI misuse.[AI generated]
AI principles
Transparency & explainability; Accountability

Industries
Healthcare, drugs, and biotechnology; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property; Physical (injury)

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

Deepfake advertising: "Mein Gesicht gehört mir" ("My face belongs to me") - Hirschhausen and the ARD documentary

2026-05-03
Bild
Eckart von Hirschhausen: "They are playing, in a perfidious way, on the longing of the chronically ill"

2026-05-04
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic synthetic videos. The criminal use of such AI-generated deepfakes to deceive and defraud people constitutes an AI Incident because it directly leads to harm (financial and psychological) to victims. The article highlights ongoing harm from these deepfakes and the insufficiency of current legal measures, confirming realized harm rather than just potential risk.
The documentary "Hirschhausen und die Deepfake-Mafia"

2026-05-04
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that mislead consumers into purchasing fake medical products, causing direct harm (financial loss, deception, potential health risks) and violating personal rights. The AI system's use is malicious and leads directly to harm. The documentary documents realized harm and ongoing issues, not just potential risks. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Exposing AI fakes: Eckart von Hirschhausen warns of deepfake fraud - how to protect yourself

2026-05-04
News.de
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based deepfake technology to create fake videos of a public figure endorsing fraudulent medical products. This has caused direct harm to individuals who purchased these products, suffering financial loss and health dangers. The AI system's malicious use is central to the harm, fulfilling the criteria for an AI Incident. The article also discusses mitigation advice but the primary focus is on the realized harm caused by the AI system's misuse.
"Hirschhausen und die Deepfake-Mafia" heute im TV und Livestream: Ingo Zamperoni und Matthias Riedl am Montag zu Gast

2026-05-04
News.de
Why's our monitor labelling this an incident or hazard?
The article does not describe a direct or indirect AI incident or hazard but rather promotes a documentary that discusses AI-related harms such as deepfake-enabled fraud. Since it mainly serves as an informational preview and does not report new or ongoing harm or plausible future harm directly, it fits the category of Complementary Information, providing context and awareness about AI-related issues without describing a specific event of harm or risk.
"Hirschhausen und die Deepfake-Mafia" heute im TV und Livestream: Die Dokureihe am Montag, 4.5.2026

2026-05-04
News.de
Why's our monitor labelling this an incident or hazard?
The article mentions AI-related issues such as deepfakes and data misuse, which are relevant to AI harms, but it is primarily a TV show preview and does not report a concrete AI incident or hazard. It does not describe a specific event where AI systems have directly or indirectly caused harm or where harm is plausibly imminent. Therefore, it serves as complementary information providing context and awareness about AI-related risks rather than reporting a new incident or hazard.
Hirschhausen exposes the dangers of the deepfake mafia

2026-05-04
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The documentary focuses on the real and ongoing use of AI systems (deepfake technology) by criminals to produce deceptive content that harms people's trust and health, which fits the definition of an AI Incident. The AI system's use has directly led to harm through manipulation and fraud. Although the article is about a documentary, it reports on actual harms caused by AI deepfakes, not just potential or hypothetical risks, so it is not merely complementary information or unrelated news.
Hirschhausen's documentary exposes the perfidious deepfake industry

2026-05-04
stern.de
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic fake videos or images. The article highlights that the deepfakes have caused significant harm to individuals, including psychological distress and reputational damage, which fits the definition of an AI Incident. The harm is realized and directly linked to the use of AI-generated content, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Hirschhausen's documentary exposes the perfidious deepfake industry

2026-05-04
SÜDKURIER Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos that impersonate real people to promote fake health products, leading to psychological harm, identity theft, and health risks for consumers. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The harm is direct and realized, including violations of rights and health-related dangers. The article also highlights the systemic nature of this misuse and its impact on communities, confirming the classification as an AI Incident rather than a hazard or complementary information.
Köln | Hirschhausen's documentary exposes the perfidious deepfake industry

2026-05-04
Radio Bielefeld
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is a form of AI-generated synthetic media. The deepfakes have been used maliciously to impersonate real people and promote false medical products, leading to direct harm including psychological trauma, identity theft, and health risks to vulnerable individuals. The article provides concrete examples of harm occurring, such as an elderly woman being misled to stop her medications based on fake advertisements. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights, harm to health, and harm to communities. The focus is on actual harm caused by AI misuse rather than potential or future harm, so it is not an AI Hazard or Complementary Information.
Hirschhausen's documentary exposes the perfidious deepfake industry | Photo: Dominik Butzmann/WDR/dpa

2026-05-04
main-echo.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos that impersonate a real person without consent, causing identity theft, reputational harm, and psychological trauma. Additionally, the deepfakes promote false medical products, which pose health risks to vulnerable people who rely on these deceptive advertisements. These harms fall under violations of rights, harm to health, and harm to communities. The AI system's use is central to the incident, as the deepfake technology enables the creation and dissemination of these harmful fakes. Hence, this is an AI Incident.
Hirschhausen's documentary exposes the perfidious deepfake industry

2026-05-04
Marler Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is a form of AI-generated synthetic media. The deepfakes have been used maliciously to impersonate individuals and promote false health products, causing direct harm to the victims' reputations and potentially to the health of consumers. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (psychological trauma, health risks) and violations of rights (identity theft, defamation).