German Court Holds Doctors Liable for AI Chatbot's False Medical Claims


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A German court ruled that doctors operating Aesthetify GmbH are liable for their website's AI chatbot, which falsely claimed they held specialist medical titles. The chatbot's misleading responses led to legal action by a consumer protection group, resulting in a ban on such false statements and a requirement for corrective measures.[AI generated]

Why's our monitor labelling this an incident or hazard?

The chatbot, an AI system, made false claims about specialist medical titles, misleading consumers and constituting unlawful business practices. The court ruling attributes responsibility to the doctors operating the chatbot, confirming the AI system's role in causing harm through misinformation. This meets the criteria for an AI Incident because the AI system's use directly led to a violation of legal obligations and consumer rights. The event is not merely a potential risk or complementary information but a realized harm caused by the AI system's outputs.[AI generated]
AI principles
Accountability, Transparency & explainability

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Reputational, Economic/Property

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard


North Rhine-Westphalia: Doctors liable for chatbot statements on specialist medical titles

2026-05-12
N-tv
Why's our monitor labelling this an incident or hazard?
The chatbot, an AI system, made false claims about specialist medical titles, misleading consumers and constituting unlawful business practices. The court ruling attributes responsibility to the doctors operating the chatbot, confirming the AI system's role in causing harm through misinformation. This meets the criteria for an AI Incident because the AI system's use directly led to a violation of legal obligations and consumer rights. The event is not merely a potential risk or complementary information but a realized harm caused by the AI system's outputs.

Ruling: Doctors liable for chatbot statements on specialist medical titles

2026-05-12
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
An AI system (the chatbot) was used to communicate with customers and made false claims about medical qualifications. This misinformation violates legal obligations that protect consumers from misleading commercial practices, a breach of applicable law intended to protect fundamental rights (consumer protection). The harm is realized: consumers were misled by the chatbot's false statements. The AI system's use directly led to this harm, and the court's ruling confirms liability. Therefore, this qualifies as an AI Incident.

Doctors in trouble over AI: Chatbot invents titles - court ruling

2026-05-12
Express.de
Why's our monitor labelling this an incident or hazard?
The chatbot is explicitly an AI system generating content (false medical titles). The false information harmed consumers by misleading them and breached legal standards that protect their rights. The court ruling confirms both the harm and the responsibility for the AI system's outputs. This is therefore an AI Incident: the use of an AI system directly led to harm (legal and reputational) and a violation of rights.

Doctors liable for chatbot statements on specialist medical titles

2026-05-12
stern.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a chatbot) that provided false information, which is relevant to AI system use. However, there is no indication that this has directly or indirectly caused harm such as injury, rights violations, or other significant harms. The legal case is ongoing, and the article mainly reports on the court's decision and the allowance of an appeal. Therefore, this is not an AI Incident or AI Hazard but rather complementary information about legal and governance responses related to AI chatbots.

Ruling in Hamm: Cosmetic surgeons Rick and Nick are responsible for AI chatbot answers

2026-05-12
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the chatbot) that generated false information about the doctors' qualifications. The court ruling establishes that the operators are responsible for the chatbot's outputs, indicating that the AI system's use led to a violation of legal obligations and misinformation harm. This fits the definition of an AI Incident as the AI system's use directly led to a breach of applicable law and harm to the parties involved. The harm is realized (legal consequences and misinformation), not just potential, so it is not an AI Hazard. It is not merely complementary information because the main focus is the legal ruling about the chatbot's harmful outputs. It is not unrelated because the AI system is central to the event.

AI ruling in NRW: Doctors liable for their chatbot's false statements on specialist titles

2026-05-12
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system providing information about medical qualifications. Its false statements harmed consumers by misleading them, in violation of legal protections. The court ruling confirms that the AI system's outputs led to this harm. The event therefore involves the use of an AI system whose outputs caused realized harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Doctors liable for chatbot statements on specialist medical titles

2026-05-12
NOZ
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system generating responses to customer inquiries, including false claims about medical specialist titles. These false statements constitute a violation of legal and consumer protection rights, causing harm to consumers by misleading them. The court ruling attributes responsibility to the physicians operating the chatbot, confirming the AI system's role in causing the harm. Hence, this qualifies as an AI Incident due to the direct link between the AI system's outputs and the legal harm caused.

Doctors liable for false statements made by their own AI chatbot

2026-05-12
AerzteZeitung.de
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system used to answer patient inquiries in real time. Its false statements about the doctors' qualifications directly led to legal harm and a court ruling against the company. The AI system's outputs caused a violation of legal and consumer rights, which fits the definition of an AI Incident as the AI system's use directly led to harm (legal and reputational harm, violation of laws). Therefore, this event qualifies as an AI Incident.