Austrian Man Sentenced for Using ChatGPT to Forge Medical Diploma

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A 27-year-old Austrian engineer used ChatGPT to create a fake medical diploma, which he submitted to the Upper Austrian Medical Association in an attempt to be listed as a doctor. He received a five-month suspended sentence for document forgery. The incident highlights the misuse of AI for generating fraudulent credentials.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (ChatGPT) was explicitly used to create a fake official document, which was then used to deceive a regulatory body. This constitutes a violation of laws protecting official documents, fulfilling the criteria for an AI Incident. The harm is realized: the forgery of a protected document and the attempt to mislead an official institution breach legal obligations and rights.[AI generated]
AI principles
Accountability

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Business

Harm types
Other

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fake doctor sentenced to suspended prison term

2026-02-16
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to create a fake official document, which was then used to deceive a regulatory body. This constitutes a violation of laws protecting official documents, fulfilling the criteria for an AI Incident. The harm is realized: the forgery of a protected document and the attempt to mislead an official institution breach legal obligations and rights.
Using AI: Fake medical diploma: Five-month suspended sentence for Upper Austrian man

2026-02-16
www.kleinezeitung.at
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT to generate a fake medical diploma directly led to a criminal act of forgery and an attempt to deceive a medical regulatory body. This misuse of AI caused harm by undermining legal protections and could have endangered public health had the fake credentials been accepted. The event meets the criteria for an AI Incident because the AI system's use directly resulted in a violation of law and harm to institutional integrity.
Medical degree forged with AI: Suspended sentence for graduate engineer

2026-02-16
nachrichten.at
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT, an AI language model, to create a fake medical degree document that was submitted to an official body directly caused legal harm and a breach of trust in professional credentials. This constitutes a violation of legal obligations protecting official documentation. The event describes a realized harm caused by the AI system's use, meeting the criteria for an AI Incident.
With AI to a fake medical diploma: Five-month suspended sentence for Upper Austrian man

2026-02-16
Die Presse
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT to create a fake medical diploma directly contributed to the commission of a forgery crime, a breach of legal obligations protecting official documentation. This constitutes a violation of applicable law and undermines trust in professional credentials. The event describes actual harm through the fraudulent act and the subsequent legal judgment, thus qualifying as an AI Incident under the framework.
Five-month suspended sentence for medical diploma from ChatGPT

2026-02-16
Vienna Online
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT to create a fake medical diploma directly led to a criminal act of forgery, a violation of laws protecting official documents and professional credentials. The AI system was instrumental in the commission of this offense, fulfilling the criteria for an AI Incident. The harm is legal and societal: a breach of obligations under applicable law and potential damage to the medical profession's integrity. The event is not merely a potential risk but a realized harm with legal consequences, so it is not an AI Hazard or Complementary Information.
Five-month suspended sentence for medical diploma from ChatGPT

2026-02-16
BVZ - Burgenländische Volkszeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to generate fraudulent content (a fake medical diploma), which was then used to deceive a professional authority. This constitutes misuse of AI leading to a violation of legal and professional rights, thus meeting the criteria for an AI Incident. The harm is indirect but clear: the AI system was instrumental in producing false documents that could undermine trust in medical certification and regulatory processes.