Italian Woman Uses AI-Generated Images to Commit Funeral Fraud


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Northern Italy, a woman used AI-generated images to fabricate the death of her pregnant daughter, deceiving a former colleague and obtaining money under false pretenses. The AI-created funeral photos made the story more convincing, leading to financial harm before the fraud was uncovered by relatives.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate fabricated images (photos of a funeral) to perpetrate a scam, which directly caused financial harm to the victim. The AI's role in creating convincing fake content was pivotal in enabling the deception and resulting harm. Therefore, this qualifies as an AI Incident due to realized harm (financial fraud) caused by AI-generated content.[AI generated]
AI principles
Transparency & explainability
Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Scamming an ex-colleague: Italian woman fakes the death of her pregnant daughter

2026-05-14
Spiegel Online
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fabricated images (photos of a funeral) to perpetrate a scam, which directly caused financial harm to the victim. The AI's role in creating convincing fake content was pivotal in enabling the deception and resulting harm. Therefore, this qualifies as an AI Incident due to realized harm (financial fraud) caused by AI-generated content.

AI-generated images expose brazen fraud

2026-05-14
20 Minuten
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated images were used to fabricate evidence of a death and funeral, which was part of a scam that caused financial harm to the victim. The AI system's outputs were pivotal in enabling the fraud, which resulted in realized harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial loss) to a person. Hence, the event is classified as an AI Incident.

Fake funeral: Italian woman fakes her daughter's death with AI images

2026-05-14
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The use of AI-generated images to perpetrate a fraud that caused financial harm to a person qualifies as an AI Incident. The AI system's outputs were instrumental in deceiving the victim, leading to realized harm. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of AI in causing harm through fraudulent use.

Woman fabricates daughter's death with AI photos: ex-colleague cheated out of money

2026-05-14
Express.de
Why's our monitor labelling this an incident or hazard?
The use of AI-generated images to fabricate evidence in a fraudulent scheme constitutes the use of an AI system in a harmful way. The harm here is financial fraud, which is a significant harm to the victim. Since the AI system's use directly contributed to the deception and resulting financial loss, this qualifies as an AI Incident under the definition of harm caused by the use of an AI system.

Mother uses AI to fake her daughter's death, and cashes in

2026-05-14
Der Westen
Why's our monitor labelling this an incident or hazard?
The woman used AI to create fake images of a funeral, which were sent to the victim to support a false story about her daughter's death. This use of AI directly enabled the fraud and financial harm to the victim. The AI system's involvement is explicit and directly caused harm (financial loss through deception). Hence, this event meets the criteria for an AI Incident because the AI system's use led directly to harm to a person (financial harm) through deception.

Woman fakes her daughter's death with AI images

2026-05-15
Heute.at
Why's our monitor labelling this an incident or hazard?
The use of AI-generated images to deceive and commit fraud directly led to financial harm (a form of harm to property) to the victim. The AI system's involvement in generating fake images was pivotal in enabling the fraudulent scheme. Therefore, this qualifies as an AI Incident because the AI system's use directly contributed to realized harm through deception and financial loss.

Italian woman fakes daughter's death with AI images

2026-05-14
shz.de
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate fake images that were instrumental in deceiving the victim, causing financial harm. The harm is realized and directly linked to the AI-generated content used in the fraud. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (financial loss) to a person.

Fake funeral: Italian woman fakes her daughter's death with AI images

2026-05-14
Schwarzwälder Bote
Why's our monitor labelling this an incident or hazard?
The use of AI-generated images to deceive and cause financial loss constitutes harm to a person (the victim of the scam). The AI system's use in generating fake images was pivotal in enabling the fraud, thus directly leading to harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

South Tyrol: Italian woman fakes daughter's death with AI images

2026-05-14
Frankenpost
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate images that were instrumental in committing fraud, which caused financial harm to a person. The AI system's use directly led to harm (financial loss) through deception, fitting the definition of an AI Incident.

Italian woman fakes daughter's death with AI images

2026-05-14
Westdeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate images that were part of a deceptive scheme causing financial harm to a person. The AI-generated images were instrumental in making the false story believable, leading to the victim transferring money under false pretenses. This constitutes an AI Incident because the AI system's use directly contributed to harm (financial fraud) against an individual.

Fake funeral: Italian woman fakes her daughter's death with AI images

2026-05-14
Neue Presse Coburg
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate images that were instrumental in committing fraud, which is a form of harm to a person (financial harm). The AI system's use directly led to the harm by enabling the deception. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated content.

Italian woman fakes daughter's death with AI images

2026-05-14
BVZ - Burgenländische Volkszeitung
Why's our monitor labelling this an incident or hazard?
The use of AI-generated images to fake a death story involves an AI system's use in deception. However, the article does not describe any direct or indirect harm such as physical injury, rights violations, or community harm resulting from this deception. The event is about misuse of AI-generated content but lacks evidence of materialized harm meeting the AI Incident threshold. It also does not describe a plausible future harm scenario that would qualify as an AI Hazard. Hence, it fits best as Complementary Information illustrating misuse of AI-generated images in social contexts.

Fake funeral: Italian woman fakes her daughter's death with AI images

2026-05-14
General-Anzeiger Bonn
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-generated images to perpetrate a fraud, which directly led to financial harm to the victim. The AI system's involvement is clear and pivotal in enabling the deception. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (financial loss) to a person.