Families Condemn OpenAI Sora for AI-Generated Deepfake Videos of Deceased Celebrities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Families of Robin Williams and George Carlin criticized OpenAI's Sora for generating AI videos that replicate the voices and images of deceased celebrities, causing emotional distress and raising concerns about digital rights and legacy. The incident highlights harm from unauthorized AI-generated deepfakes and calls for stricter platform controls.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Sora) is explicitly mentioned as generating videos that replicate deceased celebrities' voices and images. The families' reactions indicate that harm has occurred, specifically emotional harm and violation of rights related to digital likeness and legacy, which falls under violations of human rights or breach of obligations protecting intellectual property and personal rights. The distress caused by unsolicited AI-generated videos and the potential ongoing misuse of deceased persons' images constitute direct harm linked to the AI system's use. Hence, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

The Williams and Carlin families take on OpenAI and the "digital resurrection" of artists | LiFO

2025-10-09
LiFO
Sora offers better control over videos featuring your AI likeness

2025-10-06
SecNews.gr
Why's our monitor labelling this an incident or hazard?
Sora is an AI system that generates deepfake videos, which can cause harm through misinformation and misuse. The article focuses on new controls to mitigate these risks, indicating awareness of potential harms, but no actual incident is reported. The presence of AI-generated deepfakes and concerns about misinformation constitute a plausible risk of harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. The article is not merely general AI news, as it discusses specific AI system features and their risk management.
OpenAI: Sora reaches 1 million downloads

2025-10-09
Business Voice
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Sora) used to generate video content, which is alleged to infringe on intellectual property rights, a recognized form of harm under the AI Incident definition. However, the article focuses on reactions, concerns, and calls for OpenAI to take action rather than documenting a confirmed legal violation or realized harm. There is no direct evidence that the AI system's use has yet led to a legally recognized breach or harm, only plausible concerns and ongoing debate. This aligns with the definition of Complementary Information, which includes societal and governance responses to AI-related issues. Hence, the event is not an AI Incident or AI Hazard but Complementary Information.