OpenAI's Sora Sparks Deepfake Crisis and Emotional Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI's Sora app has enabled the mass creation and distribution of hyper-realistic AI-generated videos, including deepfakes of public figures and deceased celebrities. This has led to widespread misinformation, identity misuse, emotional distress for families, and erosion of public trust, with safety measures proving insufficient to prevent harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI system (OpenAI's Sora 2) generating realistic videos of deceased public figures that are false and offensive, causing emotional harm to their families and communities. The AI system's use directly leads to harm (emotional distress, reputational harm, and potential misinformation), fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as families express distress and public backlash is evident. The AI system's development and use are central to the event, and the harms include violations of rights related to likeness and harm to communities. Thus, the event is best classified as an AI Incident.[AI generated]
AI principles
Accountability, Human wellbeing, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological, Human or fundamental rights, Public interest, Reputational

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Hollywood's Fight With OpenAI Over Sora 2 Deepfakes Raises Legal and Market Questions

2025-10-12
Markets Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates deepfake videos from text prompts. The use of copyrighted characters and real people's likenesses without consent constitutes a violation of intellectual property and personal rights, which are recognized harms under the framework. Although the article does not report a specific harm having already occurred, the widespread unauthorized content creation and users' ability to bypass restrictions indicate a credible risk of harm. The ongoing legal disputes and concerns about misleading or harmful deepfake content further support the classification as an AI Hazard. The event focuses on potential and emerging harms rather than a documented incident, so it does not meet the threshold for an AI Incident. It is more than complementary information because it centers on the risk and controversy around the AI system's use and its implications.
AI videos of dead celebrities are horrifying many of their families

2025-10-11
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (OpenAI's Sora 2) generating realistic videos of deceased public figures that are false and offensive, causing emotional harm to their families and communities. The AI system's use directly leads to harm (emotional distress, reputational harm, and potential misinformation), fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as families express distress and public backlash is evident. The AI system's development and use are central to the event, and the harms include violations of rights related to likeness and harm to communities. Thus, the event is best classified as an AI Incident.
Sora revived dead celebs in AI slop, leaving families to fight for their dignity

2025-10-12
Digital Trends
Why's our monitor labelling this an incident or hazard?
The AI system (Sora 2) is explicitly mentioned as generating videos of deceased public figures without consent, which has caused harm to the families in the form of emotional distress and violation of rights to dignity and likeness. This constitutes a violation of human rights and intellectual property rights, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's use is central to the incident.
Are deepfakes of dead people rewriting the past?

2025-10-12
NBC News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's Sora) that generates deepfake videos, which are AI-generated content. The use of this AI system has directly led to harms including violation of rights of deceased individuals (intellectual property and personal rights), distress to families, and the spread of misinformation that can harm communities and societal trust. The harms are realized and ongoing, not merely potential. The article also discusses responses and detection tools but the primary focus is on the harms caused by the AI system's use. Hence, the event is best classified as an AI Incident.
OpenAI's Sora used to make deepfake AI videos of dead celebrities, outraging their families

2025-10-11
Fast Company
Why's our monitor labelling this an incident or hazard?
OpenAI's Sora is an AI system capable of generating deepfake videos, which directly led to the creation and dissemination of unauthorized videos of deceased celebrities. This use has caused emotional harm to their families, constituting harm to communities and a violation of rights. The incident is not merely a potential risk but an ongoing harm, as evidenced by family members' public complaints and distress. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing realized harm through misuse of likeness and emotional distress.
New app from OpenAI will have you never trust videos again

2025-10-12
KTAR News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates realistic videos from prompts, which is explicitly described. The misuse of this AI system is causing direct harm, including defamation, harassment, stalking, impersonation, and spreading false information, which constitute violations of rights and harm to communities. The article states these harms are occurring now, not just potential, and highlights the lack of legal recourse, indicating a significant impact. Therefore, this qualifies as an AI Incident due to realized harms directly linked to the AI system's use.
Did Jake Paul Really Come Out As Gay? AI Videos Spark Massive Online Confusion

2025-10-13
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora 2) used to create deepfake videos that have directly led to misinformation and unauthorized use of individuals' likenesses, which constitutes harm to personal rights and communities. The harm is realized as the videos have fooled thousands and caused distress to those depicted, including Jake Paul and others. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through misinformation and identity misuse. Although some affected individuals have responded, the primary focus is on the harm caused by the AI-generated content, not on responses or governance measures, so it is not Complementary Information.
Sora gives deepfakes 'a publicist and a distribution deal.' It could change the internet

2025-10-12
Louisville Public Media
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenAI's Sora app) that generates hyper-realistic synthetic videos (deepfakes) which are flooding social media platforms. The use of this AI system is directly linked to harms such as misinformation, erosion of public trust, potential scams, and the creation of harmful content including synthetic child sexual abuse material and state-sponsored propaganda. These constitute harm to communities and violations of rights. The article also notes that safety measures are being circumvented and that unregulated versions could exacerbate these harms. Given that these harms are occurring or are imminent and directly linked to the AI system's use, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Here's how you can get your special invite code for OpenAI's Sora 2 app and explore its AI video features

2025-10-13
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates realistic deepfake videos, which can plausibly lead to harms such as misinformation, reputational harm, or violations of rights if misused. Since no actual harm is reported but the potential for harm is credible and highlighted, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the capabilities and potential risks rather than describing a realized harmful event.
"We're toast": how Sora 2 works, the new AI raising concern over its hyper-realistic videos

2025-10-13
BioBioChile
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system generating realistic deepfake videos that are actively spreading on social media, misleading people and raising concerns about misinformation and manipulation. The article documents realized harm such as erosion of trust in digital evidence, potential incitement of hate, and ethical violations regarding the use of deceased persons' images. These harms fall under harm to communities and violations of rights. The AI system's use is the direct cause of these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.
New AI Tool Makes Faking Reality Frighteningly Easy

2025-10-13
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (Sora 2) that generates realistic videos from text prompts, which is a clear AI system. Although no specific incident of harm has been documented yet, the article outlines credible risks of disinformation, political manipulation, and reputational harm that could plausibly arise from the use of this AI system. The presence of watermarks and safeguards is noted but does not eliminate the risk. Since the harm is potential and not yet realized, this fits the definition of an AI Hazard rather than an AI Incident. The article is not merely a product announcement but focuses on the implications and risks of the AI system's use, excluding it from being Complementary Information or Unrelated.
Sora, the OpenAI app that creates videos with artificial intelligence: how to use it and what you need to avoid

2025-10-13
Clarin
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction of an AI video generation system with advanced capabilities and acknowledges potential risks related to misuse (deepfakes, disinformation). However, it does not report any actual incidents of harm or violations caused by the AI system. The presence of safeguards and the lack of reported harm indicate that the situation is a plausible future risk rather than a realized incident. Hence, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm but no harm has yet occurred.
OpenAI will allow families of "recently deceased" public figures to request that their likeness not be used in Sora

2025-10-13
El Español
Why's our monitor labelling this an incident or hazard?
The AI system Sora is explicitly mentioned as generating deepfake videos of deceased public figures, which have caused harm to families and communities by spreading misleading and offensive content. The harm is direct and realized, as families have expressed horror and distress. The event also involves the use of AI in a way that violates rights related to image and dignity. OpenAI's policy change to allow families to request blocking the use of images is a response to this harm but does not negate the fact that harm has occurred. Thus, this is an AI Incident, not merely a hazard or complementary information.
Sam Altman defends OpenAI's Sora 2 and the millions of copyright-infringing clips: 'The videos feel different'

2025-10-14
Vandal
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the deployment and use of an AI system (Sora 2) that generates video content including copyrighted characters without authorization, which directly implicates violations of intellectual property rights. The harm is realized as the AI-generated videos infringe on copyright holders' rights, causing legal and ethical issues. The presence of the AI system is clear, and the harm is direct and ongoing, fitting the definition of an AI Incident under violations of intellectual property rights.
Is art dead? What Sora 2 means for your rights, creativity, and legal risk

2025-10-14
ZDNet
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2 generative AI video creator) whose use has directly led to harms, including violations of intellectual property rights and the creation of misleading deepfake videos. The involvement of the AI system in generating infringing content and deepfakes that affect rights holders and individuals meets the criteria for an AI Incident. The discussion of legal disputes, rights holder objections, and the proliferation of infringing videos confirms that harm has materialized rather than being merely potential. Therefore, this event is best classified as an AI Incident.
Sora 2: American talent agencies and studios demand that OpenAI respect their intellectual property

2025-10-10
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2) used to generate videos that infringe on copyright-protected characters and works, leading to violations of intellectual property rights. The harm is realized as agencies and studios report unauthorized use of their clients' likenesses and copyrighted content. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of intellectual property rights. The ongoing responses and mitigation efforts by OpenAI are complementary information but do not negate the fact that harm has occurred.
C'est déjà demain - Sora 2: when AI video turns into a nightmare

2025-10-14
RMC
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system capable of generating realistic video content, which can plausibly lead to harms such as misinformation or deception. However, the article does not describe any realized harm or incident resulting from its use, only raising concerns about possible future misuse. Therefore, this event qualifies as an AI Hazard, reflecting the plausible risk of harm from the AI system's capabilities.
"A stupid waste of time and energy": OpenAI "resurrects" Robin Williams and Malcolm X and lands in the middle of a controversy

2025-10-13
Frandroid
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) generating videos of deceased public figures without consent, which constitutes a violation of rights (intellectual property and personal rights) and causes harm to communities by spreading disrespectful and potentially misleading content. The harm is realized as families express distress and public controversy arises. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use. The article also highlights the AI system's role in creating manipulated content that can be used for questionable purposes, reinforcing the incident classification.
With Sora 2, OpenAI makes deepfakes accessible to everyone (and it is making Hollywood panic)

2025-10-10
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system generating deepfake videos that infringe on intellectual property rights by using likenesses of celebrities and fictional characters without authorization. The article reports actual harm occurring, including violations of copyright and creators' rights, which fits the definition of an AI Incident under violations of intellectual property rights. The involvement of the AI system in generating unauthorized content directly leads to these harms. The article also mentions responses and mitigation efforts, but the primary focus is on the realized harm caused by the AI system's use.
AI video generators are now so good that you can no longer trust your eyes

2025-10-14
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The AI system (Sora and similar AI video generators) is explicitly mentioned and is used to generate realistic fake videos. The use of these AI systems has directly led to harms including the spread of disinformation and copyright infringement, which are violations of rights and harm to communities. The article provides concrete examples of such harms occurring, not just potential risks. Hence, this is an AI Incident rather than a hazard or complementary information.
Hollywood-AI battle heats up, as OpenAI and studios clash over copyrights and consent

2025-10-13
The Spokesman Review
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora 2) that generates synthetic videos using real people's likenesses and copyrighted characters. The use of this AI system has directly led to disputes over copyright infringement and unauthorized use of likenesses, which constitute violations of intellectual property rights and economic harm to individuals and studios. These harms fall under the definition of AI Incident, specifically under violations of human rights or breach of intellectual property rights. The article describes actual use and resulting backlash, not just potential harm, so it qualifies as an AI Incident rather than an AI Hazard or Complementary Information.
Families of Dead Celebrities Speak Out Against AI Videos

2025-10-13
EURweb
Why's our monitor labelling this an incident or hazard?
The AI system (Sora 2) is explicitly mentioned and is used to create realistic videos of deceased celebrities. The families' statements indicate emotional harm and distress caused by these videos, which is a form of harm to communities and potentially a violation of rights related to posthumous identity and likeness. The AI's use in generating these videos is the direct cause of the harm. Although legal protections are limited, the harm is real and ongoing. Hence, this event meets the criteria for an AI Incident.
Torenza Passport Woman Sparks Viral Frenzy: AI Deepfake Raises New Fears About Online Misinformation

2025-10-13
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the video is an AI-generated deepfake created using advanced generative AI tools (Sora 2). The viral spread of this fabricated content has led to widespread misinformation, which harms communities by blurring the line between fact and fiction and influencing public belief. The harm is realized as viewers were misled, and experts warn about public safety risks from such AI-generated hoaxes. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities through misinformation and potential safety risks.
OpenAI's Sora 2 Faces Backlash Over Copyright and Deepfake Concerns

2025-10-14
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article describes a deployed AI system (Sora 2) that is actively generating content leading to realized harms: copyright violations, misinformation, and risks of fraud and identity theft. The harms are directly linked to the AI system's outputs and its widespread use. Although OpenAI has implemented safeguards, the harms are ongoing and significant. This meets the criteria for an AI Incident because the AI system's use has directly and indirectly led to violations of intellectual property rights and harm to communities through misinformation and deception. The article also discusses governance and mitigation responses, but these do not overshadow the presence of actual harm.
In the age of AI, our face no longer belongs to us

2025-10-13
L'Opinion
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that generate deepfake videos and images, which have been used maliciously to impersonate individuals and extort money, causing direct harm to persons' rights and reputations. The misuse of AI-generated content for scams and identity theft constitutes a violation of rights and harm to communities. The presence of real incidents of harm (e.g., Bree Smith's case) confirms this is not merely a potential risk but an actual AI Incident. The article also discusses the lack of control over personal images used to train AI, reinforcing the violation of rights. Hence, the classification as AI Incident is appropriate.
Hollywood-AI battle heats up, as OpenAI and studios clash over copyrights and consent

2025-10-13
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The article focuses on the release and capabilities of an AI system for video generation involving real people, which implies AI system involvement. However, there is no indication that this has led to any injury, rights violations, or other harms yet. The mention of copyright and consent disputes points to potential future risks but does not confirm any actual harm or incident. Therefore, this is best classified as Complementary Information, as it provides context on AI developments and related societal/legal responses without describing a specific AI Incident or Hazard.
OpenAI revises Sora 2 to reconcile creativity and ethics

2025-10-13
Begeek.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) and its development and use, specifically addressing ethical and legal concerns related to the use of real persons' images and artists' styles. However, the article focuses on OpenAI's response and planned improvements to mitigate potential harms rather than describing any direct or indirect harm that has occurred. There is no report of injury, rights violations, or other harms caused by the AI system at this time. Therefore, this is best classified as Complementary Information, as it provides context on governance and ethical responses to prior issues rather than describing a new AI Incident or AI Hazard.
Sora 2 AI Videos: Families of Malcolm X, Robin Williams Condemn OpenAI Over 'Hurtful' Deepfakes

2025-10-13
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The Sora 2 app is an AI system generating deepfake videos, explicitly mentioned and clearly AI-based. The use of this AI system has directly led to harm: emotional distress to families of deceased individuals, reputational harm, and violations of intellectual property rights. The harm is realized and ongoing, as families have publicly condemned the content and OpenAI has had to revise policies in response. This fits the definition of an AI Incident because the AI system's use has directly caused harm to persons (families) and breaches of intellectual property rights. The event is not merely a potential hazard or complementary information but a clear incident with direct harm.
People Are Crashing Out Over Sora 2's New Guardrails

2025-10-13
404 Media
Why's our monitor labelling this an incident or hazard?
The article discusses the deployment and moderation of an AI video generation system and the resulting user experience, but it does not describe any realized harm or credible potential harm caused by the AI system. The content restrictions are a form of governance or safety measure rather than an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context on AI system use and governance without describing an AI Incident or AI Hazard.
Japanese Government Calls on Sora 2 Maker OpenAI to Refrain From Copyright Infringement, Says Characters From Manga and Anime Are 'Irreplaceable Treasures' That Japan Boasts to the World

2025-10-14
IGN
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system capable of generating videos with copyrighted characters, and its use has directly led to copyright infringement concerns. The Japanese government's formal request and the discussion of potential legal actions indicate that harm to intellectual property rights has occurred or is ongoing. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law intended to protect intellectual property rights. The article does not merely discuss potential future harm or responses but reports on realized infringement issues and official governmental reactions, confirming the incident status.
Japanese Government Calls on Sora 2 Maker OpenAI to Refrain From Copyright Infringement, Says Characters From Manga and Anime Are 'Irreplaceable Treasures' That Japan Boasts to the World

2025-10-14
IGN India
Why's our monitor labelling this an incident or hazard?
The AI system Sora 2 is explicitly mentioned as generating videos that include copyrighted characters without authorization, leading to copyright infringement. This constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. The Japanese government's formal request and political responses underscore the seriousness of the infringement. Although OpenAI plans to implement controls, the infringement is ongoing at the time of the report. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Japanese Government Makes Formal Request For OpenAI To Stop Copyright Infringement

2025-10-14
GameSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Sora 2) generating content that infringes on copyrighted material, leading to a formal government request to cease such infringement. This constitutes a violation of intellectual property rights, which fits the definition of an AI Incident under category (c). The harm is realized, not just potential, as the government and rights holders are responding to actual infringement and unauthorized use of protected content. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Why people are angry about guardrails on this AI platform

2025-10-14
CBC News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates realistic videos from text prompts, indicating AI involvement. However, the article centers on the introduction of stricter usage controls and the resulting user dissatisfaction, without reporting any realized harm or incident caused by the AI system. There is no indication of injury, rights violations, or other harms occurring or plausible harm imminent. The content is primarily about governance or policy changes and public reaction, which fits the definition of Complementary Information rather than an Incident or Hazard.
I used the new Sora video app, and we're all doomed

2025-10-14
Android Police
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Sora video generation app) and its use, highlighting potential risks of AI-generated deepfake videos spreading misinformation and causing harm to communities by undermining trust. However, it does not report any actual harm or incident resulting from the AI system's use. The concerns are about plausible future harms, making this an AI Hazard rather than an AI Incident. The article also notes existing safeguards but doubts their sufficiency, reinforcing the potential for future harm.
Sora 2 and Bob Ross Are a Match Made in A.I. Hell

2025-10-15
artnet News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (Sora 2) that generates synthetic video content, including deepfake-like clips of public figures and users. The harms described include emotional distress, potential misinformation, and reputational risks, but these are discussed in a general, ongoing, and cultural context rather than as a specific harmful event or incident. There is no direct report of injury, rights violation, or disruption caused by the AI system, nor a clear imminent risk of such harm. Instead, the article provides a critical perspective on the societal and artistic impact of AI-generated video content, making it a form of Complementary Information that enhances understanding of AI's broader implications and responses.
Hollywood-AI battle heats up, as OpenAI and studios clash over copyrights and consent

2025-10-14
Hartford Courant
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora 2) that generates videos using AI to create likenesses of real people and copyrighted characters. The use of these likenesses without explicit consent or licensing has led to direct legal and rights-based harms, including violations of copyright and labor rights, as stated by multiple industry stakeholders and unions. The harm is realized and ongoing, not merely potential, as evidenced by the backlash, legal threats, and union statements. This fits the definition of an AI Incident because the AI system's use has directly led to violations of intellectual property and labor rights, which are protected under applicable law.