OpenAI Shuts Down Sora App After Deepfake Harms and Backlash


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI abruptly shut down its AI-powered video app Sora following widespread backlash over the creation and dissemination of non-consensual and misleading deepfake videos, including some depicting public figures. The app's misuse raised significant concerns about personal rights violations and reputational harm, prompting its discontinuation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Sora app is an AI system that generates video content, including deepfakes, from user prompts. Its use directly caused harm by enabling the creation and dissemination of non-consensual and potentially defamatory videos, violating individual rights and harming society. The shutdown responds to these realized harms, indicating that the system's use has already led to an AI Incident. Because the article details harms that have occurred and the reaction to them, rather than potential future risks or general background, it is not merely a hazard or complementary information.[AI generated]
AI principles
Respect of human rights; Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights; Reputational

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


OpenAI abruptly shuts down the Sora video app after deepfake backlash. Source: Euronews

2026-03-25
Investing.com Ελληνικά

The end for Sora: OpenAI announces the shutdown of the app and the video-generation API

2026-03-24
Insomnia.gr
Why's our monitor labelling this an incident or hazard?
The article details OpenAI's closure of an AI-powered video-generation application and API, a significant development in the AI ecosystem. However, there is no indication that the AI system caused any injury, rights violation, disruption, or other harm, nor any credible risk of such harms in the future. The content is primarily about a product shutdown and strategic refocus, which fits the definition of Complementary Information: it provides context and updates about AI developments without reporting an incident or hazard.

OpenAI withdraws Sora: Disney deal cancelled and a pivot to coding AI

2026-03-25
Techgear.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) but does not describe any realized or potential harm caused by its development, use, or malfunction. The cancellation of the product and the Disney deal are business decisions driven by cost and strategy, not by AI-related harm. The article mainly provides complementary information about OpenAI's strategic pivot and the AI industry's maturation, which fits the definition of Complementary Information rather than an Incident or Hazard.

Why OpenAI is pulling the plug on Sora | Techblog.gr

2026-03-25
Techblog.gr
Why's our monitor labelling this an incident or hazard?
The Sora app is an AI system for generating short videos. Its use led to the creation of non-consensual and misleading deepfake videos involving public figures, harming individuals and communities and violating their rights. The article describes these harms as having occurred and as significant enough to prompt OpenAI to discontinue the app. This fits the definition of an AI Incident, since the system's use directly or indirectly led to harm. The article is not merely about potential future harm or a governance response but about actual harm and the company's reaction to it.

OpenAI shuts down the Sora app after deepfake backlash

2026-03-25
euronews
Why's our monitor labelling this an incident or hazard?
The Sora application is an AI system that generated synthetic video content, including deepfakes of real people made without their consent, violating intellectual property and personal rights. The article details that these harms occurred and led to significant backlash and the eventual shutdown of the app. The AI system's use directly caused these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by AI use.

OpenAI: End of the Sora app - a blow to Disney

2026-03-25
Business Voice
Why's our monitor labelling this an incident or hazard?
Sora is an AI system that generated realistic videos from text, including deepfakes, which are known to cause harms such as misinformation, violations of personal rights, and reputational damage. The article explicitly mentions the spread of non-consensual images and deepfakes, which harm individuals and communities. The shutdown is a response to these realized harms, indicating that the AI system's use has directly led to an AI Incident. The involvement of Disney and the licensing of its characters further underscore the intellectual property concerns. Therefore, this event qualifies as an AI Incident.