OpenAI's Sora 2 AI Video App Raises Copyright and Privacy Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI is preparing to launch Sora 2, a TikTok-style app featuring only AI-generated videos. The app includes identity verification and allows use of personal likenesses, raising concerns about potential misuse, privacy violations, and copyright infringement, as rights holders must opt out to prevent their content's use.[AI generated]

Why's our monitor labelling this an incident or hazard?

The app Sora is an AI system enabling deepfake generation, which can plausibly lead to harms such as violations of personality rights and disinformation impacting communities. Although some misuse is already occurring, with users bypassing safeguards, the article does not report any realized harm or incidents resulting from these deepfakes. Hence, the event fits the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to rights and communities, but no direct or indirect harm has yet been documented.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Transparency & explainability

Industries
Media, social platforms, and marketing
Arts, entertainment, and recreation

Affected stakeholders
General public
Business

Harm types
Economic/Property
Human or fundamental rights

Severity
AI hazard

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard

OpenAI is preparing Sora 2 with TikTok style AI video platform: Report

2025-09-30
FoneArena
Why's our monitor labelling this an incident or hazard?
The article describes the development and internal testing of an AI video generation platform with features to mitigate risks such as copyright infringement and child safety. While it references ongoing lawsuits related to copyright, these are not new incidents caused by Sora 2 itself. No harm has occurred or is reported as imminent. The content mainly provides updates on AI system development, governance measures, and market context, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Sora 2 and its namesake video app debut, aiming to contend for the "new king" of short-video social media

2025-10-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) and its use in a social app generating AI videos. However, the article does not describe any direct or indirect harm caused by the AI system, nor does it indicate a plausible risk of harm occurring imminently. It is primarily an announcement and analysis of a new AI product and its potential societal effects, without concrete incidents or hazards. Therefore, it fits best as Complementary Information, providing context and understanding of AI developments and their implications rather than reporting an AI Incident or AI Hazard.

OpenAI's new social app Sora: a paradise for deepfake creations

2025-10-01
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The app Sora is an AI system enabling deepfake generation, which can plausibly lead to harms such as violations of personality rights and disinformation impacting communities. Although some misuse is already occurring, with users bypassing safeguards, the article does not report any realized harm or incidents resulting from these deepfakes. Hence, the event fits the definition of an AI Hazard, as it could plausibly lead to AI Incidents involving harm to rights and communities, but no direct or indirect harm has yet been documented.

OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports

2025-09-29
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora video generator) that uses copyrighted material in generated videos. The use of copyrighted content without explicit consent can constitute a violation of intellectual property rights, which is a form of harm under the AI Incident definition. However, since the product is planned for release and the opt-out process is being set up, no actual harm or violation has yet been reported. Therefore, this situation represents a plausible risk of harm to intellectual property rights in the future, qualifying it as an AI Hazard rather than an AI Incident.

OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports

2025-09-29
Reuters
Why's our monitor labelling this an incident or hazard?
The article discusses the upcoming release of an AI video generator that will include copyrighted material unless rights holders opt out, which could plausibly lead to violations of intellectual property rights (an AI Incident category). However, since the product is not yet released and no harm or violation has been reported, this situation represents a plausible future risk rather than a realized harm. Thus, it fits the definition of an AI Hazard rather than an AI Incident. The article also provides contextual information about the product and its features but does not focus primarily on responses or governance measures, so it is not Complementary Information.

OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports

2025-09-29
Investing.com
Why's our monitor labelling this an incident or hazard?
The article discusses the upcoming release of an AI video generator that could use copyrighted material unless rights holders opt out, which could plausibly lead to violations of intellectual property rights. Since no actual harm or violation has been reported yet, and the focus is on the planned product and its opt-out mechanism, this fits the definition of an AI Hazard. The AI system's development and intended use could plausibly lead to harm (copyright infringement), but no direct or indirect harm has occurred at this stage.

OpenAI's New Sora Video Generator to Require Copyright Holders to Opt Out

2025-09-29
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora video generator) that uses copyrighted content unless copyright holders opt out. This implicates potential violations of intellectual property rights, a form of harm under the AI Incident definition. Since the use of copyrighted material is planned and will occur unless opt-out is exercised, this constitutes an ongoing or imminent violation rather than a mere potential risk. Therefore, this qualifies as an AI Incident due to the direct involvement of the AI system in causing or enabling intellectual property rights violations.

OpenAI is secretly building a TikTok-style app where every video is AI generated

2025-09-30
India Today
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (the Sora 2 video generation model) in development and intended use. However, it does not describe any actual harm or incident caused by the AI system. The concerns raised (copyright issues, use of likenesses, potential misuse) are potential risks that could plausibly lead to harm in the future but have not materialized as incidents. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to AI incidents but no direct or indirect harm has yet occurred or been reported.

OpenAI's New Social Network Is Reportedly TikTok If It Was Just an AI Slop Feed

2025-09-29
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article discusses a planned AI system (the Sora 2 app) that uses AI-generated video content and facial recognition for identity verification and likeness usage. Although no actual harm has occurred yet since the app is not launched, the described features present credible risks of harm, including non-consensual deepfakes and privacy violations, which align with violations of human rights and harm to communities. Therefore, this event fits the definition of an AI Hazard, as the development and intended use of the AI system could plausibly lead to an AI Incident in the future.

OpenAI Reportedly Developing TikTok-Style App with Fully AI-Generated Videos

2025-09-30
The Hans India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) generating video content autonomously, which fits the definition of an AI system. The article focuses on the development and intended use of this AI system and highlights potential risks such as misuse of personal likeness and copyright infringement. However, no actual harm or incident has been reported; the harms mentioned are potential and speculative. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harms such as violations of rights or harm to communities if misuse occurs in the future. It is not an AI Incident since no harm has materialized, nor is it Complementary Information or Unrelated as the article centers on the AI system and its potential impacts.

OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports

2025-09-30
Profit by Pakistan Today
Why's our monitor labelling this an incident or hazard?
The article discusses the upcoming release of an AI video generator that uses copyrighted material unless rights holders opt out, which could plausibly lead to violations of intellectual property rights if rights holders do not opt out or if the system generates unauthorized content. However, since the product is not yet released and no harm has been reported, this constitutes a potential risk rather than an actual incident. The event is thus best classified as an AI Hazard, reflecting the plausible future harm related to copyright infringement through AI-generated content.

OpenAI Is Preparing to Launch a Social App for AI-Generated Videos

2025-09-30
Skeptic Society Magazine
Why's our monitor labelling this an incident or hazard?
The article focuses on the upcoming launch of an AI video generation app and discusses potential concerns such as copyright lawsuits and child safety issues, but it does not describe any actual harm or incidents caused by the AI system. The content is primarily informational about the AI system's features, development status, and related societal and legal context. Therefore, it fits the definition of Complementary Information, as it provides supporting context and updates without reporting a new AI Incident or AI Hazard.

OpenAI may launch an AI version of TikTok, aggregating short videos generated with Sora 2

2025-09-30
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) generating content and being integrated into a social app, which fits the definition of an AI system. However, the article does not report any actual harm, violation, or malfunction caused by the AI system. The copyright and identity use issues are potential concerns but are not described as causing harm or legal breaches yet. The article mainly provides information about OpenAI's plans and features, which aligns with Complementary Information as it enhances understanding of AI ecosystem developments and governance considerations without reporting a specific incident or hazard.

OpenAI launches new AI video app spun from copyrighted content

2025-09-30
Reuters
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the Sora app) that generates videos using AI techniques. However, the event primarily discusses the launch of the app, its copyright policy, and related legal and ethical considerations. There is no indication that the app has caused any direct or indirect harm such as copyright infringement incidents, violations of rights, or other harms at this stage. The concerns raised are about potential future issues and ongoing debates, which constitute plausible risks but not realized harm. Therefore, this event is best classified as Complementary Information, as it provides context and updates on AI development and governance without describing a specific AI Incident or AI Hazard.

OpenAI's new Sora video generator to require copyright holders to opt out, WSJ reports

2025-09-30
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora video generator) that generates videos using copyrighted material without explicit prior consent, unless rights holders opt out. This implicates potential violations of intellectual property rights, which is a recognized harm under the AI Incident definition. Since the system is planned for release and the process of using copyrighted content without explicit opt-in is described, this constitutes an AI Incident due to the direct involvement of AI in potentially violating rights. The mention of identity verification and positive feedback does not negate the core issue of rights violation risk.

OpenAI launches new AI video app spun from copyrighted content

2025-09-30
Economic Times
Why's our monitor labelling this an incident or hazard?
The article centers on the launch of an AI system that generates video content using copyrighted material unless owners opt out, which implicates intellectual property rights. However, no actual harm or violation has been reported as having occurred; the potential for copyright infringement exists but is not confirmed as realized harm. Therefore, this event represents a potential risk or concern about future harm related to copyright violations, making it an AI Hazard rather than an AI Incident. It is not merely complementary information because the launch and policy details directly relate to the plausible future harm from AI-generated copyrighted content.

OpenAI launches new AI video app spun from copyrighted content

2025-09-30
Investing.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the release of a new AI system and its copyright policy, including potential legal and ethical concerns around the use of copyrighted content and likenesses. However, it does not describe any actual harm, violation, or incident caused by the AI system at this time. The concerns about copyright infringement and misuse are potential risks but have not materialized into an incident. Therefore, this is best classified as Complementary Information, providing context and updates on AI system deployment and governance issues without reporting a specific AI Incident or Hazard.

Sam Altman's Sora could be setting up a big fight with Hollywood

2025-09-30
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's Sora) that uses copyrighted content without prior permission, which directly relates to violations of intellectual property rights. This constitutes harm under the AI Incident definition (c). The event describes actual use of the AI system in a way that infringes on rights, not just a potential risk, and discusses ongoing legal and business conflicts arising from this use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI's latest Sora AI video generator won't create individuals without approval

2025-09-30
CNBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2.0) used for generating videos, which can incorporate copyrighted material unless rights holders opt out. The use of copyrighted material without explicit approval constitutes a violation of intellectual property rights, a form of harm under the AI Incident definition. Since the system is already deployed and generating content that may infringe on rights, this is a realized harm rather than a potential one, qualifying it as an AI Incident.

Sora vs TikTok vs Instagram Reels: How OpenAI's new social media app stacks against rivals

2025-10-01
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article focuses on describing a new AI-powered social media app and its features, including safety and wellbeing considerations. There is no indication of any harm occurring or any credible risk of harm that could plausibly lead to an AI Incident. The content is primarily informative and contextual, discussing the app's approach and potential impact without reporting any incidents or hazards. Therefore, it fits the definition of Complementary Information, as it provides supporting data and context about an AI system and its ecosystem without describing a specific AI Incident or AI Hazard.

OpenAI takes on TikTok, Instagram Reels with new social media app called Sora

2025-10-01
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Sora 2) used for generating AI videos and audio, and a social app that uses AI-driven recommendation algorithms. The concerns raised relate to potential misuse of AI-generated content, such as non-consensual videos, which could plausibly lead to harms like violations of privacy or reputational harm. Since no actual harm has occurred yet, but plausible future harm is credible and recognized, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to prior incidents, so it is not Complementary Information, nor is it unrelated to AI.

OpenAI finalizes the launch of a TikTok-like social network of AI-generated videos

2025-09-30
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Sora 2 generative AI model and AI-based recommendation algorithm) and its use in a new social media platform. While there are potential risks related to copyright infringement and misuse of personal images, no actual harm or violation has been reported as having occurred. The article focuses on the upcoming launch and the possible implications, which fits the definition of an AI Hazard (plausible future harm) rather than an AI Incident. It is not merely general AI news or product launch without risk, because the article highlights the potential for rights violations and misuse inherent in the system's design and policies. Therefore, the classification is AI Hazard.

OpenAI may introduce streaming platform similar to TikTok with AI-generated content

2025-09-30
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article discusses an AI system (Sora 2 video-generating AI model) and its upcoming deployment, highlighting potential risks such as copyright infringement and child safety concerns. However, it does not report any actual incidents of harm caused by the AI system's use or malfunction. The concerns mentioned are plausible future risks but have not materialized into direct or indirect harm yet. Therefore, this event fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to harms like copyright violations or child safety issues in the future.

More precise, more controllable: Sora 2 is here! OpenAI: stepping into video's "GPT-3.5 moment"

2025-10-01
东方财富网
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the announcement and capabilities of a new AI system and the company's business updates. While it mentions safety and mitigation measures to prevent misuse or harm, it does not describe any actual incidents of harm, nor does it present a credible risk of future harm directly linked to the AI system. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides complementary information about AI developments and governance efforts.

OpenAI's Answer to TikTok Is Sora 2, a Fever Dream of Deepfakes

2025-09-30
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates deepfake videos, which can plausibly lead to harms such as misinformation or copyright infringement. However, the article only discusses the app's launch and potential risks without any reported incidents of harm occurring. This fits the definition of an AI Hazard, as the development and deployment of this AI system could plausibly lead to AI Incidents in the future if misused or if moderation fails.

OpenAI's Answer to TikTok Is Sora 2, a Fever Dream of Deepfakes

2025-09-30
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Sora 2) that generates AI videos including deepfakes, which could plausibly lead to harms such as misinformation or copyright violations. However, no actual incidents or harms have been reported or documented in the article. The discussion of potential misuse and OpenAI's moderation measures indicates awareness of risks but no realized harm. Hence, this qualifies as an AI Hazard due to the plausible future harm from misuse of the AI system, but not an AI Incident or Complementary Information.

OpenAI's Sora joins Meta in pushing AI-generated videos. Some are worried about a flood of 'AI slop'

2025-09-30
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating videos and personalized feeds that could flood social media with AI-generated content, potentially leading to misinformation and manipulation that harms the information environment and democratic processes. While these harms are not yet realized, the plausible risk is credible and significant. The article also notes OpenAI's efforts to mitigate some risks, but these do not negate the potential hazard. Since no actual harm has occurred yet, and the focus is on potential future harm, the classification is AI Hazard.

OpenAI reportedly plans to launch TikTok-like app with Sora 2 launch

2025-09-30
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Sora 2) to generate video content and potentially use users' likenesses. While no direct harm is reported, the nature of the app and its AI-generated content could plausibly lead to harms such as misinformation, identity misuse, or copyright issues. Since the app is not yet launched and no harm has materialized, this constitutes a plausible future risk related to AI use, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. The mention of a lawsuit against OpenAI for copyright infringement is background context and does not change the classification of this event.

OpenAI launches new AI video app spun from copyrighted content

2025-09-30
bdnews24.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Sora app) that generates video content using AI. However, the article does not report any realized harm such as copyright infringement incidents, violations of rights, or other harms caused by the app's use. Instead, it focuses on the app's launch, the copyright policy, and measures to prevent misuse, as well as ongoing discussions with copyright holders. This fits the definition of Complementary Information, as it provides context and updates about AI system deployment and governance responses without describing a specific AI Incident or AI Hazard.

OpenAI prepares a TikTok-style app with AI-made videos

2025-09-30
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Sora 2 model generating AI videos and facial verification) in development and internal use. No direct or indirect harm has yet occurred, but the described features, especially the use of users' facial images by others to generate videos, could plausibly lead to violations of privacy and rights, constituting potential harm. The article also references ongoing legal challenges related to AI training data, reinforcing the risk context. Since no realized harm is reported, and the main focus is on the potential risks and the app's features, the classification as an AI Hazard is appropriate.

OpenAI's Sora tool is a direct challenge to TikTok and Instagram

2025-09-30
Axios
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's generative AI video tool) whose use could lead to violations of intellectual property rights if copyrighted material is used without explicit permission. However, the article describes a potential or ongoing policy approach rather than a confirmed incident of harm or legal violation. Therefore, it represents a plausible risk of harm related to copyright infringement but does not document a realized harm or incident yet.

OpenAI is developing a TikTok-like social media app

2025-09-30
Webtekno
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video model) used to generate and moderate video content on a new social media platform. However, there is no mention of any harm caused or plausible harm imminent from the AI system's development or use. The article mainly reports on the planned features and intentions behind the platform, which is typical of AI-related product development news. This fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without describing an incident or hazard.

OpenAI unveils new video-generating app

2025-09-30
The News International
Why's our monitor labelling this an incident or hazard?
While the app involves AI systems generating video content and raises potential legal and ethical issues around copyright and likeness rights, the article does not describe any actual harm or incidents caused by the AI system. The concerns about copyright and likeness use are potential issues but have not materialized into harm or legal violations yet. Therefore, this event is best classified as Complementary Information, providing context on AI developments, policies, and societal responses rather than reporting an AI Incident or Hazard.

OpenAI bursts into the social media world with an app of AI-created videos

2025-10-01
Ambito
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used to generate video content, which could infringe on intellectual property rights if copyrighted materials are recreated without permission. Since no actual harm or legal breach has been reported yet, but the potential for such harm exists, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the development and launch of the AI-powered social media platform and the potential legal challenges, not on realized harm or incidents.

OpenAI debuts Sora 2 AI video generator app with sound and self-insertion cameos, API coming soon

2025-09-30
VentureBeat
Why's our monitor labelling this an incident or hazard?
The article details the release and features of an AI video generation system, including safety and identity protections, but does not report any actual harm, violation, or malfunction caused by the AI system. Potential risks are acknowledged but not realized or documented as incidents. The main focus is on the product launch, its capabilities, and governance measures, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Morning report | OpenAI releases Sora 2: AI video enters its GPT-3.5 moment / Luo Yonghao calls Xiaomi's fine print an industry bad habit / Samsung ring battery swells, user's stuck finger requires medical attention

2025-10-01
爱范儿
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as OpenAI's Sora 2 video generation model, large language models like GLM-4.6 and Ring-1T-preview, and robotics advancements. However, none of these are associated with any realized harm or malfunction. The article also includes other news unrelated to AI harm. The AI-related content focuses on new releases, performance improvements, and ecosystem growth, which fits the definition of Complementary Information. There is no report of direct or indirect harm, nor credible plausible future harm from these AI systems. Hence, the classification as Complementary Information is appropriate.

OpenAI's 'Infinite Slop' Moment: Backlash Mounts Over AI Shopping Push and Video App

2025-09-30
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in active use (AI-powered shopping and video generation) with concerns about privacy, market control, environmental impact, and societal effects. These concerns represent plausible future harms that could arise from these AI deployments. Since no actual harm or incident is reported as having occurred, but credible risks are identified, the event fits the definition of an AI Hazard. It is not Complementary Information because the article is not primarily about responses or updates to past incidents, nor is it unrelated as it clearly involves AI systems and their impacts.

OpenAI brings new AI video app to market: content based on copyrighted material

2025-09-30
de.marketscreener.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the launch of an AI system (Sora) that generates videos using copyrighted content and includes measures to prevent unauthorized use of personal images. While there are potential legal and ethical issues related to copyright infringement and personal rights, no actual harm or violation has been reported as having occurred. The concerns are about plausible future harms, such as copyright violations and misuse of likeness, but these remain potential risks rather than realized incidents. Therefore, this event is best classified as Complementary Information, providing context and updates on AI system deployment and related governance issues without describing a specific AI Incident or Hazard.

OpenAI prepares its own TikTok of videos generated by artificial intelligence

2025-10-01
Portafolio.co
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (OpenAI's Sora 2) to generate synthetic video content and an AI recommendation algorithm. Although no actual harm is reported yet, the platform's design—especially the automatic use of content for training unless explicitly excluded—raises plausible risks of intellectual property rights violations. Additionally, the generation of synthetic videos involving users' images could lead to misuse or identity-related harms. Therefore, this event represents an AI Hazard, as it plausibly could lead to AI Incidents involving rights violations or other harms in the future.

OpenAI is about to launch a video app to compete with TikTok

2025-09-30
Hipertextual
Why's our monitor labelling this an incident or hazard?
The event describes the development and internal testing of an AI system (Sora 2) used to generate videos for a new social media app. However, there is no mention of any harm, malfunction, or misuse resulting from the AI system. The article focuses on the app's features, potential popularity, and technical/legal challenges, which are typical for new AI products. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates about AI system development and its ecosystem without describing any specific harm or credible risk of harm.

OpenAI launches the Sora App and enters the social media race with AI videos | Exame

2025-09-30
Exame
Why's our monitor labelling this an incident or hazard?
While the Sora App is an AI system capable of generating videos and avatars, the article focuses on its launch and features without mentioning any realized harm, incidents, or risks that have materialized. There is no indication of injury, rights violations, disruption, or other harms caused or plausibly caused by the app at this stage. The content is primarily an announcement and contextual information about the AI ecosystem, which fits the definition of Complementary Information rather than an Incident or Hazard.

OpenAI Is Reportedly Building A TikTok-Style App For Sora 2 AI Video Slop - BGR

2025-09-30
BGR
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-generated video content and facial recognition), but there is no indication that any harm has occurred. The mention of facial recognition and copyright opt-out raises potential privacy and intellectual property concerns, but these are prospective and not described as causing harm. Therefore, this is best classified as Complementary Information, providing context and updates about AI developments and potential societal implications without reporting an AI Incident or Hazard.

OpenAI launches 'Sora', an AI video app with copyrighted content

2025-09-30
Forbes México
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Sora) that generates videos from existing content, including copyrighted material. The use of copyrighted content without explicit permission raises potential violations of intellectual property rights, a recognized harm category. However, no actual harm or legal violation has yet occurred or been reported; the article focuses on the launch and policy framework, with harm possible in the future if rights holders do not opt out or if misuse occurs. This fits the definition of an AI Hazard, since the system's development and use could plausibly lead to violations of rights. It is not an AI Incident because no harm has materialized, nor Complementary Information because this is a new launch carrying potential risks rather than a response or update to a prior incident, and it is not unrelated because the system's policy has direct implications for rights.
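The taxonomy applied throughout these assessments reduces to a simple decision rule: realized harm is an AI Incident, plausible but unrealized harm is an AI Hazard, and everything else involving an AI system is Complementary Information. The sketch below is a hypothetical illustration of that rule; the function name `classify_event` and its boolean inputs are assumptions for clarity, not the monitor's actual implementation.

```python
# Hypothetical sketch of the monitor's decision rule described above.
# Category names follow the AIM taxonomy; the function and its boolean
# inputs are illustrative assumptions, not the monitor's real code.

def classify_event(involves_ai: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Map an event to an AIM category using the reasoning pattern above."""
    if not involves_ai:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"                # harm has already materialized
    if harm_plausible:
        return "AI Hazard"                  # credible risk, but no harm yet
    return "Complementary Information"      # context and updates only

# Sora's opt-out copyright policy: an AI system is involved, no harm has
# materialized, but future rights violations are plausible.
print(classify_event(True, False, True))  # prints "AI Hazard"
```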

OpenAI's Sora: The New Frontier in AI Video Creation

2025-09-30
Devdiscourse
Why's our monitor labelling this an incident or hazard?
While Sora is an AI system involved in generating video content from copyrighted material, the article focuses on the launch, features, and ongoing policy discussions without reporting any realized harm or incidents. The potential for copyright disputes or misuse exists, but no direct or indirect harm has been reported yet. Therefore, this is not an AI Incident or AI Hazard but rather a general AI-related development and policy context, fitting the category of Complementary Information.

Sora 2 is available: OpenAI allows deepfakes, a dangerous line crossed by AI

2025-09-30
Les Numériques
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates deepfake videos, which can directly impact individuals' rights and privacy. Although the system includes consent mechanisms, the article highlights risks of misuse through fake accounts and insufficient identity verification, which could plausibly lead to violations of rights or reputational harm. Since no actual harm has been reported yet, but the potential for harm is credible and significant, this qualifies as an AI Hazard rather than an AI Incident.

OpenAI is working on a purely AI-based TikTok clone

2025-09-30
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The article outlines a planned AI-based video platform that could plausibly lead to harms such as copyright violations or misinformation dissemination in the future, but no actual harm or incident has occurred yet. The AI system's development and intended use could lead to incidents, but at this stage, it is a potential risk rather than a realized harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI's Sora App Sparks Copyright Debate in Hollywood - EconoTimes

2025-10-01
EconoTimes
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential for copyright violations due to the app's use of copyrighted material for AI video generation and the legal debates surrounding fair use. While the app's design and policy could plausibly lead to violations of intellectual property rights, no concrete incident of harm or legal breach has been reported yet. Therefore, this situation constitutes an AI Hazard, as it plausibly could lead to an AI Incident involving copyright infringement but has not yet done so.

OpenAI unveils Sora 2, a model for generating videos and deepfakes, integrated into a new social network

2025-09-30
Numerama.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a generative video and deepfake model (Sora 2) integrated into a social media platform. The use of deepfake technology to realistically impersonate individuals' faces and voices can plausibly lead to harms such as misinformation, identity fraud, reputational damage, and violations of privacy and rights. While no direct harm is reported at this stage, the deployment of this technology with broad access and social sharing features creates a credible risk of future harm. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harms, even if none have yet materialized.

Here comes the AI slop: OpenAI launches Sora 2 and people are already using it to create new scenes from movies and TV shows

2025-10-01
JoBlo's Movie Emporium
Why's our monitor labelling this an incident or hazard?
The article focuses on the release of a new AI system and its current use to generate videos mimicking copyrighted content. This raises plausible risks of intellectual property rights violations and economic harm to the entertainment industry. However, no direct harm or legal violations are reported as having occurred at this time. Therefore, the event represents a plausible future risk (AI Hazard) rather than a realized harm (AI Incident). It is not merely general AI news because it highlights specific concerns about misuse and potential harm linked to the AI system's capabilities.

OpenAI's Sora 2 Adopts 'Opt-Out' Option for Rightsholders: Report

2025-10-01
Digital Music News
Why's our monitor labelling this an incident or hazard?
The AI system (Sora 2) is explicitly described as a generative AI video and audio generator trained on copyrighted works, producing outputs that may include protected materials without explicit permission unless rightsholders opt out. This constitutes a violation of intellectual property rights, a breach of obligations under applicable law and a recognized category of AI harm. The event reports ongoing use and deployment of the AI system with these characteristics, indicating realized harm rather than mere potential risk. Therefore, this qualifies as an AI Incident due to the direct involvement of the AI system in causing intellectual property rights violations.

Can OpenAI's Sora 2-powered social media app rival TikTok?

2025-09-30
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Sora 2 AI video generation model) and its use in a new social media app, which is relevant to AI developments. However, there is no indication that any harm has occurred or that the AI system has malfunctioned or been misused to cause harm. The copyright opt-out policy is a preventive governance measure rather than evidence of a rights violation. The app is not yet launched, so no incidents or hazards have materialized. The content is primarily about the upcoming launch and the implications for copyright holders, making it Complementary Information rather than an AI Incident or AI Hazard.

OpenAI's Sora 2 Will Not Recreate Your Copyrighted Character - If You Opt Out

2025-09-30
eWEEK
Why's our monitor labelling this an incident or hazard?
The article focuses on policy, legal debates, and OpenAI's approach to managing copyrighted content in AI training and generation. It does not report any realized harm or direct/indirect incident caused by the AI system. Nor does it describe a plausible future harm event. Therefore, it is best classified as Complementary Information, as it provides important context and updates on AI governance and industry practices without describing a new AI Incident or AI Hazard.

Sora 2: OpenAI's bet on a TikTok-style feed with 100% AI-generated content

2025-09-30
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Sora 2) used for generating video content and includes algorithmic recommendation feeds. While it discusses legal challenges and safety concerns, these are prospective or ongoing issues rather than realized harms. There is no report of injury, rights violations, or other harms directly caused by the AI system's malfunction or misuse at this stage. The presence of safeguards and internal testing further supports that harm is not yet realized. Thus, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents such as copyright violations, misuse of identity, or child safety risks in the future.

OpenAI Sora's Opt-Out Model Ignites Hollywood Copyright Clash

2025-09-30
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora) whose development and use directly implicate potential violations of intellectual property rights, a recognized harm under the AI Incident definition. The use of copyrighted materials without explicit consent and the burden placed on rights holders to opt out constitute a breach of obligations intended to protect intellectual property rights. The article indicates that these practices are already occurring or imminent, with lawsuits and industry disputes underway or expected, thus constituting realized harm rather than mere potential risk. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Sora 2: OpenAI's AI takes on deepfakes and adds a social network

2025-09-30
KultureGeek
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates deepfake videos and audio, which inherently carries risks of misuse and harm (e.g., misinformation, identity misuse). However, the article focuses on the platform's launch, features, and safeguards without describing any actual harm or incidents caused by the AI system. The potential for harm exists, but no direct or indirect harm has materialized yet. Therefore, this qualifies as an AI Hazard due to the plausible future risk of harm from deepfake misuse, but not an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system with societal implications.

OpenAI's 'TikTok' surfaces: all content must be AI-generated

2025-09-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora 2 AI video generation model) whose use has directly led to significant harms, including copyright infringement lawsuits (violation of intellectual property rights) and privacy concerns related to AI-generated digital avatars. The article details ongoing legal actions and industry backlash, indicating that harms have materialized rather than being merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's development and use have directly or indirectly caused violations of rights and privacy harms.

OpenAI reportedly to launch an 'AI TikTok' where all short videos are AI-generated

2025-09-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used for generating video content, which is central to the app's operation. However, the article does not describe any actual harm caused by the AI system, nor does it report any incident or malfunction. The concerns about copyright and safety are noted but remain potential issues rather than realized harms. Therefore, this event is best classified as an AI Hazard because the AI system's use could plausibly lead to harms such as copyright violations or misinformation, but no harm has yet occurred. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated since it clearly involves an AI system with potential risks.

ByteDance is in for sleepless nights: Sora lands first with the 'AI social playground' Douyin has been experimenting with

2025-10-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora) and discusses its development and use. However, it does not describe any actual harm or violation of rights caused by the AI system, nor does it indicate a plausible risk of harm occurring imminently. The mention of concerns about unauthorized use of videos for training is noted but not presented as a realized incident or a direct hazard. The main focus is on the product's features, design philosophy, and market competition, which aligns with the definition of Complementary Information. Hence, it does not meet the criteria for AI Incident or AI Hazard but enriches understanding of AI's evolving social media applications and responses.

OpenAI plans a standalone Sora 2 app; default use of copyrighted content sparks controversy

2025-09-29
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video generator) whose use could directly lead to violations of intellectual property rights by generating copyrighted content without explicit permission, unless copyright holders opt out. This is a direct link between the AI system's use and harm (violation of rights). The article reports that this is already planned and partially implemented internally, indicating the harm is imminent or occurring. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The focus is on realized or imminent harm due to the AI system's use, not just potential or background context.

OpenAI's Douyin is coming! Powered by Sora 2, it only allows AI-generated videos

2025-09-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora 2 video generation model and its associated social app) and its use in content creation and recommendation. However, there is no indication of any direct or indirect harm caused by the AI system at this stage. The app is in internal testing with positive feedback, and no incidents or hazards are reported. The article mainly provides information about the AI system's development, deployment plans, and competitive landscape, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

OpenAI launches a new video generation model; Altman announces four key principles

2025-10-01
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video generation model) and discusses potential risks and mitigation strategies, but does not describe any realized harm or incident caused by the AI system. The principles and safeguards are preventive and aspirational, aiming to avoid misuse and negative impacts. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information providing context on responsible AI deployment and governance.

OpenAI drops a late-night bombshell: Sora 2, its strongest video generation model yet, launches with audio generation

2025-10-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article primarily reports on the launch of a new AI product and the associated safety and governance measures OpenAI is implementing. While it acknowledges potential risks and the need for content moderation, it does not describe any realized harm or incidents caused by the AI system. The focus is on the capabilities and safety features rather than any specific harm or plausible imminent harm. Therefore, this is best classified as Complementary Information, providing context and updates about AI development and governance rather than reporting an AI Incident or AI Hazard.

Major upgrade to the Sora model: OpenAI takes on the AI video social arena

2025-10-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used for generating videos with audio and realistic avatars, clearly fitting the AI system definition. However, the article focuses on the launch and capabilities of the system, along with potential societal implications and concerns about future misuse. There is no indication that the AI system has directly or indirectly caused any injury, rights violations, disruption, or other harms yet. Therefore, this is a plausible future risk scenario, making it an AI Hazard rather than an Incident. It is not merely complementary information because the main focus is on the potential risks and the system's capabilities, not on responses or updates to past incidents.

OpenAI developing TikTok-style app with fully AI-Generated videos

2025-09-30
NEO TV | Voice of Pakistan
Why's our monitor labelling this an incident or hazard?
The app involves AI systems for video generation and facial recognition, which are explicitly mentioned. The event concerns the development and intended use of these AI systems. Although no actual harm has occurred yet, the article highlights potential privacy, identity, and copyright issues that could plausibly lead to AI incidents in the future. Therefore, this qualifies as an AI Hazard rather than an Incident or Complementary Information, since the harms are potential and not realized.

Sora 2 is here

2025-09-30
openai.com
Why's our monitor labelling this an incident or hazard?
The article primarily serves as an announcement and overview of a new AI system and its features, along with the safety and ethical considerations being implemented. There is no indication of realized harm or incidents resulting from the AI system's use or malfunction. While the system has potential risks inherent in generative AI technology, the article focuses on mitigation strategies and user empowerment rather than describing any AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, providing context and governance responses related to the AI system and its ecosystem.

OpenAI Launches Sora 2: TikTok-Style App With AI-Generated Videos

2025-10-01
TechnoSports Media Group
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates videos and personal likenesses, which can be misused to create deceptive content. While the article highlights potential abuse and risks, no actual harm or incident has been reported. The potential for misuse and deception could plausibly lead to harms such as misinformation or violation of personal rights, fitting the definition of an AI Hazard rather than an AI Incident. The article is not primarily about responses or governance, so it is not Complementary Information, nor is it unrelated to AI.

OpenAI just released an 'AI-native' Douyin, plus Sora 2 - cnBeta.COM

2025-10-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video model and the Sora app) and its deployment, but there is no indication that any harm has occurred or that the AI system has malfunctioned or been misused to cause harm. The article discusses potential societal concerns and the app's design to mitigate risks, but these are prospective considerations rather than actual incidents or hazards. Therefore, the event is best classified as Complementary Information, providing context and updates about AI developments and governance responses without describing an AI Incident or AI Hazard.

OpenAI Sora 2: Permission for faces, opt-out for copyright

2025-10-01
implicator.ai
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (OpenAI's Sora 2) that generates video content using AI-driven synthesis of likenesses and copyrighted characters. The system's development and use raise plausible risks of harm, including copyright violations and misuse of personal likenesses, which could lead to legal disputes and rights infringements. However, the article does not report any actual incidents of harm occurring yet; it focuses on the policy approach, competitive context, and potential legal challenges. Thus, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future, but no direct or indirect harm has materialized at this time.

OpenAI expands the AI video market with the Sora app

2025-10-01
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora app) that generates videos with user integration, which is clearly AI-based. However, the article does not describe any direct or indirect harm caused by the app's development or use. It discusses potential copyright concerns and legal discussions that might arise, but these are speculative and not actualized harms. Therefore, the event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI developments and potential governance issues without reporting a specific incident or hazard.

All videos AI-generated! OpenAI reportedly preparing an 'AI short-video platform' | ETtoday AI Tech | ETtoday News Cloud

2025-09-30
ai.ettoday.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Sora 2 model generating AI videos) and its planned use in a new social media platform. However, there is no indication that any harm has occurred or that there is a credible, imminent risk of harm. The article focuses on the announcement and features of the platform, including safety and copyright mechanisms, but does not report any incidents or hazards. Therefore, this is best classified as Complementary Information, providing context and updates about AI developments and ecosystem evolution without describing an AI Incident or AI Hazard.

OpenAI launches Sora 2 -- and a TikTok-style app to use it

2025-10-01
Maginative
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) and its deployment in a consumer app, which inherently carries potential risks such as misinformation, misuse of likeness, addiction, and bullying. However, the article focuses on the launch, features, safety measures, and governance principles rather than any realized harm or incident. There is no indication that any injury, rights violation, or other harm has occurred yet. The discussion of risks and mitigation strategies aligns with providing complementary information about the AI ecosystem and responses to potential harms. Therefore, this event is best classified as Complementary Information rather than an AI Incident or AI Hazard.

OpenAI plans a standalone Sora 2 app; the copyright issues behind it spark controversy - cnBeta.COM

2025-09-30
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2) used for generating video content. The system's use of copyrighted material without explicit consent unless an opt-out is performed constitutes a breach of intellectual property rights, a form of harm under the AI Incident criteria. The harm is realized or ongoing because the system is already in internal use and being rolled out, and the controversy centers on the legal and rights violations stemming from its operation. Therefore, this qualifies as an AI Incident due to direct involvement of an AI system causing or enabling violations of intellectual property rights.

OpenAI prepares a social network of AI-created videos to rival TikTok - Hardware.com.br

2025-09-30
Hardware.com.br
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (OpenAI's Sora 2 video generation model) to create synthetic video content. However, the article does not report any realized harm or incident resulting from this AI system's development or use. Instead, it outlines the platform's upcoming launch, its features, and potential controversies, particularly regarding copyright and content ownership, which are concerns but not yet realized harms. Therefore, this is a case of Complementary Information, providing context and updates about AI developments and their societal implications without describing an AI Incident or AI Hazard.

OpenAI Launches Sora 2, Social Platform Where Users Can Create, Share AI-Generated Clips Of Themselves, Friends - Tekedia

2025-10-01
Tekedia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (Sora 2, an AI video generator and social platform). The launch and use of this AI system could plausibly lead to harms such as non-consensual use of likeness, misinformation, or harmful content dissemination, which are recognized risks in AI-generated media platforms. However, the article does not describe any realized harm or incidents resulting from the system's use yet. Therefore, this qualifies as an AI Hazard due to the credible potential for future harm, but not an AI Incident. It is not merely complementary information because the main focus is the launch and the associated risks, not a response or update to a prior incident. It is not unrelated because the AI system and its potential impacts are central to the article.

Tech companies bet on flooding social media with AI-generated content

2025-10-01
Yahoo!
Why's our monitor labelling this an incident or hazard?
The article describes the deployment and expansion of AI systems generating hyperrealistic videos that can influence social media content. Although it does not report a specific incident of harm, the mention of lawsuits for copyright infringement and the potential for misleading or harmful AI-generated content indicate plausible future harms. The AI systems' use in generating and disseminating such content could plausibly lead to violations of intellectual property rights and harm to communities through misinformation or deceptive content. Hence, this situation fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

How OpenAI wants to make deepfakes accessible to everyone with Sora 2

2025-10-01
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates deepfake videos with realistic voice and image synthesis, which is a clear AI system. The article focuses on the launch and features of the app, highlighting potential risks of misuse (e.g., creating deepfakes of others) but does not describe any realized harm or incidents caused by the AI system. The concerns about possible harmful uses and the inherent risks of such technology qualify as plausible future harm. Therefore, this event is best classified as an AI Hazard, as the AI system's use could plausibly lead to harms such as misinformation, identity misuse, or reputational damage, but no direct or indirect harm has yet been reported.

Only fake AI videos, guaranteed? OpenAI launches the Sora short-video platform, with trials limited to the US and Canada

2025-10-02
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used for video generation and social sharing, with AI-driven recommendation algorithms and identity verification via uploaded videos. The article discusses potential misuse risks (e.g., unauthorized creation or dissemination of AI-generated videos), which could plausibly lead to harms such as violations of rights or harm to communities. Since no actual harm has been reported and the focus is on the system's launch and potential risks, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

OpenAI launches the 'Sora' short-video platform, billed as an AI TikTok

2025-10-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 generative AI model) used in a new social media platform, which fits the definition of an AI system. However, the article does not describe any direct or indirect harm caused by the AI system, nor does it report a credible risk of imminent harm. The concerns about copyright and content safety are potential challenges but not immediate hazards or incidents. The main focus is on the launch and features of the platform and the anticipated governance issues, which aligns with Complementary Information rather than an Incident or Hazard.

Only fake AI videos, guaranteed? OpenAI launches the Sora short-video platform, with trials limited to the US and Canada

2025-10-02
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora 2) for generating realistic videos and AI avatars. The article explicitly mentions potential misuse risks, such as unauthorized creation and sharing of AI-generated videos, which could plausibly lead to violations of personal rights and harm to communities. However, no actual harm or incident is reported at this stage, only potential risks. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving rights violations or community harm in the future.

Fact Check: Is that really Sam Altman in viral GPU theft video? Sora 2 clip sparks debate among netizens

2025-10-01
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated deepfake content, which is realistic but fabricated. While no actual harm has occurred from the video itself, the event highlights the plausible future risk of AI-generated misinformation and deception that could harm individuals' reputations and communities. Therefore, this event represents an AI Hazard because it plausibly could lead to harms such as misinformation, reputational damage, and social disruption, even though no direct harm has yet occurred.

I'm addicted to Sora 2! I can't stop making AI slop videos.

2025-10-01
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates videos using AI based on user likenesses. While the article mentions possible negative effects like copyright infringement and societal problems from realistic fake videos, it does not report any actual harm or incident occurring. The focus is on the user's experience and the potential risks are speculative. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harms such as copyright violations or societal issues, but no harm has yet materialized.

Brain Rot: OpenAI Prepares to Launch TikTok Clone Featuring AI-Generated Videos

2025-10-01
Breitbart
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video generation model and recommendation algorithm) and its use in a new app. No direct or indirect harm has yet occurred as per the article, but there are credible potential risks related to child safety, misuse of likeness, copyright infringement, and misinformation through AI-generated videos. These potential harms make this event an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the AI system and its societal implications are central to the report.

OpenAI made a TikTok for deepfakes, and it's getting hard to tell what's real

2025-10-01
The Verge
Why's our monitor labelling this an incident or hazard?
The event describes the deployment and active use of an AI system (Sora 2) that generates realistic deepfake videos and audio, which are already being consumed by users and are difficult to distinguish from real content. This directly relates to harm to communities through misinformation and deception, fulfilling the criteria for an AI Incident. The system's development and use have led to realized harm, not just potential harm, as the deepfakes are actively being created and shared. The article also discusses safeguards and controls, but these do not eliminate the harm or the risk of misuse. Hence, the classification as AI Incident is appropriate.

OpenAI: There's a Small Chance Sora Would Create a Sexual Deepfake of You

2025-10-01
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that can generate realistic videos including sexual deepfakes, which could plausibly lead to harm such as violation of personal rights and psychological trauma. Although no actual harm is reported, the article acknowledges a small risk of harmful outputs bypassing safeguards. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving harm to individuals' rights and well-being. The article does not report a realized harm or incident, nor is it primarily about responses or governance measures, so it is not Complementary Information.

OpenAI: There's a Small Chance Sora Would Create a Sexual Deepfake of You

2025-10-01
PC Magazine
Why's our monitor labelling this an incident or hazard?
The Sora app is an AI system generating videos from facial and voice data, which fits the AI system definition. The article reports that despite safety measures, the system still generated sexual or adult-nudity deepfake videos in 1.6% of cases, indicating a failure or limitation in its safeguards. This failure directly leads to harm in the form of sexual deepfakes, which can cause trauma and violate personal rights. The harm is realized or ongoing, not merely potential, as the system generated such content during testing. Hence, this is an AI Incident involving harm to persons through misuse or malfunction of the AI system.

Sam Altman Admits Sora 2 'Slop' Feed Is a Money Grab to Fund GPUs

2025-10-01
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article centers on AI product launches, monetization strategies, and corporate financials, without describing any realized or potential harm caused by AI systems. While it references a copyright infringement lawsuit, it does not present this as a new AI Incident or Hazard but rather as background context. The discussion about AI-generated content being 'slop' is a subjective critique, not a harm event. Hence, the content fits the definition of Complementary Information, enhancing understanding of AI ecosystem developments without reporting a new harm or credible risk of harm.

On Day One of the National Day Holiday, OpenAI's New App Had Me Hooked

2025-10-01
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Sora 2 video generation model and the Sora app) and its deployment and use. However, there is no indication of any realized harm (such as injury, rights violations, misinformation, or disruption) or credible potential harm described in the article. The content is primarily about the app's launch, capabilities, and user experience, which fits the definition of Complementary Information as it provides context and updates about AI developments without reporting an incident or hazard.

OpenAI's Social App Gets Off to a Rough Start: AI-Faked Videos Draw Scrutiny, Altman Responds

2025-10-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates deepfake videos. While no direct harm has been reported yet, the deepfakes and the concerns expressed by researchers indicate a plausible risk of harm to individuals' reputations and to communities through misinformation or manipulation. The situation therefore fits the definition of an AI Hazard: the system's use could plausibly lead to rights violations or harm to communities, but no actual harm has been documented in the article.

AI Face-Swaps Fill the Screen: OpenAI's New Sora App Mocked by Its Own Researchers

2025-10-02
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that generates deepfake videos and uses AI-driven content recommendation algorithms. The concerns raised by researchers and the company itself relate to potential harms like addiction, misinformation, and social harm, but these harms have not yet materialized. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as social disruption or psychological harm in the future. There is no indication of realized harm or incident at this stage, nor is the article primarily about responses or governance measures, so it is not Complementary Information.

OpenAI Readies TikTok-Style App Powered Only By AI Videos

2025-10-01
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and upcoming launch of an AI-powered social app generating video content autonomously. While it raises concerns about copyright infringement, consent, and likeness use, these are prospective issues rather than documented harms. The event does not describe any actual injury, rights violation, or disruption caused by the AI system so far. It also includes commentary on industry and legal responses, which aligns with providing context and updates on AI ecosystem developments. Hence, the event fits best as Complementary Information rather than an AI Incident or AI Hazard.

OpenAI Launches Sora 2, an App for Creating AI Videos (Even of Yourself): Its First Social Network

2025-10-01
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (AI-generated video creation and AI-curated social feed). However, no direct or indirect harm has occurred yet; the article focuses on the launch and the intended safeguards to prevent harm. The potential for misuse (e.g., bullying, deepfake misuse) is acknowledged but remains hypothetical. Hence, this qualifies as an AI Hazard due to plausible future harm but not an AI Incident or Complementary Information, as it is not an update on a past incident or governance response.

OpenAI's new video generation tool Sora 2 is here, but don't worry, Sam Altman says it will avoid the 'degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed'

2025-10-01
pcgamer
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora 2) that generates realistic videos including deepfakes, which can plausibly lead to harms such as misinformation, violation of personal likeness rights, and social disruption. Although no direct harm is reported as having occurred, the article emphasizes credible concerns and potential negative impacts, including addiction and disinformation. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving harm to communities and rights violations. The article also discusses mitigation efforts but does not report a realized incident, so it is not an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the article.

Apparently the most popular clip on OpenAI's new AI video app Sora depicts Sam Altman stealing graphics cards

2025-10-01
pcgamer
Why's our monitor labelling this an incident or hazard?
An AI system (Sora 2) is explicitly involved as it generates deepfake videos using AI-powered text-to-video generation. The event does not report actual harm occurring but raises credible concerns about plausible future harms including disinformation, harassment, and bullying stemming from the misuse of AI-generated realistic videos without clear disclaimers. The presence of parental controls and mitigations is noted but does not eliminate the risk. Since harm is plausible but not yet realized, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Sora 2, Vibes, Feed: How much AI video do we need?

2025-10-01
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The article does not describe any direct or indirect harm caused by AI systems, nor does it report any specific event where AI use or malfunction led to injury, rights violations, or other harms. Instead, it focuses on the potential environmental impact of widespread AI video generation and the competitive dynamics among AI video platforms. This fits the definition of Complementary Information, as it provides contextual and ecosystem-level insights without reporting a new AI Incident or AI Hazard.

The First 24 Hours of Sora 2 Chaos: Copyright Violations, Sam Altman Shoplifting, and More

2025-10-01
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Sora 2 was trained on copyrighted material without permission and that users are generating and sharing videos that infringe on copyrights and likeness rights. These are direct violations of intellectual property rights and personal rights, constituting harm as per the AI Incident definition (c). The AI system's development (training on copyrighted content) and use (generation and sharing of infringing content) have directly led to these harms. Although OpenAI has some mitigation measures, the harms are ongoing and realized, not merely potential. Hence, the event is best classified as an AI Incident.

OpenAI staff grapples with the company's social media push | TechCrunch

2025-10-01
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora app with AI-generated video content and deepfakes) and discusses concerns about possible negative effects on users and society. However, no direct or indirect harm has yet occurred or been reported. The concerns are about plausible future harms such as addiction, misinformation, or social disruption, but these remain speculative at this point. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to harm but no harm has materialized yet. The article also includes reflections on governance and mission alignment, but these are part of the broader context rather than a direct response to an incident. Hence, it is not Complementary Information or an AI Incident.

OpenAI's new social app is filled with terrifying Sam Altman deepfakes | TechCrunch

2025-10-01
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora app) that uses AI to generate realistic deepfake videos. The article highlights the app's potential to facilitate harmful uses like disinformation and bullying, which are plausible harms to communities and individuals. Although no direct harm is documented as having occurred, the credible risk of such harms is evident given the app's capabilities and user behavior. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harms in the future.

AI Companies Tell Us They Want to Achieve AGI. What They Are Really Conquering Is the Attention Economy

2025-10-01
Xataka
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for video content generation and discusses realized harms such as misinformation, loss of trust in authentic content, privacy concerns, and potential fraud via deepfakes. These harms affect communities and individual rights, fitting the definition of AI Incident. The harms are not merely potential but are described as already occurring or highly likely given the current state of AI-generated content proliferation. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI Launches Sora 2, a New Video Generator with Its Own Social Network

2025-10-01
CommentCaMarche
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating videos and deepfakes with audio, integrated into a social media platform that personalizes content via AI algorithms. The article highlights the potential and ongoing misuse of deepfakes for harmful purposes such as fake news, cyberharassment, and sextortion, which are direct harms to individuals and communities. The AI system's development and use have directly led or will lead to violations of rights and harm to communities, fulfilling the criteria for an AI Incident. Although protections are mentioned, the risks and harms are clearly articulated and pivotal to the AI system's deployment.

OpenAI Just Made an App for Sharing Hyper-Realistic AI Slop

2025-10-01
Lifehacker
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora 2) that generates hyper-realistic videos with audio and realistic human likenesses. The article explicitly discusses the risks of mass disinformation and privacy/security issues stemming from misuse of the system, which could lead to violations of rights and harm to communities. While no specific incident of harm is reported as having already occurred, the article strongly emphasizes the credible and foreseeable risk of such harms once the app is widely used. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to significant harms, including disinformation and privacy violations. The article does not report a realized harm event yet, so it is not an AI Incident. It is more than just complementary information because the main focus is on the risks and potential harms posed by the AI system's deployment, not on responses or ecosystem context.

OpenAI Launches New Video-Generation App, Prepares for Social - Future Tech

2025-10-01
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used to generate video content and digital avatars, which fits the definition of an AI system. The article discusses potential harms related to copyright infringement and deepfake misuse, which could plausibly lead to violations of intellectual property rights and harm to communities by spreading misinformation or fake content. However, no actual harm or incident has been reported so far, only concerns and preventive measures. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm in the future but no incident has yet occurred.

OpenAI's New Video Tool Features User-Generated 'South Park,' 'Dune' Scenes. Will Studios Sue?

2025-10-01
The Hollywood Reporter
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's Sora video generator) that uses copyrighted materials from movies and TV shows without licenses, enabling users to generate infringing content. This use directly leads to violations of intellectual property rights, a recognized harm under the AI Incident definition. The involvement is through the AI system's use and deployment, and the harm is realized as studios have grounds to sue and legal experts confirm the infringement. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenAI's new video tool can use copyrighted content by default: report

2025-10-01
The Daily Star
Why's our monitor labelling this an incident or hazard?
The article focuses on the launch and policy of an AI video generation tool that uses copyrighted content by default, which could raise intellectual property rights issues. However, it does not describe any actual violation or harm occurring due to this policy or the AI system's use. The potential for copyright infringement exists, but no incident or harm is reported or implied as having happened yet. Therefore, this is not an AI Incident or AI Hazard but rather complementary information about the AI system's development, use, and governance context.

Are AI Videos Now Really Ready for Hollywood?

2025-10-01
der Standard
Why's our monitor labelling this an incident or hazard?
The article describes a new AI system (Sora 2) and its capabilities, which qualifies as an AI system. However, it does not report any realized harm or direct/indirect consequences from its use or malfunction. The mention of actors protesting and cooperation risks reflects societal response but not an AI Incident or Hazard. There is no evidence of injury, rights violations, or other harms caused or plausibly caused by the AI system. Therefore, this is best classified as Complementary Information, providing context on AI development and societal reactions without reporting an incident or hazard.

OpenAI Announces Sora 2, Poised to Take Over Social Media: Here Are the First Examples [Video]

2025-10-01
Webtekno
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) with advanced generative capabilities. While the article mentions ethical concerns and potential risks like deepfakes and copyright infringement, it does not describe any realized harm or incidents resulting from the AI's use. The focus is on the announcement, capabilities, and potential implications, including future risks and ongoing safety efforts. Therefore, this qualifies as Complementary Information, providing context and updates about AI developments and their societal implications without reporting a specific AI Incident or Hazard.

How Sora 2, OpenAI's Video Generator, Can Blur the Line Between Real and Fake

2025-10-01
Frandroid
Why's our monitor labelling this an incident or hazard?
The event involves the release and use of an AI system (Sora 2) that can generate realistic videos and deepfakes. While no specific harm has yet been reported, the article explicitly discusses the plausible future harms from misuse, such as unauthorized use of licensed characters and deepfake creation without safeguards. This fits the definition of an AI Hazard, as the development and deployment of this AI system could plausibly lead to incidents involving harm to communities (through misinformation or reputational harm) and violations of intellectual property rights. The article also mentions security measures but does not report any realized harm, so it is not an AI Incident. It is more than just complementary information because the main focus is on the potential risks and capabilities of the system, not on responses or ecosystem updates. Therefore, the classification is AI Hazard.

OpenAI's Sora app is already flooded with disturbing Sam Altman deepfakes

2025-10-02
Digit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora app) that generates realistic deepfake videos using biometric data and AI video generation models. The use of AI to create disturbing and realistic deepfakes of a public figure, which are spreading widely, indicates a plausible risk of harm to individuals' reputations and to communities through misinformation or manipulation. However, the article does not report any actual realized harm yet, only the potential for misuse and ethical concerns. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm, but no direct or indirect harm has been confirmed at this stage.

OpenAI Has Launched an App That Turns Clips into Realistic Videos with Lifelike Sound

2025-10-01
РБК-Украина
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) used for video generation, but the article does not report any realized harm or incident resulting from its use or malfunction. It focuses on the app's launch, user consent features, and content restrictions, which are measures to prevent potential harm. Therefore, this is not an AI Incident or AI Hazard. It is not unrelated because it concerns an AI system, but it does not describe harm or plausible harm. The article is best classified as Complementary Information as it provides context and details about a new AI system and its governance measures without reporting any harm or risk.

Just In: OpenAI Releases Sora 2! AI Video's GPT-3.5 Moment Is Here, Plus a Super-Fun App | Download Link Included

2025-10-02
爱范儿
Why's our monitor labelling this an incident or hazard?
The article describes a new AI system (Sora 2) and its capabilities, which clearly involve AI systems generating video and audio content with advanced physical simulation and user integration. However, it does not report any direct or indirect harm caused by the system's development or use. It discusses potential privacy and ethical concerns but frames them as anticipated challenges and mitigation measures rather than realized harms. There is no indication of an AI Incident or a plausible AI Hazard occurring now. The main focus is on the technological advancement and its societal and creative implications, fitting the definition of Complementary Information.

OpenAI Launching TikTok Competitor for Short-Form AI Slop Videos

2025-10-01
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating and recommending deepfake-style videos, which can plausibly lead to significant harms such as misinformation, privacy violations, and copyright infringement. Although no actual harm is reported yet, the article highlights concerns about how such an app could become a platform for harmful AI-generated content. This fits the definition of an AI Hazard, as the development and intended use of the AI system could plausibly lead to an AI Incident in the future.

OpenAI Releases Ghoulish AI Video of Sam Altman

2025-10-01
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's Sora 2) generating video and audio content. However, it does not report any realized harm or credible risk of harm stemming from the AI system's development, use, or malfunction. The public's negative reaction to the AI-generated content is noted, but this does not constitute harm as defined (e.g., injury, rights violations, disruption). The article mainly provides an update on AI capabilities and societal responses, fitting the definition of Complementary Information rather than an Incident or Hazard.

OpenAI's Sora 2 Generates Realistic Videos of People Shoplifting

2025-10-01
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates realistic videos, including potentially harmful fabricated content. The demonstration of fake criminal acts using AI-generated videos illustrates a credible risk of future harm, such as wrongful accusations and legal consequences based on fabricated evidence. Although no direct harm has occurred yet, the plausible future harm from misuse of this technology is significant. The article also discusses existing issues with AI facial recognition and law enforcement misuse, reinforcing the potential for harm. Since the harm is not yet realized but plausibly could occur, the event is best classified as an AI Hazard.

OpenAI Knows an AI Succeeds by Being Useful to the Everyday User. Now Sora 2 Lets You Create Fake Videos with Your Friends from Your Phone

2025-10-01
Genbeta
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used for generating realistic videos, which can plausibly be misused to create harmful deepfakes. However, the article does not report any realized harm, injury, rights violations, or disruptions caused by the AI system. The focus is on the product launch, its capabilities, and potential risks with some mitigation measures in place. Therefore, this is not an AI Incident or AI Hazard but rather general AI-related news about a new product and its ecosystem, which fits the definition of Complementary Information.

OpenAI's Sora 2 Unleashed Internet Chaos in 24 Hours -- From Dildo Ads to Furry CEOs - Decrypt

2025-10-02
Decrypt
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Sora 2) that generates synthetic media including deepfake-like content and reproduces copyrighted material without explicit permission, which directly leads to violations of intellectual property rights and potential personal rights infringements. The harms are realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of intellectual property rights and harms to individuals' likeness rights.

OpenAI's Sora 2 Generates Audio and Comes With Its Own App

2025-10-01
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article primarily reports on the release of a new AI product and its capabilities, along with some contextual information about copyright issues and legal actions related to AI-generated content. There is no indication that the AI system has directly or indirectly caused any harm or incident yet. The mention of potential copyright issues and lawsuits relates to ongoing legal and societal responses rather than a new incident or hazard. Therefore, this is best classified as Complementary Information, providing context and updates about AI developments and governance without describing a specific AI Incident or AI Hazard.

OpenAI Wants to Make Deepfakes Accessible to Everyone (and It's Quite Frightening)

2025-10-01
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora 2) that generates realistic deepfake videos, including voice imitation, which can be used to impersonate individuals. This technology's development and release directly relate to AI systems capable of producing content that can harm individuals or communities through misinformation, identity theft, or reputational damage. While no specific harm has yet been reported, the article highlights credible concerns about potential misuse and harm, making this an AI Hazard. The presence of AI is explicit, and the plausible future harm from misuse of deepfakes is well recognized. Since no actual harm is reported yet, it does not qualify as an AI Incident. The article is not merely complementary information as it focuses on the potential risks and capabilities of the AI system.

Tech Companies Bet on Filling Social Networks with AI-Generated Content

2025-10-01
Forbes México
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the deployment and expansion of AI-generated content platforms and the associated legal and societal responses, such as copyright lawsuits and content moderation policies. There is no description of an actual AI Incident (harm realized) or an AI Hazard (plausible future harm) occurring in this context. The mention of lawsuits and content restrictions are examples of governance and societal responses to AI developments. Therefore, this event fits best as Complementary Information, providing context and updates on AI ecosystem developments and responses rather than reporting a new incident or hazard.

With Sora, Its Social Network, Will OpenAI Make Deepfakes Universal?

2025-10-01
Le Point.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that generates realistic deepfake videos, including impersonation features. The use of this AI system could plausibly lead to harms such as misinformation, identity violations, and community harm, fitting the definition of an AI Hazard. Since the article does not report any realized harm or incident but focuses on the potential risks and the company's mitigation efforts, the classification as an AI Hazard is appropriate. The presence of AI, the nature of the system's use, and the credible risk of harm justify this classification.

OpenAI Launches Tool Capable of Generating Fake Videos to Compete with TikTok

2025-10-01
Publico
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora 2) that generates realistic synthetic videos, including deepfakes. Although no direct harm has been reported so far, the article emphasizes plausible future harms such as the spread of disinformation and difficulty distinguishing real from fake videos, which could significantly impact communities. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harms like misinformation and social disruption, but no actual incident of harm is described yet.

OpenAI's Sora App Is Real, but It's Not Easy to Get Access

2025-10-01
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating deepfake videos, which can plausibly lead to harms such as violations of privacy, consent, and potential misinformation. However, the article does not report any realized harm or incidents resulting from the app's use so far. Instead, it focuses on the app's features, safeguards, and cautious rollout strategy. Therefore, this qualifies as an AI Hazard, reflecting plausible future harm rather than an AI Incident or Complementary Information.

OpenAI Launches Sora 2 Amid Copyright Controversy with Hollywood - Money Times

2025-10-01
Money Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) and centers on the use of copyrighted works in AI training, which is a violation of intellectual property rights if done without consent. However, the article does not describe any concrete incident where harm has occurred due to the AI's use; rather, it presents the ongoing dispute and potential legal implications. Therefore, this situation represents a plausible risk of harm related to copyright infringement but no direct or indirect harm has been reported yet. As such, it fits the definition of an AI Hazard, reflecting a credible potential for future harm stemming from the AI system's development and use practices.

OpenAI Announces Sora 2 and a New Social AI Video-and-Audio App for Creating Deepfakes of Your Friends via "Cameos", Spamming the Web with Even More AI Slop

2025-10-01
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Sora 2) that generates photorealistic video and audio deepfakes, which can be used to create realistic appearances of people in videos. The system's use in a social media context with algorithmic feeds increases the risk of widespread dissemination of manipulated content. Although OpenAI has implemented safeguards, the potential for misuse (e.g., creating deceptive deepfakes without consent, spreading misinformation) remains plausible. No direct or realized harm is reported in the article, but the credible risk of future harm to individuals and communities from misuse of the technology fits the definition of an AI Hazard rather than an AI Incident. The article also discusses societal and governance responses (safeguards, moderation), but the main focus is on the launch and capabilities of the AI system and its plausible risks, not on a realized incident or complementary information about past incidents.

OpenAI stirs buzz with talk of social video app

2025-10-01
Rolling Out
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video generation model) in development and internal use, but there is no indication that the AI system has caused or led to any harm or incidents. The article focuses on the potential and features of the app, internal enthusiasm, and strategic positioning, without any mention of realized harm, violations, or risks materializing. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is not merely a product launch announcement but provides contextual information about the AI ecosystem and potential future impacts, fitting the definition of Complementary Information.

OpenAI prepares its own "TikTok" with videos 100% generated by artificial intelligence

2025-10-01
Urgente 24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating video content and using AI for identity verification and face usage. The controversy around the ability of others to use a user's face in AI-generated videos without explicit consent, even if notifications are sent, points to plausible future harms related to privacy violations and misuse of personal identity. Since the platform is not yet publicly launched and no harm has been reported, this situation constitutes an AI Hazard rather than an AI Incident. The article focuses on the potential risks and the upcoming launch, not on realized harm or responses to harm, so it is not Complementary Information.

OpenAI's Sora social app is flooded with disturbing deepfakes

2025-10-02
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Sora app) that generates deepfake videos using biometric data and AI video generation technology. The article discusses the app's use and the potential for misuse, including the creation of disturbing or misleading content. While no direct harm is reported as having occurred, the plausible risk of harm such as misinformation, reputational harm, and social disruption is evident given the nature of the AI system and its outputs. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving harm to communities or violations of rights, but no actual harm has been documented yet in the article.

OpenAI Sora 2 and the Sora app: are the protections sufficient?

2025-10-01
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used for generating AI videos including deepfakes. The article focuses on the potential for non-consensual deepfake harms (violations of rights and harm to communities) that could plausibly arise from the app's use, despite current protections. Since no actual harm has been reported yet, but the risk is credible and recognized, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The article is not merely general AI news or product launch without risk discussion, so it is not Unrelated.

OpenAI's TikTok competitor makes deepfakes of your friends

2025-10-01
futurezone.at
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that generates deepfake videos, which can plausibly lead to harms such as privacy violations, misinformation, or reputational damage if misused. However, the article does not report any realized harm or incidents resulting from the app's use. Instead, it focuses on the app's features, consent mechanisms, and current limited availability. Therefore, this situation represents a plausible future risk (AI Hazard) rather than an actual incident or complementary information about a past incident.

OpenAI's Sora AI Ignites Debates on Copyright Infringement and Ethics

2025-10-02
WebProNews
Why's our monitor labelling this an incident or hazard?
The AI system (Sora) is explicitly described as generating video content that closely replicates copyrighted materials, implying unauthorized use of protected works in its training data. This constitutes a violation of intellectual property rights, a form of harm under the AI Incident definition (c). The article details that this is not merely a potential risk but an ongoing issue with real outputs demonstrating the infringement, thus qualifying as an AI Incident rather than a hazard or complementary information. The ethical debates and regulatory discussions further support the significance of the harm caused by the AI system's development and use.

OpenAI launches Sora 2 and unveils a social app to compete with TikTok

2025-10-01
Fredzone
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora 2) for generating realistic videos and a social media platform that relies on AI recommendation algorithms. Although no direct harm is reported, the article explicitly mentions concerns about non-consensual use of users' images and the lack of clear legal frameworks to address such abuses. These factors indicate a credible risk that the AI system's use could plausibly lead to harms such as violations of personal rights and community harm. Therefore, this qualifies as an AI Hazard rather than an Incident, as the harms are potential and not yet realized.

Tech companies bet on flooding social networks with AI-generated content

2025-10-01
UDG TV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (advanced generative AI models for video content) and their use in social media, but the article focuses on the proliferation of AI-generated content and related legal and policy responses rather than any concrete harm or incident caused by these systems. The lawsuits and copyright enforcement actions mentioned are responses to potential or alleged infringements, not confirmed incidents of harm caused by AI misuse or malfunction. Therefore, this is best classified as Complementary Information, providing context and updates on AI ecosystem developments and governance responses rather than reporting an AI Incident or AI Hazard.

Sora 2, OpenAI's good TikTok

2025-10-01
Frikipandi
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora 2) that generates realistic videos and has social sharing features. The article highlights credible concerns about misuse (deepfakes, misinformation), bias, and rights violations that could plausibly lead to harm. However, no actual harm or incidents have been reported or described as occurring. The discussion focuses on potential ethical, social, and legal challenges and the need for regulation and responsible use. Thus, the event is best classified as an AI Hazard, reflecting the plausible future risk of harm from the AI system's development and use.

OpenAI staff grapples with the company's social media push

2025-10-01
RocketNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Sora app with AI-generated content and deepfakes) and concerns about its potential negative impacts. However, there is no indication that any actual harm has occurred yet. The researchers express worry about possible future harms, making this a plausible risk scenario rather than a realized incident. The article primarily reports on internal debate and reflections, not on a specific AI Incident or a concrete AI Hazard event such as a near miss or credible threat. Therefore, it fits best as Complementary Information, providing context and insight into societal and governance responses and concerns about AI deployment in social media.

OpenAI Launches Sora 2 AI Video Model and Companion Social App

2025-10-01
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating realistic video and audio content with user likenesses, which can be used to create deepfakes. The 'cameos' feature directly raises risks of misuse, including consent violations and potential harm to individuals' reputations or privacy. Although OpenAI has implemented safeguards, the article does not describe any actual harm occurring yet. The concerns about deepfake misuse and ethical issues around data training represent credible potential harms that could plausibly lead to AI Incidents in the future. Hence, the event is best classified as an AI Hazard due to the plausible future harm from the AI system's use and capabilities.

ByteDance is in for sleepless nights: Sora has beaten Douyin's "AI social playground" experiment to launch

2025-10-01
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Sora) and its innovative social media application, which involves AI-generated content and avatars. While it mentions concerns about unauthorized use of videos for training, it does not report any actual harm, violation of rights, or incidents resulting from the AI system's use or malfunction. The discussion centers on the potential market impact and strategic competition, which is forward-looking and conceptual rather than reporting a specific AI Incident or Hazard. Therefore, it fits best as Complementary Information, providing context and updates on AI ecosystem developments and societal responses.

Sora 2: OpenAI takes a giant leap in video generation

2025-10-01
Enerzine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates realistic videos, which can plausibly lead to harms such as misinformation, identity misuse, or content moderation challenges. However, the article does not report any realized harm or incident caused by the AI system. The concerns are about potential future risks and ethical issues, making this an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and ethical challenges posed by the AI system's capabilities and deployment.

OpenAI threatens to shake Hollywood with video AI

2025-10-01
O Antagonista
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) used for generating videos, which can potentially infringe on copyright and intellectual property rights. While there are significant concerns and legal challenges discussed, no actual violation or harm has been reported as having occurred. The article focuses on the potential risks, legal debates, and preventive measures rather than a realized incident. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to violations of intellectual property rights and related harms in the future, but no direct or indirect harm has yet materialized.

OpenAI's video tool depicts movies and TV shows. Will the studios sue?

2025-10-01
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's video generation tool) that uses copyrighted content without explicit licenses, which is a violation of intellectual property rights. However, the article discusses potential legal challenges and ethical concerns rather than actual realized harm or ongoing lawsuits. Therefore, this situation represents a plausible future risk of harm (legal disputes and rights violations) rather than an incident where harm has already occurred. Hence, it qualifies as an AI Hazard.

OpenAI suddenly releases Sora 2: quite the "AI version of Douyin"!

2025-10-01
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used for generating videos and a social platform built around it. However, the article primarily presents a product launch and demonstration of AI capabilities without describing any direct or indirect harm caused by the AI system. Although it notes concerns about future misuse (e.g., videos being hard to distinguish from real ones), these are speculative and do not constitute realized harm. Therefore, the event does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides complementary information about a significant AI development and its ecosystem implications.

Sora Soars

2025-10-01
Spyglass
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Sora) and its use but does not report any actual harm or incident resulting from its use. The mention of IP guardrails and potential legal debates about using public figures' likenesses points to possible future issues but does not describe any current incident or hazard. Therefore, the event is best classified as Complementary Information, providing context and commentary on the AI system's societal and ethical implications without reporting an AI Incident or AI Hazard.

What is Sora, the deepfake TikTok launched by OpenAI?

2025-10-01
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
Sora is an AI system that generates deepfake videos based on user avatars. Although the platform includes safeguards and no actual harm is reported, the technology inherently carries risks of misuse, including misinformation, identity manipulation, and privacy breaches. These risks constitute plausible future harms linked to the AI system's use. Since no direct harm has yet occurred, but the potential is credible and significant, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information.

OpenAI rolls out Sora app and Sora 2 model in first push into social video

2025-10-01
Techloy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video and audio generator) and its deployment in a social app (Sora) that uses AI for content creation and sharing. However, the article does not report any direct or indirect harm caused by the AI system, nor does it describe a plausible imminent risk of harm. It discusses potential safety and trust issues and parental controls, but these are presented as precautionary measures rather than responses to actual incidents. The main focus is on the launch and features of the AI system and the social platform, along with considerations about safety and user control. This fits the definition of Complementary Information, as it provides supporting data and context about AI developments and governance without describing a new AI Incident or AI Hazard.

OpenAI Sora app takes on TikTok and Instagram Reels with AI video generation

2025-10-01
News9live
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used in a social media app for AI-generated video content. While the article acknowledges potential misuse risks (e.g., non-consensual videos), no actual harm or incident has occurred or is described. The focus is on the app's launch, features, and safety measures, which aligns with providing complementary information about AI developments and societal implications. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

OpenAI launches Sora 2: the video AI that goes beyond reality... and is more addictive than TikTok

2025-10-01
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) explicitly described as generating hyperrealistic videos and powering a social media platform with AI-generated content. The article details realized and potential harms including addiction to the app, misinformation risks, privacy concerns with face and voice usage, and copyright infringement due to training data. These constitute violations of rights and harm to communities. The AI system's use is central to these harms, making this an AI Incident rather than a mere hazard or complementary information. The article does not only discuss potential risks but also reports on the system's deployment and societal impact, fulfilling the criteria for an AI Incident.

Sora 2: Consent for Faces, Opt-Out for Copyright

2025-10-01
implicator.ai
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (Sora 2) and its use, including the consent mechanism for faces and the opt-out policy for copyrighted content. It discusses potential legal and ethical risks, such as copyright friction and the possibility of content flooding, which could plausibly lead to harms like intellectual property violations or the spread of misinformation. However, no actual harm or incident is reported as having occurred yet. Because the article mainly provides context, policy details, and potential risks without describing a specific harmful event, it fits best as Complementary Information rather than an AI Hazard or Incident.

Sora 2 opens Pandora's box: OpenAI's video model already worries Disney, Nintendo, and more

2025-10-01
es-us.vida-estilo.yahoo.com
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system capable of generating video content including copyrighted characters and celebrity likenesses without authorization, which constitutes a violation of intellectual property rights. The article documents actual generation of such content and ongoing legal disputes, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing harm through IP rights violations.

OpenAI's Sora 2 is wild! Scalpers resell free invitation codes at inflated prices as officials rush to ban the practice

2025-10-02
自由時報
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Sora 2) and discusses its capabilities and societal impact concerns. However, it does not report any realized harm or incident caused by the AI system. The resale of invitation codes is a violation of terms but does not itself constitute an AI Incident or Hazard. The mention of calls for regulation and the official response to the resale issue are governance and societal responses, fitting the definition of Complementary Information. There is no direct or plausible harm event described that would qualify as an AI Incident or AI Hazard.

OpenAI's Sora 2 is putting safety and censorship to the test with stunningly real videos

2025-10-03
CNBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates realistic deepfake videos, which have already led to harms such as misinformation (viral deepfakes), potential copyright violations, and challenges in content moderation. The AI system's use and deployment have directly led to these harms, fulfilling the criteria for an AI Incident. Although the company is implementing safeguards, users have found ways to circumvent them, and the harms are materializing. Therefore, this is an AI Incident rather than a hazard or complementary information.

This social app can put your face into fake movie scenes, memes and arrest videos

2025-10-02
Washington Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora) that generates realistic fake videos, including the unauthorized use of individuals' likenesses, which has already resulted in harassment and potential reputational harm. The article provides concrete examples of such misuse and the challenges in controlling it, indicating realized harm rather than just potential risk. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through misinformation and harassment. Although OpenAI has implemented some guardrails, these have been circumvented, and harm has occurred. Thus, the classification as an AI Incident is appropriate.

Sora: OpenAI's new video app is jaw-dropping, for better and for worse. Check out the test

2025-10-03
O Globo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) used to generate videos and deepfakes, which could plausibly lead to harms such as misinformation, copyright violations, and identity misuse. However, the article focuses on the launch, user experiences, and concerns about possible future harms rather than describing any actual harm or incident that has occurred. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents, but no direct or indirect harm has yet materialized according to the article.

Sora 2 has gotten out of hand: Sam Altman is in every video

2025-10-02
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Sora 2) for generating videos with faces of real people, which is explicitly described. The misuse of the system to create embarrassing or misleading videos of Sam Altman constitutes a direct harm to his personal rights and potentially intellectual property rights, fitting the definition of an AI Incident under violations of human rights or breach of intellectual property rights. The article reports that these harms are occurring, not just potential, so it qualifies as an AI Incident rather than a hazard or complementary information.

Pikachu at war and Mario on the street: OpenAI's Sora 2 thrills and alarms the internet

2025-10-02
NBC News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2) used to generate videos that include copyrighted characters and real people in fabricated scenarios. The use of copyrighted material without proper opt-out mechanisms and the creation of deepfakes depicting real individuals committing crimes constitute violations of intellectual property rights and potential harm to individuals' rights and reputations. These harms are either occurring or have already led to legal actions, fulfilling the criteria for an AI Incident. The AI system's development and use directly contribute to these harms, and the article discusses both realized and ongoing impacts, not just potential future risks. Hence, the classification as an AI Incident is appropriate.

Sora 2 impresses with its creativity and realism, but copyright violations are piling up

2025-10-02
SAPO
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system (Sora 2) was trained on copyrighted materials and generates content that infringes on copyright, such as characters from Sonic, Pokémon, Death Stranding, and others. The AI's outputs have directly caused violations of intellectual property rights, a recognized harm under the AI Incident definition. The involvement of the AI system in generating infringing content is clear and direct, and the harm is realized, not merely potential. Therefore, this event qualifies as an AI Incident.

How Safe Is Your Facial Data With OpenAI's Sora App?

2025-10-02
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The article primarily provides information about OpenAI's data retention and account deletion policies related to the Sora app and its AI-generated video feature. While it acknowledges potential risks (e.g., creation of inappropriate deepfakes), it does not describe any realized harm or incident caused by the AI system. There is no direct or indirect evidence of injury, rights violations, or other harms occurring. The discussion of potential misuse and safeguards is general and does not indicate a specific AI Hazard event either. Therefore, this content fits best as Complementary Information, offering context and updates about AI system use and governance rather than reporting a new incident or hazard.

OpenAI's Sora Bans Deepfakes of Public Figures, Except for Dead Celebrities

2025-10-02
PCMag UK
Why's our monitor labelling this an incident or hazard?
The article discusses the use of an AI system (Sora) to generate deepfake videos, which can plausibly lead to harms such as misinformation, reputational harm, and intellectual property violations. However, no specific incident of harm or violation has been reported as having occurred. The concerns are about potential misuse and future risks, making this an AI Hazard rather than an AI Incident. The watermarks and permission requirements for living public figures are safeguards but do not eliminate the plausible risk of harm from misuse of AI-generated deepfakes of deceased or fictional figures.

How Safe Is Your Facial Data With OpenAI's Sora App?

2025-10-02
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (OpenAI's Sora app) that processes facial and audio data to generate AI videos, which fits the definition of an AI system. However, no actual harm or violation has been reported; the concerns are about potential misuse (e.g., sexual deepfakes) and data privacy risks. The company has policies for data deletion and control, and is working on improvements. Since no direct or indirect harm has occurred, but plausible future harm is discussed, this qualifies as an AI Hazard. It is not Complementary Information because the focus is on potential risks rather than updates or responses to past incidents. It is not unrelated because the AI system and its risks are central to the article.

OpenAI's Sora Bans Deepfakes of Public Figures, Except for Dead Celebrities

2025-10-02
PC Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora) generating deepfake videos, which fits the AI system definition. The use of AI-generated deepfakes of deceased public figures and fictional characters could plausibly lead to harms such as misinformation or intellectual property violations. However, the article does not report any actual harm or incidents resulting from this use, only potential risks and policy details. The watermarks and permission requirements for living public figures indicate some mitigation efforts. The article also mentions a related lawsuit but does not link it to a specific incident caused by Sora. Thus, the article primarily provides complementary information about AI system use, policy, and potential implications rather than reporting a concrete AI Incident or Hazard.

With the launch of Sora, OpenAI throws the door wide open to widespread deepfakes

2025-10-02
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The AI system (Sora) is explicitly described as generating deepfake videos, which are known to have potential for misuse causing harm to individuals and communities. Since the article focuses on the launch and capabilities without reporting actual incidents of harm, it fits the definition of an AI Hazard, as the system's use could plausibly lead to harms such as misinformation or identity misuse. There is no indication of realized harm yet, so it is not an AI Incident. It is more than just complementary information because the launch itself highlights a credible risk of harm.

OpenAI staff split over new Sora app, packed with AI videos and Sam Altman deepfakes

2025-10-02
India Today
Why's our monitor labelling this an incident or hazard?
The Sora app is an AI system generating video content and deepfakes, which involves AI use. The article focuses on concerns and debates about potential negative impacts, including addictive behavior and misinformation risks, but does not describe any actual harm or incidents resulting from the app's deployment. The presence of deepfakes and AI-driven feeds implies plausible future risks of harm to communities or rights, fitting the definition of an AI Hazard. Since no harm has yet occurred, and the article centers on potential risks and internal concerns rather than a realized incident, the classification as AI Hazard is appropriate.

"Skibidi toilet" Sam Altman and reptilian Zuckerberg: the most absurd (and problematic) videos created with Sora

2025-10-02
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Sora 2) to create deepfake videos and other hyper-realistic clips. The article states that some of these videos are already in clear violation of copyright laws, indicating realized harm related to intellectual property rights. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of intellectual property rights, a form of harm under category (c). The article does not merely discuss potential or future harm but reports that violations are already occurring. Therefore, this is classified as an AI Incident.

Sam Altman caught stealing: the most-viewed (fake) video on OpenAI's new TikTok

2025-10-02
Corriere TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used to generate deepfake videos, which could plausibly lead to harms such as misinformation, reputational damage, or privacy violations. However, the article does not report any actual harm or legal violations occurring from these videos; rather, it discusses concerns and preventive measures. Therefore, this qualifies as an AI Hazard due to the plausible future harm from misuse of AI-generated deepfakes, but not an AI Incident since no harm has materialized yet.

Sora 2 impresses with its creativity and realism, but copyright violations are piling up

2025-10-02
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system (Sora 2) was used to generate videos that replicate copyrighted characters and likenesses without authorization, leading to copyright infringement. The AI's training and use have directly caused these violations. The presence of mechanisms to manage privacy concerns does not negate the fact that harm has occurred. Therefore, this event meets the criteria for an AI Incident due to realized harm involving violations of intellectual property and privacy rights caused by the AI system's outputs.

OpenAI's New Sora 2 AI Video Generator Still Uses Copyrighted Work by Default

2025-10-02
VICE
Why's our monitor labelling this an incident or hazard?
The AI system (Sora 2) is explicitly mentioned as generating video and audio content using copyrighted works by default, without explicit opt-in consent from copyright owners. This constitutes a violation of intellectual property rights, which is a breach of applicable law protecting such rights. Since the system's use directly leads to this violation, it qualifies as an AI Incident under the framework. The harm is realized (copyright infringement), not merely potential, and the AI system's development and use are central to this harm.

"OpenAI's new project is disturbing and soulless": Sam Altman's company launches a new app for generating videos with artificial intelligence

2025-10-02
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating realistic videos of people, including public figures, which can be misused to create false evidence or misinformation. Although the article highlights concerns about misuse and potential harms, it does not document any realized harm or incident resulting from the system's use. The focus is on the potential for harm and the company's preventive measures, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. Therefore, the event is best classified as an AI Hazard due to the plausible risk of harm from misuse of the AI-generated videos.

The internet's new landfill: how they are turning you into an addict of artificial garbage

2025-10-03
El Confidencial
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (generative AI models for video content) being used in social media platforms that produce addictive, low-quality content leading to digital addiction and wasted time, which are harms to individuals and communities. The AI's role in generating and curating this content is central, and the harms are ongoing and recognized by experts and insiders. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to significant harm (addiction, mental health issues, societal degradation). Although the article also discusses broader ecosystem and governance issues, the primary focus is on the realized harms caused by AI-generated content, not just potential or complementary information.

OpenAI Boldly Uses Copyrighted Characters Like Pikachu In Sora 2

2025-10-02
Kotaku
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions OpenAI's generative AI system Sora 2, which is trained on copyrighted material and generates videos featuring copyrighted characters and deepfakes. This use directly leads to violations of intellectual property rights and potential harm to individuals' reputations, fulfilling the criteria for harm under the AI Incident definition (specifically, violation of intellectual property rights and harm to individuals). The harms are realized as the AI-generated content is already circulating, and legal actions are anticipated or ongoing. Thus, this is an AI Incident rather than a hazard or complementary information.

Sora 2: video AI from the maker of ChatGPT becomes a copyright nightmare

2025-10-02
TecMundo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2) used to generate videos from text prompts, including unauthorized use of copyrighted material and realistic deepfakes. The harms include violations of intellectual property rights and potential privacy and reputational harms from deepfakes. These harms are occurring as the videos are already viral on social media. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm (copyright infringement and potential misuse of likeness).

OpenAI's Sora: Fast track to a vacuous AI-video future

2025-10-02
Axios
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (OpenAI's Sora and Meta's Vibes) that generate synthetic videos and discusses multiple plausible harms that could arise from their use, such as misinformation (truth erosion), copyright infringement, personal harm (bullying, humiliation), and behavioral manipulation. However, it does not report any concrete incidents of harm having already occurred. Instead, it focuses on the potential for these harms to materialize as these AI video platforms grow in popularity and their content spreads beyond controlled environments. Therefore, the event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents in the future but does not describe realized harm yet.

The Sora App Has Divided OpenAI Researchers

2025-10-02
Webtekno
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 and the Sora app) designed to generate and stream AI deepfake videos. While no direct harm has occurred, the article emphasizes credible concerns from experts about potential misuse and societal harm from such a platform. Since the platform is not yet widely deployed and no harm has materialized, but plausible future harm is recognized, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Why Did OpenAI Release a TikTok-Like Sora App? Here Are the AI Realities

2025-10-02
Webtekno
Why's our monitor labelling this an incident or hazard?
The article primarily provides complementary information about OpenAI's strategic decisions and the broader AI ecosystem challenges, including financial and computational resource constraints. It does not describe any realized harm (AI Incident) or a credible risk of harm (AI Hazard) stemming from the AI system's development or use. The focus is on explaining the context and rationale behind launching a new AI-powered social media app, along with industry-wide infrastructure issues, without detailing any direct or indirect harm or plausible future harm. Therefore, it fits the definition of Complementary Information rather than an Incident or Hazard.

OpenAI's new AI-powered TikTok is a danger: there are already deepfakes of everyone on Sora

2025-10-02
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating hyperrealistic deepfake videos using biometric data and AI technology. The harms include violation of intellectual property rights (use of protected characters without permission), privacy violations (storage and use of biometric data without full informed consent), and harm to communities through misinformation and identity manipulation. These harms are occurring as the app is already in use and spreading content that can damage reputations, manipulate perceptions, and infringe on rights. Hence, the event meets the criteria for an AI Incident due to direct and indirect harms caused by the AI system's use.

Sora 2 Is Creating Ads for "Epstein Island" Children's Toys

2025-10-02
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system (Sora 2) is explicitly involved in generating the controversial video content. The content involves sensitive and potentially harmful themes (child abuse, disparagement of public figures) that could plausibly lead to harm to communities or legal violations if widely disseminated. However, the article does not describe any actual harm or incident resulting from the video, only public controversy and ethical concerns. The event thus represents a credible risk of harm due to AI-generated content moderation challenges, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event.

Public records show AWS is aggressively promoting AI surveillance tech to law enforcement agencies, partnering with companies such as Flock Safety and ZeroEyes

2025-10-02
Techmeme
Why's our monitor labelling this an incident or hazard?
AWS's promotion of AI surveillance technology to law enforcement agencies involves AI systems designed for monitoring and potentially controlling populations. Such systems have a well-documented risk of violating human rights, including privacy and freedom from unwarranted surveillance. The involvement of AI in these surveillance tools and their active promotion to law enforcement indicates a direct link to potential or realized harm. Although the article does not specify a particular incident of harm, the deployment and encouragement of these AI systems in law enforcement contexts constitute an AI Incident due to the direct connection to violations of human rights and legal obligations.

OpenAI introduces a new copyright model that permits the use of copyrighted works

2025-10-03
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (OpenAI's Sora) that generate content using copyrighted works without prior consent, unless rights holders explicitly request exclusion. This use of copyrighted material without permission constitutes a breach of intellectual property rights, a recognized harm under the AI Incident definition. The article indicates that this practice is already occurring, not just a potential risk, and that rights holders are responding to it. Therefore, this is an AI Incident due to realized harm related to intellectual property rights violations caused by the AI system's use.

Sora 2's Deepfake Safeguards Aren't Enough to Stop a Flurry of Sam Altman Videos

2025-10-02
ExtremeTech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2.0) capable of generating deepfake videos, which inherently carries risks of harm such as misinformation or reputational damage. The article mentions the potential for misuse and the existence of safeguards that are insufficient to fully prevent unauthorized deepfakes. However, it does not describe any realized harm or incidents where the AI system's use has directly or indirectly caused injury, rights violations, or other harms. Therefore, this situation represents a plausible risk of harm but not an actual incident. It fits the definition of an AI Hazard, as the development and deployment of this AI system could plausibly lead to harms such as misinformation or reputational damage through deepfakes.

OpenAI divided over its new approach to social media

2025-10-02
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that generates and curates video content using AI, including deepfakes, which can influence user behavior and social dynamics. The article does not report any realized harm but raises credible concerns about potential harms such as addiction, misinformation, and social disruption, which align with harms to communities and individuals. The internal debate and external scrutiny underscore the plausible risk of future harm. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

The internet is being flooded with AI videos

2025-10-02
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI models generating hyperrealistic videos) and their use in producing content that is flooding social media. However, the article does not describe any direct or indirect harm resulting from these AI systems, such as injury, rights violations, or disruption. The legal actions it mentions concern intellectual property disputes but do not indicate that the AI system's use has yet caused a breach of rights or harm. The focus is on the emergence of AI-generated content and the societal response to it, rather than on a specific incident of harm or a credible risk of harm. This is therefore best classified as Complementary Information: it provides context and updates on AI developments and their societal and legal implications without reporting a new AI Incident or AI Hazard.

The new version of OpenAI's Sora video generator creates videos containing copyrighted material, unless the rights holders choose to opt out

2025-10-02
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2) used to generate videos that may include copyrighted content without prior consent, unless rights holders opt out. This directly implicates violations of intellectual property rights, which is a form of harm under the AI Incident definition. The harm is realized or ongoing because the AI system is actively producing such content, and the opt-out mechanism places the burden on rights holders rather than preventing the initial infringement. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Crypto Scams to Hit Next Level? OpenAI Releases Sora 2 - U.Today

2025-10-02
u.today
Why's our monitor labelling this an incident or hazard?
The event involves the development and release of an AI system (Sora 2) with capabilities that could plausibly lead to AI incidents involving harm to communities through deception and financial scams. Although no actual scams or harms are reported as having occurred yet, the credible risk of such harm is clearly articulated. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harm in the near future.

Users Flood Sora with Sam Altman Deepfakes | ForkLog

2025-10-02
ForkLog
Why's our monitor labelling this an incident or hazard?
The app Sora 2 uses AI video generation technology to create deepfakes, which is an AI system. The widespread creation and sharing of deepfakes, especially involving public figures, can plausibly lead to harm such as misinformation, reputational damage, and social disruption, which are harms to communities. Since the article does not report actual harm occurring but focuses on the potential risks and concerns, this qualifies as an AI Hazard rather than an AI Incident. The presence of controls like watermarks and cameo permissions are noted but do not eliminate the plausible risk of harm.

Virtual actresses and AI videos: the invasion of artificial intelligence into the real world

2025-10-03
Il Foglio
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI-generated virtual actors and video-generation models such as Sora 2 and Vibes). The use of copyrighted content without opt-in permission raises potential intellectual property violations, which could lead to legal harm. However, the article reports no actual legal rulings, injuries, or other realized harms; the concerns about misuse remain prospective. The event therefore fits best as an AI Hazard, reflecting the credible risk of copyright infringement and social disruption from AI-generated content, rather than an AI Incident or Complementary Information. It is not unrelated, because AI systems are central to the event.

Sora 2: copyright infringement and invites for sale on eBay

2025-10-02
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Sora 2 is an AI system generating videos that infringe on copyright by reproducing protected characters and media. The AI system was trained on copyrighted material without authorization, leading to large-scale copyright violations. The harm is realized as copyrighted content is being distributed without permission, constituting a breach of intellectual property rights. The sale of invitation codes on eBay further exacerbates unauthorized use. Hence, this qualifies as an AI Incident under the definition of violations of intellectual property rights caused directly or indirectly by the AI system's development and use.

Sora 2 Can Generate Pikachu and Other Copyrighted Characters

2025-10-02
eWEEK
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Sora 2) generating content featuring copyrighted characters, which constitutes a violation of intellectual property rights (a breach of obligations under applicable law). Additionally, the misuse of realistic avatars and deepfakes of public figures and private individuals can lead to harassment, bullying, and disinformation, which are harms to individuals and communities. These harms are occurring or have occurred, not just potential, making this an AI Incident rather than a hazard or complementary information. The AI system's development and use are directly linked to these harms.

OpenAI's Sora 2 unleashes an avalanche of copyright infringement, creating Pikachus and Nazi SpongeBobs

2025-10-02
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Sora 2) that generates videos using copyrighted characters without permission, which constitutes a violation of intellectual property rights (a breach of applicable law protecting intellectual property). The generated content includes offensive portrayals (e.g., SpongeBob as a Nazi leader), which can cause reputational and cultural harm to communities. The harms are realized and ongoing, not merely potential. The AI system's use directly leads to these harms by enabling rapid, mass production of infringing and harmful content. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI's Invite-Only Sora App Sparks AI Video Hype and Deepfake Concerns

2025-10-02
WebProNews
Why's our monitor labelling this an incident or hazard?
The Sora app is an AI system capable of generating realistic videos, which inherently carries risks of misuse such as deepfakes and misinformation. The article highlights these concerns and the controlled rollout to manage them. However, no direct or indirect harm has been reported as occurring from the app's use so far. The resale of invite codes and hype do not constitute harm. Thus, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms like misinformation or reputational damage in the future, but no incident has yet materialized.

OpenAI Launches Sora 2: Hyper-Realistic Video AI Sparks Ethical Debates

2025-10-03
WebProNews
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system explicitly described as generating hyper-realistic videos from text prompts, including deepfakes. The article details actual harms occurring, such as unauthorized replication of copyrighted content and likenesses, leading to lawsuits and regulatory concerns. The minimal censorship (opt-out) model has already resulted in misuse risks and ethical debates, with real-world consequences for intellectual property rights and potential misinformation during an election year. These factors meet the criteria for an AI Incident, as the AI system's use has directly and indirectly led to violations of intellectual property rights and potential harm to communities.

OpenAI: the new social app overrun with Sam Altman deepfakes

2025-10-02
Fredzone
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that generates deepfake videos using advanced AI technology. The article focuses on the potential for significant harms including misinformation, cyberharassment, and rights violations, which could plausibly arise from the app's use. While no concrete incident of harm is reported yet, the described circumstances and risks meet the criteria for an AI Hazard because the AI system's use could plausibly lead to harms such as harm to communities, violations of rights, and societal disruption. The article does not primarily report on a realized harm (incident) nor is it merely complementary information or unrelated news. Therefore, the classification as an AI Hazard is appropriate.

OpenAI's Sora 2 Can Generate Realistic Videos of People Shoplifting

2025-10-02
Gadget Review
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2) capable of generating realistic synthetic videos that could be used maliciously to fabricate evidence of crimes. This misuse could plausibly lead to violations of human rights, harm to individuals' reputations, and disruption of legal processes, which are harms under the AI Incident definition. However, since the article discusses potential misuse and challenges rather than a realized harm event, it fits the definition of an AI Hazard. The article also discusses the broader societal and legal implications, but the primary focus is on the plausible future harm from the AI system's capabilities.

Sora 2 Has Divided OpenAI - Donanım Günlüğü

2025-10-02
Donanım Günlüğü
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video generation model) and its use, with internal concerns about the nature of AI-generated content and its impact. However, the article does not describe any realized harm such as injury, rights violations, or disruption caused by the AI system. The concerns are anticipatory and reflect potential risks rather than actual incidents. Therefore, this qualifies as Complementary Information, providing context and internal reactions to the AI system's deployment without reporting an AI Incident or Hazard.

From Pikachu going to war to Wednesday Addams lunching with the Family Guy, OpenAI's generative AI video platform Sora is a copyright nightmare - Mediaweek

2025-10-03
Mediaweek
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of OpenAI's generative AI system Sora to create videos featuring copyrighted characters without authorization, constituting a violation of intellectual property rights. This is a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident under violations of intellectual property rights. The harm is ongoing and widespread, with users actively generating infringing content. Although OpenAI has some opt-out mechanisms, these are limited and do not prevent the immediate harm. Therefore, this event qualifies as an AI Incident.

The Sora 2 AI video app has become flooded with Nintendo videos

2025-10-02
Nintendo Wire
Why's our monitor labelling this an incident or hazard?
The Sora 2 app is an AI system generating video content using copyrighted Nintendo IP without explicit permission, which constitutes a breach of intellectual property rights. The article indicates that this has already led to or will lead to lawsuits, showing realized harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of intellectual property rights.

Sora 2, the new TikTok-style social app, divides OpenAI employees

2025-10-02
Key4biz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates video content autonomously and uses AI-driven feeds similar to TikTok. The article highlights ethical and strategic concerns about the app's potential to cause harm through deepfakes, addictive content loops, and societal impacts, but does not report any realized harm yet. The presence of credible expert warnings and regulatory attention supports the classification as a plausible future risk. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

OpenAI's Sora 2 AI Video Generator Faces Backlash Over Copyright and Deepfake 'Cameos' - WinBuzzer

2025-10-02
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Sora 2) used to generate videos including copyrighted characters without permission, which constitutes a violation of intellectual property rights, a recognized harm under the framework. Additionally, the 'cameo' deepfake feature raises concerns about misuse and consent, which are linked to violations of rights and potential harm to individuals. These harms are occurring or have occurred as a direct result of the AI system's use. The presence of a consent framework does not negate the realized harms from copyright infringement and deepfake risks. Hence, the event meets the criteria for an AI Incident.

Tech News AI Express: Overnight and Morning Tech Highlights at a Glance | October 3, 2025

2025-10-02
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The article provides general updates and announcements about AI and technology without describing any AI Incident or AI Hazard. There is no mention of realized harm or credible risk of harm caused by AI systems. The content is informational and contextual, fitting the definition of Complementary Information as it enhances understanding of the AI ecosystem but does not report new harms or plausible harms.

Sora 2 reinforces a new narrative: AI devours apps, and Meta's stock falls in response

2025-10-02
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2 video generation model) and its use in a popular app, which is causing market shifts and raising concerns about environmental costs and content quality. However, no actual harm or incident has occurred yet; the concerns are about plausible future risks such as environmental impact and content overload. Therefore, this qualifies as an AI Hazard because the AI system's development and use could plausibly lead to harms like environmental damage or community harm due to content saturation, but these harms are not yet realized. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated as it clearly involves AI systems and their societal impact.

How Sora Works: OpenAI's New Tool Capable of Generating Realistic Videos of Yourself

2025-10-02
ElPeriodico.digital
Why's our monitor labelling this an incident or hazard?
The article focuses on the capabilities and potential ethical concerns of the AI system Sora, highlighting plausible future harms such as identity misuse and deepfake abuse. Since no actual incident of harm or misuse is reported, and the main narrative centers on the technology's functioning, societal implications, and regulatory considerations, this fits the definition of an AI Hazard. It plausibly could lead to harms like identity theft or misinformation but has not yet caused them.

OpenAI's hot Sora video app is a copyright lawsuit waiting to happen

2025-10-02
Sherwood News
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (Sora 2) trained on data that includes copyrighted material, which raises plausible legal risks of copyright infringement lawsuits. However, no actual lawsuit or harm has yet materialized according to the article. Therefore, this situation represents a credible potential for harm related to intellectual property rights violations, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI challenges TikTok: here is Sora, the social network built on artificial intelligence

2025-10-02
libero.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating video content autonomously. The article highlights potential risks related to copyright violations and misuse of digital identity, which could plausibly lead to harms such as intellectual property rights breaches and harm to individuals' digital identities. No direct or indirect harm has been reported so far, only potential future risks. Therefore, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Daily Tech News 2 October 2025

2025-10-02
acecomments.mu.nu
Why's our monitor labelling this an incident or hazard?
The article details an AI system (OpenAI's Sora app) generating content that includes realistic depictions of Sam Altman and copyrighted Pokémon characters. The platform's approach to copyright—requiring opt-out rather than opt-in—raises legal and rights concerns. This constitutes a violation of intellectual property rights due to the AI-generated use of copyrighted material without explicit permission, which is a harm under the AI Incident definition. Since the harm is occurring (users are generating and sharing such content), this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI's Sora 2 launches into copyright chaos

2025-10-02
implicator.ai
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates synthetic videos using copyrighted characters and likenesses, which directly implicates intellectual property rights violations. The use of an opt-out model for copyrighted content means that many rightsholders' works appear without permission, constituting a breach of intellectual property rights (harm category c). The article documents that this is happening now, with examples of copyrighted characters appearing in generated content, and ongoing legal challenges. Therefore, the AI system's use has directly led to harm, qualifying this as an AI Incident rather than a hazard or complementary information.

AI video generator Sora goes viral, beating Gemini and ChatGPT to the top of Apple's app charts

2025-10-04
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora video generation AI) whose use has directly led to the creation and dissemination of harmful AI-generated content (e.g., defamatory videos), which constitutes harm to communities and potential violations of rights. The article explicitly discusses realized harms and controversies, not just potential risks, and describes mitigation efforts as responses rather than the main event. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI's Sora Makes Disinformation Extremely Easy and Extremely Real

2025-10-03
The New York Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that generates hyperrealistic videos used to create and spread disinformation. The article explicitly states that users have already deployed the AI to produce false content that could harm democracy, public trust, and individuals' reputations. This constitutes harm to communities and a violation of rights through deception and misinformation. The AI system's use is directly linked to these harms, qualifying the event as an AI Incident rather than a hazard or complementary information. The presence of actual disinformation being created and disseminated confirms realized harm rather than just potential risk.

Playing with Sora 2 is pure joy -- until you realize how dangerous it could be

2025-10-03
Business Insider
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system that generates videos from user images, enabling realistic deepfake-like content. The article emphasizes the potential for misuse leading to harms like misinformation and personal harm but does not describe any actual harm or incident that has taken place. Therefore, the event is best classified as an AI Hazard because it plausibly could lead to AI Incidents involving harm, but no realized harm is reported yet.

Sam Altman, CEO of OpenAI, appears in a viral AI-created video shoplifting from a store

2025-10-03
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) used to generate a realistic deepfake video depicting a false criminal act by a real person, which can mislead the public and cause reputational harm. This fits the definition of an AI Incident because the AI's use has directly led to harm to communities (misinformation) and potentially to the individual's rights (reputational harm). The presence of security measures does not negate the fact that harm has occurred. Therefore, the event is classified as an AI Incident.

Researchers question how the Sora app fits into OpenAI's mission

2025-10-03
Canaltech
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on expert commentary and debate about the potential implications and alignment of the Sora app with OpenAI's mission. It does not describe any actual harm or incident caused by the AI system, nor does it report a specific event where the AI system's use or malfunction has led to harm. The concerns expressed are about plausible future risks and the societal impact of AI-generated video content and social media feeds, which fits the definition of Complementary Information as it provides context, expert opinions, and governance-related discussion without reporting a concrete AI Incident or AI Hazard. Therefore, the classification is Complementary Information.

Sora 2: advances in AI video raise concerns about disinformation

2025-10-03
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates realistic videos from text or images. The system's use has directly caused harm by producing and spreading false videos of significant social and political events that did not occur, thereby harming communities and democratic institutions. This constitutes a violation of rights related to truthful information and harms to communities, fitting the definition of an AI Incident rather than a hazard or complementary information.

Tech companies bet on flooding social media with AI-generated content

2025-10-03
Gestión
Why's our monitor labelling this an incident or hazard?
The article describes the use and deployment of AI systems (video generation models like Sora 2) that are actively generating and flooding social media with AI-generated content. It also mentions ongoing lawsuits alleging copyright infringement, which constitutes a violation of intellectual property rights. Since these infringements have already led to legal action, the harm (violation of rights) is realized. This therefore qualifies as an AI Incident: the AI systems are directly involved in causing harm through unauthorized use of copyrighted material, with legal consequences already underway. The article also discusses potential societal impacts, but the primary harm is the intellectual property violation already occurring.

"Please sue them into the ground": an AI has just recreated a Cyberpunk 2077 mission on video, and players' outrage is monumental

2025-10-03
Vidaextra
Why's our monitor labelling this an incident or hazard?
An AI system (Sora 2) is explicitly involved in generating video content that replicates copyrighted material, which constitutes a violation of intellectual property rights. This harm is realized, not just potential, as the AI-generated video is publicly shared and has caused significant backlash from the original content creators and fans. Therefore, this qualifies as an AI Incident under the category of violations of intellectual property rights. The article focuses on the harm caused by the AI's use, not just on general AI developments or responses, so it is not Complementary Information. It is not merely a product launch or unrelated news, so it is not Unrelated. There is no indication that the AI system malfunctioned or caused physical harm, but the violation of rights is sufficient for classification as an AI Incident.

Sora, OpenAI's TikTok, unleashes chaos on social media with its deepfakes and memes

2025-10-03
SoftZone
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Sora) that generates hyperrealistic deepfake videos. The use of this AI system has directly led to harm, including reputational damage to public figures and potential political manipulation, which are harms to communities and violations of rights. The article describes actual dissemination and impact of these AI-generated deepfakes, not just potential risks. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI Sora 2: Hyperrealistic videos that challenge truth in the AI era

2025-10-03
WWWhat's new
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system that generates realistic videos and audio from text inputs. The article details a concrete example where the system was used to create a fake video of a CEO committing theft, illustrating how such technology can directly cause harm by fabricating evidence and misleading judicial processes. The harm to communities and potential violation of legal rights are realized or imminent, fulfilling the criteria for an AI Incident. Although the company has safeguards, these are shown to be insufficient, and the misuse has already occurred, not just a potential risk. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

The dark side of OpenAI: what one user discovered when deleting her Sora account

2025-10-03
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as Sora 2 is an advanced AI video generation model and ChatGPT is an AI language model. The incident stems from the use and management of these AI systems, specifically the account deletion process. The harm here is indirect but significant: the user loses access to multiple AI services and associated data unexpectedly, which can be considered a violation of user rights and trust, potentially breaching obligations related to data protection and user consent. Although no physical harm or direct legal violation is explicitly stated, the unexpected and opaque account deletion consequences constitute a harm to user rights and trust, fitting within the scope of an AI Incident. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI systems' account management and user data handling.

OpenAI's Sora 2 revolutionizes text-to-video: download boom and deepfake fears

2025-10-03
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Sora 2) that generates video content from text prompts, fitting the definition of an AI system. Although no concrete harm has been documented yet, the mention of reported cases of security circumvention and the real risk of deepfake misuse and disinformation indicate plausible future harms. These potential harms align with violations of rights and harm to communities. Since the harms are not yet realized but plausible, this event qualifies as an AI Hazard rather than an AI Incident. The article also discusses the broader ecosystem and regulatory concerns, but the primary focus is on the potential risks posed by the AI system's deployment and misuse.

Sora 2 can create hyperrealistic fake videos. OpenAI's gamble

2025-10-04
Il Foglio
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system capable of generating realistic fake videos, which can be used maliciously for disinformation and impersonation. The article mentions ongoing legal challenges related to copyright infringement and the risk of spreading false information, but does not report any concrete incidents of harm having occurred yet. The concerns about misuse and the potential for harm to communities and rights are credible and plausible. Hence, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident, but no direct or indirect harm has been confirmed at this stage.

Sora 2: copyright controls and revenue sharing

2025-10-04
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Sora 2) that generates content potentially infringing copyright, a violation of intellectual property rights. Although unauthorized use of copyrighted characters is ongoing, no formal legal complaints or enforcement actions have yet been reported. The main focus is on upcoming policy changes and controls to mitigate this issue and introduce revenue sharing. Because the article mainly discusses responses and policy updates rather than a new harm, it fits best as Complementary Information rather than a new AI Incident or AI Hazard.

OpenAI's viral AI video App Sora triggers Deepfake debate

2025-10-03
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) that generates deepfake videos, which is explicitly AI-based video generation technology. The article reports actual harms occurring, such as the creation and spread of deepfake videos including explicit content and unauthorized use of likenesses, which can cause reputational harm and violate intellectual property rights. The AI system's use and the failure of its safeguards to fully prevent harmful content directly contribute to these harms. Hence, the event meets the criteria for an AI Incident due to realized harm linked to the AI system's use and malfunction of content moderation.

OpenAI shakes up its image with the launch of Sora

2025-10-03
Fredzone
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Sora) that generates deepfake videos and uses AI-driven engagement techniques, which implies AI system involvement. However, it does not report any realized harm such as injury, rights violations, or community harm. The concerns expressed are anticipatory and ethical debates rather than documented incidents or imminent risks. Therefore, the event does not qualify as an AI Incident or AI Hazard. Instead, it fits the category of Complementary Information as it provides context, internal perspectives, and regulatory attention related to the AI system's launch and potential implications.

Sora: OpenAI's new video app is jaw-dropping

2025-10-03
Oman Observer
Why's our monitor labelling this an incident or hazard?
Sora is an AI system that generates realistic videos using AI models. The article does not report any actual harm or incident caused by the app so far, but it clearly outlines plausible future harms such as misinformation, scams, and copyright infringement due to the ease of creating deepfakes and unauthorized likenesses. These risks constitute a credible potential for harm, fitting the definition of an AI Hazard. Since no realized harm or incident is described, and the focus is on potential risks and societal concerns, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI's Sora 2 takes AI video generation a step further: now you are the star

2025-10-03
Urban Tecno
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora 2) capable of generating realistic videos that could be used for misinformation or disinformation, which is a recognized harm to communities. However, the article does not describe any actual incidents of harm occurring from the use of Sora 2. The focus is on the capabilities, improvements, and safety measures, indicating potential future risks rather than realized harm. Hence, it fits the definition of an AI Hazard, as the development and deployment of this powerful AI video generation tool could plausibly lead to incidents of misinformation or other harms in the future.

How OpenAI and Meta are challenging TikTok | MilanoFinanza News

2025-10-03
Milano Finanza
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (Sora) that generates synthetic video content. The article discusses the plausible future harms that could arise from this technology, such as misinformation, intellectual property infringement, and social harms including bullying and defamation. However, it does not report any actual harm or incident that has already occurred. Therefore, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harms but no direct or indirect harm has yet materialized according to the article.

OpenAI's "limited-edition" Sora surges in popularity, topping the US Apple App Store on its fourth day

2025-10-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora, an AI video generation app) and discusses its use and potential safety challenges. However, there is no indication that the AI system has directly or indirectly caused any harm (such as injury, rights violations, or community harm). The mention of controversial content and safety concerns is speculative and relates to potential risks rather than realized incidents. Therefore, this article is best classified as Complementary Information, providing context on AI system deployment, user reception, and governance considerations without reporting an AI Incident or Hazard.

First AI actress signs a debut deal, Hollywood erupts, and peers snipe: thanks for stealing my job

2025-10-03
新浪财经
Why's our monitor labelling this an incident or hazard?
The AI system (virtual actress) is explicitly described as being developed and used to replace human actors, leading to social and economic harm to real actors who lose job opportunities. This harm is indirect but real and significant, as it affects labor rights and livelihoods. The AI system's use in generating realistic performances and social media presence directly contributes to this harm. Therefore, this qualifies as an AI Incident due to violation of labor rights and harm to communities caused by the AI system's deployment.

First AI actress signs a debut deal, Hollywood erupts, and peers snipe: thanks for stealing my job

2025-10-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (AI-generated virtual actors and video generation models) that could plausibly lead to significant harm in the form of job displacement and economic harm to human actors, as well as cultural and social impacts on the entertainment industry. Since no direct harm has yet materialized but the potential for harm is credible and discussed extensively, this qualifies as an AI Hazard. It is not an AI Incident because no actual injury, rights violation, or other harm has been reported as having occurred. It is not Complementary Information because the article is not primarily about responses or updates to a prior incident but about the emergence of a new AI actor and its implications. It is not Unrelated because the AI system's involvement and potential impacts are central to the article.

OpenAI's video generation app Sora has climbed to the top of Apple's App Store - cnBeta.COM mobile edition

2025-10-03
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Sora) used for generating video and audio content. The concerns raised about misuse and potential risks indicate plausible future harms related to misinformation, reputational damage, or legal violations. However, the article does not report any actual harm or incident resulting from the AI system's use so far. The focus is on the product launch, user engagement, and the company's efforts to address safety and ethical considerations. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harms but no incident has yet materialized.

OpenAI will offer copyright holders mechanisms to monetize the use of their characters by Sora users in video generation.

2025-10-04
avalanchenoticias.com.br
Why's our monitor labelling this an incident or hazard?
The article discusses OpenAI's plans to implement mechanisms for copyright holders to regulate and monetize the use of their intellectual property in AI-generated video content. While it involves an AI system (Sora, an AI video creation tool), the event focuses on the introduction of rights management and monetization features rather than any realized harm or direct risk of harm. There is no indication of an incident or hazard occurring or imminent. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related copyright concerns.

OpenAI's video generation app Sora has climbed to the top of Apple's App Store - NetEase mobile

2025-10-03
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) used for video generation, which has led to public concern about potential misuse and risks. However, the article does not report any realized harm or incident caused by the AI system, only potential risks and ongoing debates. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (e.g., misinformation, reputational damage, legal issues) but no direct or indirect harm has been documented yet. It is not Complementary Information because the main focus is on the app's launch and its potential risks, not on responses to a past incident. It is not an AI Incident as no harm has materialized, and it is not Unrelated because the event clearly involves an AI system and its societal implications.

OpenAI's video generation model Sora 2 launches; videos of popular characters appear in large numbers - cnBeta.COM mobile edition

2025-10-03
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Sora 2) that generates videos using copyrighted characters without consent, which constitutes a violation of intellectual property rights (a breach of applicable law protecting IP rights). It also allows generation of deepfake videos of public figures, which can cause reputational harm and misinformation, harming communities. These harms are occurring or imminent, making this an AI Incident. The involvement of the AI system in development (training on copyrighted data) and use (generating infringing and deepfake content) directly leads to these harms. Legal actions and artist complaints further confirm realized harm.

Behind the AI-generated videos of Sora, OpenAI's new app, very real fears

2025-10-03
Challenges
Why's our monitor labelling this an incident or hazard?
Sora is an AI system generating hyperrealistic videos including deepfakes, which are already being used to create misleading content and infringe on copyright and personal image rights. The article documents actual circulation of such videos causing confusion and concern, which constitutes realized harm. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. Although the article also discusses potential future harms, the presence of actual harm takes precedence in classification.

Sora 2 is the TikTok of deepfake videos: OpenAI wants to win over the social media audience

2025-10-03
Informaticien.be
Why's our monitor labelling this an incident or hazard?
Sora 2 is an AI system generating deepfake videos, which inherently carries risks of misuse leading to harms like misinformation or violation of rights. The article does not describe any actual harm or incidents caused by the system so far, only its launch and features. Given the potential for such AI-generated deepfake content to cause significant harm in the future, this qualifies as an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the system is clearly AI-based and the potential for harm is credible and foreseeable.

2025-09-30
Next
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, namely generative AI models creating video content. The development and use of these AI systems could plausibly lead to harms such as copyright violations and misuse of deepfakes, which are recognized concerns in AI-generated media. However, no direct or indirect harm has been reported as having occurred so far. The article mainly discusses the launch, features, and potential issues, making it a credible AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems are central to the described events.

OpenAI is launching its own TikTok! With 100% AI-created videos

2025-09-30
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (Sora 2) that generates video content autonomously. While the article discusses potential ethical issues, copyright disputes, and societal implications, it does not report any realized harm or incidents caused by the AI system. The concerns about misuse of personal images and copyright conflicts are plausible future risks but have not materialized as harms yet. Therefore, this event represents a credible AI Hazard due to the plausible future harms related to privacy, rights violations, and societal impact from the deployment of this AI-generated video platform.

OpenAI, a potentially revolutionary new competitor to TikTok in AI video - Nanoblog

2025-10-01
Nanoblog
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI-generated video content and recommendation algorithms) in development and planned use. However, no actual harm or incident is reported; the article discusses potential regulatory challenges and competition but does not describe any realized injury, rights violation, or other harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context on AI developments and governance challenges in the ecosystem.

Sora 2 is drawing a huge crowd of teenage boys. This doesn't bode well -- trust me.

2025-10-10
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora 2) that generates AI-driven video content and avatars. The article details how this system is being used to produce offensive, juvenile, and potentially harmful content, including unauthorized use of the likenesses of deceased celebrities and real individuals. The harms described are social and community harms, including a hostile environment for women and potential harassment, which align with harm to communities and possible rights violations. However, the article does not report direct or confirmed incidents of injury, legal violations, or severe harm; rather, it highlights emerging risks and moderation challenges in which the AI system's development and use are central. Given a plausible and credible risk of harm escalating, but no confirmed realized harm, classification as an AI Hazard is appropriate: the article serves as a warning about potential future harms rather than documenting a fully materialized AI Incident.

How to control your cameos: OpenAI's Sora 2 update adds privacy, watermarks, and safety features | Mint

2025-10-07
mint
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (OpenAI's Sora) and focuses on improvements to privacy controls, content moderation, and safety mechanisms. However, there is no indication that any harm has occurred or that there is a plausible risk of harm resulting from the update itself. Instead, the update is a response to user feedback and aims to mitigate potential misuse and enhance user control. Therefore, this is complementary information providing an update on the AI system's development and governance, rather than reporting an AI incident or hazard.

We tested OpenAI's Sora 2 video generator to find out why Hollywood is freaking out

2025-10-10
CNBC
Why's our monitor labelling this an incident or hazard?
The article focuses on the launch of an AI video generation system and the resulting concerns from Hollywood about copyright and exploitation. While these concerns relate to potential legal and rights issues, the article does not describe any actual harm or violation that has occurred due to the AI system's use. The changes in the model's prompt handling indicate a response to these concerns. Since no direct or indirect harm is reported, and the main focus is on the societal reaction and policy adjustments, this fits the definition of Complementary Information rather than an Incident or Hazard.

Sora gives deepfakes 'a publicist and a distribution deal.' It could change the internet

2025-10-10
NPR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's Sora app) that generates deepfake videos, which are flooding social media platforms and reshaping public perception of truth. The article details realized harms such as misinformation, erosion of trust, and the potential for malicious uses including scams and propaganda. These constitute harm to communities and violations of rights. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. Although safety measures exist, they are being circumvented, and the harms are ongoing and significant.

Sora imposters run amok on the App Store - here's how to find the real one by OpenAI

2025-10-10
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves AI systems because the apps in question are AI-powered or AI-related (e.g., AI video generators). The harm described includes financial harm to users (harm to property) and potential security risks (harm to users' devices or data). The fake apps' misuse of the 'Sora' brand and their misleading nature have directly led to harm to users. Therefore, this qualifies as an AI Incident due to realized harm caused by the misuse and impersonation of AI systems leading to financial and security harms to users.

I tried Sora 2 - I love it and it's about to ruin everything

2025-10-10
TechRadar
Why's our monitor labelling this an incident or hazard?
The Sora 2 app is an AI system generating realistic AI videos and avatars. The article details how it is being used to create videos of celebrities and others without clear consent, raising concerns about misinformation, reputational harm, and AI output drowning out real content. Although no specific incident of harm is documented, the potential for harm is credible and significant, including violations of rights and harm to communities through misinformation and impersonation. The article also notes the current limitations of content controls and the rapid growth of AI-generated content. Since harm is plausible but not yet realized, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Fake Sora apps flood App Store as OpenAI Sora hits record downloads, faster than ChatGPT launch

2025-10-10
India Today
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (OpenAI's Sora video generator) and the misuse of its brand by fake apps, but it describes no harm caused by the AI system's development, use, or malfunction. The harm here relates to fraudulent app listings and potential consumer deception, which the AI Incident definitions cover only when it leads to significant harm such as rights violations or health harm; none is stated. There is no indication that the AI system itself caused or could plausibly cause harm. The article mainly provides context on the AI ecosystem, app store challenges, and responses, fitting the definition of Complementary Information rather than an Incident or Hazard.

Will Any Athletes Follow Jake Paul's Deepfake Footsteps?

2025-10-10
Bleacher Report
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Sora 2) that generates deepfake videos using celebrity likenesses, which qualifies as an AI system. However, it does not report any realized harm such as injury, rights violations, or disruption caused by the AI system. The discussion is speculative about future adoption and cultural impact, without evidence of direct or indirect harm. Therefore, it fits the definition of Complementary Information, providing context and societal implications without reporting an AI Incident or AI Hazard.

Fake 'Sora' apps take over Apple's app store: How to check and download the real OpenAI Sora?

2025-10-10
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (OpenAI's Sora app) and the presence of fake apps impersonating it, which have caused direct harm to users through scams and financial loss. The AI system's development and use are central to the incident, as the fake apps exploit the AI app's popularity. The harm is realized, not just potential, and includes violation of user trust and financial harm, fitting the definition of an AI Incident. The event is not merely a product launch or general news, nor is it a future risk without realized harm, so it is not an AI Hazard or Complementary Information.

Watch Out for Fake Sora Apps

2025-10-10
Lifehacker
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of the legitimate Sora app, which uses AI for hyper-realistic video generation. The harm arises from the malicious use of fake apps impersonating this AI system, leading to fraud and potential user harm such as malware infection or data theft. Since the AI system's use (or misuse) has directly led to harm through these fraudulent apps, this qualifies as an AI Incident. The article describes realized harm (fraud, potential malware, and data theft) linked to the AI system's misuse, not just potential future harm or general information.

OpenAI introduces cameo controls in Sora amid deepfake concerns: What's new

2025-10-07
Business Standard
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Sora) that generates video and audio content, including deepfakes. The introduction of cameo controls is a response to concerns about harms caused by misuse of AI-generated deepfake content, which can lead to violations of rights and harm to individuals and communities. However, the article describes these controls as preventive measures to curb misuse and enhance safety, rather than reporting an actual harm or incident caused by the AI system. Therefore, this is a governance and safety improvement update that complements understanding of AI risks and responses, without describing a new AI Incident or AI Hazard.

OpenAI's New Sora App Lets Users Generate AI Videos -- And Star in Them

2025-10-10
Scientific American
Why's our monitor labelling this an incident or hazard?
While the app involves AI systems generating video content and includes features that could lead to misuse (e.g., unauthorized use of likenesses), the article focuses on the launch, user experimentation, and early copyright complaints leading to platform restrictions. There is no indication that these issues have resulted in realized harm such as legal violations, health or safety incidents, or significant community harm. The discussion of copyright complaints and platform responses is about managing potential or emerging issues rather than describing an AI Incident or Hazard. Therefore, this is best classified as Complementary Information, providing context on societal and governance responses to AI-generated content and its challenges.

Fake 'Sora' Apps Flood App Store as OpenAI's Video Generator Breaks Download Records

2025-10-10
The Hans India
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (OpenAI's Sora) and describes harm (consumer deception, financial loss) linked to fake apps impersonating the AI product. However, the harm is caused by fraudulent third-party apps exploiting the AI system's popularity, not by the AI system's development, use, or malfunction itself. The AI system is not directly or indirectly causing the harm through its outputs or behavior. The event focuses on the ecosystem and governance challenges (app store moderation failures, fraud) rather than a direct AI Incident or a plausible AI Hazard. Hence, it fits the definition of Complementary Information, providing context and updates on societal responses and risks related to AI product proliferation.

OpenAI's Sora Is in Serious Trouble

2025-10-10
Futurism
Why's our monitor labelling this an incident or hazard?
The event clearly describes an AI system (Sora 2) whose use has directly led to violations of intellectual property rights, a form of harm under the AI Incident definition. The generation and dissemination of copyrighted characters and content without authorization constitute a breach of applicable law protecting intellectual property rights. The legal actions and public statements from rightsholders confirm that harm has materialized. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and directly linked to the AI system's use.

Trevor Noah says AI-powered video generators like OpenAI's Sora could be 'disastrous'

2025-10-10
GeekWire
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI video generators like Sora) and discusses their use and potential misuse, particularly unauthorized use of human likenesses. While it outlines significant concerns about harms to individuals' rights and societal trust, it does not report a concrete AI Incident where harm has already occurred due to the AI system's use or malfunction. Instead, it emphasizes the potential for harm, legal challenges, and the need for regulatory responses. This aligns with the definition of an AI Hazard, as the AI system's development and use could plausibly lead to incidents involving rights violations and societal harm in the future.

Sora hits 1M downloads in 5 days, beats ChatGPT's record

2025-10-10
NewsBytes
Why's our monitor labelling this an incident or hazard?
The Sora app is an AI system involved in generating or curating short videos, including those featuring copyrighted characters without authorization. This use directly leads to a violation of intellectual property rights, a recognized harm under the AI Incident definition. The controversy and OpenAI's planned response confirm that the harm is realized and linked to the AI system's use. Hence, this event is classified as an AI Incident.

AI app Sora has been forced to make changes after *those* viral HSTikkyTokky videos

2025-10-10
The Tab
Why's our monitor labelling this an incident or hazard?
Sora is an AI system generating realistic videos from text prompts, which qualifies as AI involvement. The viral AI-generated videos have caused reputational harm and defamation to individuals, constituting a violation of personal rights (a form of human rights). The harm is realized, not merely potential, as the streamer has publicly complained and threatened legal action. The company's response of replacing the opt-out system with an opt-in system further confirms the recognition of the harm caused. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use.

OpenAI Sora App Surpasses 1 Million Downloads, Outpacing ChatGPT

2025-10-10
Android Headlines
Why's our monitor labelling this an incident or hazard?
The Sora app is an AI system that generates realistic videos, including deepfakes, which have already been misused to create copyrighted content without permission. This constitutes a violation of intellectual property rights, a form of harm under the AI Incident definition. Additionally, the potential for fraud, harassment, and misinformation from such AI-generated content is a direct harm to communities. Since these harms are occurring or have occurred, this event qualifies as an AI Incident rather than a hazard or complementary information. The article's focus on realized misuse and harm supports this classification.

OpenAI's Sora Tops One Million Downloads in Record Time, Sparking Deepfake and Copyright Concerns

2025-10-10
The Global Herald
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Sora) that generates realistic videos, including deepfakes and copyrighted content. While there are concerns about harms such as emotional distress to families and intellectual property violations, these harms are potential and have not been confirmed as realized incidents. The company is adapting policies and engaging with stakeholders, indicating ongoing management of risks. Thus, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms like violation of rights and harm to communities, but no direct or indirect harm has yet been reported.

Sora gives deepfakes 'a publicist and a distribution deal.' It could change the internet

2025-10-10
bpr.org
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (OpenAI's Sora app) that generates deepfake videos, which are flooding social media platforms and reshaping perceptions of truth. The article details actual harms occurring, such as misinformation, erosion of trust in media, and potential for harmful synthetic content, which are harms to communities and violations of rights. The AI system's development and use are central to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are already materializing and linked to the AI system's outputs.

OpenAI's New Video App Tops 1M Downloads -- Even Faster Than ChatGPT

2025-10-10
Maginative
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora's generative AI video model) whose use has directly led to harms including copyright violations and the spread of harmful or manipulated content. These constitute violations of intellectual property rights and harm to communities, fulfilling the criteria for an AI Incident. Although the app is new, the harms are already occurring, not just potential. Therefore, this event is best classified as an AI Incident.

Will a Celebrity Tsunami Hit Sora?

2025-10-10
Spyglass
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Sora) that generates hyperrealistic videos of individuals, including celebrities, which are being widely disseminated and causing reputational harm and personal distress. Jake Paul explicitly states that the AI-generated false narratives are affecting his relationships and businesses, indicating realized harm. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. The article does not merely discuss potential future harm or general AI developments but reports on actual harm occurring due to the AI system's outputs.

OpenAI Sora 2 update lets users control cameos, adds watermarks and safety features

2025-10-07
News9live
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (Sora) and focuses on improvements to user control, safety, and transparency to prevent misuse and abuse. However, the article does not report any actual harm or incident caused by the AI system, nor does it describe a plausible future harm scenario. Instead, it details enhancements and safety measures implemented in response to user feedback and concerns. Therefore, this is best classified as Complementary Information, as it provides updates on governance and safety improvements related to an AI system without describing a new AI Incident or AI Hazard.