OpenAI Removes ChatGPT Sharing Feature After Private Conversations Indexed by Google

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI's ChatGPT sharing feature allowed users' conversations, including personal information, to be indexed by Google and other search engines, leading to privacy breaches. After public outcry and reports of thousands of private chats becoming searchable, OpenAI quickly discontinued the feature to prevent further unintended data exposure.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) whose outputs (user chats) are shared and then indexed by a search engine, leading to exposure of private and sensitive information. This exposure constitutes a violation of privacy rights, a form of harm to individuals. The harm is directly linked to the use of the AI system and the sharing mechanism that makes the data publicly accessible. The article highlights that users are often unaware that sharing a link makes their chats public and searchable, which indicates a failure in user understanding and possibly in the system's design or communication. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (privacy violations).[AI generated]
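The exposure mechanism is worth spelling out: sharing a chat produced a public URL, and whether search engines index such a URL depends in part on crawler directives such as robots.txt. The Python sketch below illustrates that dependency using the standard library's `urllib.robotparser`. The robots.txt rules and the example URL are hypothetical, not OpenAI's actual configuration; in the real incident, indexing also hinged on an explicit opt-in "discoverable" checkbox and page-level noindex handling.

```python
# Minimal sketch: how crawler directives gate whether a shared-chat URL is
# even eligible for indexing. The robots.txt contents below are illustrative
# assumptions, NOT OpenAI's real file.
from urllib import robotparser

# Hypothetical permissive policy: shared-conversation paths left crawlable.
permissive = robotparser.RobotFileParser()
permissive.parse([
    "User-agent: *",
    "Allow: /share/",
])

# Hypothetical restrictive policy: shared-conversation paths disallowed.
restrictive = robotparser.RobotFileParser()
restrictive.parse([
    "User-agent: *",
    "Disallow: /share/",
])

url = "https://chatgpt.com/share/example-conversation-id"  # hypothetical URL
print(permissive.can_fetch("Googlebot", url))   # True: crawlable, so indexable
print(restrictive.can_fetch("Googlebot", url))  # False: crawlers told to stay out
```

A platform can therefore make shared URLs non-indexable by disallowing the path or serving a noindex directive, which is broadly how previously indexed pages drop out of search results once such a feature is withdrawn.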
AI principles
Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Accountability

Industries
Consumer services; Digital security; IT infrastructure and hosting; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Reputational; Psychological

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation

In other databases

Articles about this incident or hazard

Google is "leaking" your ChatGPT chats into search results - what you need to know

2025-07-31
unian
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose outputs (user chats) are shared and then indexed by a search engine, leading to exposure of private and sensitive information. This exposure constitutes a violation of privacy rights, a form of harm to individuals. The harm is directly linked to the use of the AI system and the sharing mechanism that makes the data publicly accessible. The article highlights that users are often unaware that sharing a link makes their chats public and searchable, which indicates a failure in user understanding and possibly in the system's design or communication. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (privacy violations).

ChatGPT removes feature allowing search engines to find, display people's conversations

2025-08-01
Washington Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the management of user data and its discoverability via search engines. While no direct harm is reported, the previous feature posed a plausible risk of privacy violations and unintended information disclosure, which could be harmful. The removal of this feature is a response to mitigate such risks. Therefore, this event is best classified as Complementary Information, as it describes a governance and safety response to a potential AI-related harm rather than a new incident or hazard.

ChatGPT chats are showing up in Google Search - how to find and delete yours

2025-07-31
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose shared outputs are publicly accessible and indexed by search engines, leading to indirect harm through exposure of personal and sensitive information. This constitutes a violation of privacy rights, a form of harm to individuals. Since the harm is realized (exposure of sensitive data), this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI quickly rolled back a new feature that allowed users to make private conversations with ChatGPT searchable

2025-08-01
Business Insider
Why's our monitor labelling this an incident or hazard?
The event centers on an AI system (ChatGPT) and a feature that made private conversations searchable, which led to accidental sharing of private information. This constitutes a violation of privacy rights, a form of human rights violation under the framework. The harm is realized as users' private data was exposed without full informed consent, even if anonymized. The AI system's use and feature design directly contributed to this harm, making it an AI Incident rather than a hazard or complementary information. The rollback and removal of the feature is a response but does not negate the incident classification.

Be Careful What You Tell ChatGPT: Your Chats Could Show Up on Google Search

2025-07-31
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) whose feature for sharing conversations has led to private chats becoming publicly accessible via search engines. This exposure constitutes a violation of privacy, a recognized harm under human rights and data protection frameworks. The harm is realized, not just potential, as users' personal and sensitive information is already publicly searchable. The AI system's design and use are directly linked to this harm, fulfilling the criteria for an AI Incident. The event is not merely a general AI news update or a potential risk but a concrete case of harm caused by the AI system's use and feature design.

ChatGPT chats are now appearing in Google search - here's how to stop Google from spying on your conversations

2025-07-31
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explains that ChatGPT conversations become publicly accessible and indexed by Google only when users explicitly use the Share feature, which generates a public URL. This is a use-related issue involving an AI system (ChatGPT) and concerns privacy and information exposure risks. However, no actual harm such as privacy violations or rights breaches is reported as having occurred. The event mainly serves to inform users about the consequences of sharing chats and how to avoid accidental public exposure. It does not describe an AI Incident or AI Hazard but provides complementary information about AI system use and its implications for user privacy and data exposure.

OpenAI kills ChatGPT feature that exposed personal chats on Google: All you need to know | Mint

2025-08-01
mint
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose feature caused direct harm by exposing private user conversations publicly, violating user privacy and potentially endangering users. The harm is realized as private data was indexed and accessible, constituting a violation of privacy rights and safety concerns. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use and its impact on users' privacy and safety.

Your public ChatGPT queries are getting indexed by Google and other search engines | TechCrunch

2025-07-31
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the sharing of AI-generated conversation links that become publicly accessible and indexed by search engines. While this raises privacy and information exposure concerns, there is no indication that the AI system malfunctioned, was misused maliciously, or caused harm directly or indirectly. The issue stems from user actions (sharing links) and search engine indexing policies rather than AI system failure or misuse. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. The article provides contextual information about AI system use and its societal implications, fitting the definition of Complementary Information.

ChatGPT chats will now show up in Google search, which is alarming - but there's an easy way to stop it from happening

2025-07-31
Windows Central
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) is involved as the source of the chat content that can be shared publicly. The event centers on the use of the share function, which generates URLs that Google indexes, potentially exposing user conversations. No direct harm such as privacy breaches or misuse is reported, but the possibility of privacy violations exists if users share sensitive information unknowingly. This fits the definition of an AI Hazard because it plausibly could lead to harm (privacy violations) due to the AI system's design and user interaction, but no harm has yet been documented. The article also notes a lack of clear communication from OpenAI, which contributes to the risk. Therefore, the event is best classified as an AI Hazard rather than an Incident or Complementary Information.

Exclusive: Google could be reading your ChatGPT conversations. Concerned? You should be

2025-07-30
Fast Company
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved as the source of the conversations, and the sharing mechanism combined with Google's indexing has led to the exposure of private user data. This constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized as private, sensitive information is publicly accessible, potentially causing harm to individuals' privacy and dignity. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the harm caused.

ChatGPT Users May Be Inadvertently Sharing Conversations in Search Results | PYMNTS.com

2025-07-31
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The event describes actual realized harm where personal and private conversations generated or held within AI systems have become publicly accessible without users' full understanding or consent. This constitutes a violation of privacy rights, which falls under human rights violations as per the framework. The AI systems' sharing and indexing features are central to this harm, as they enable the conversations to be exposed. Although the AI systems function as intended, the combination of their design and user interaction has led to privacy breaches. Hence, this is an AI Incident rather than a hazard or complementary information.

Your ChatGPT conversations may be visible in Google Search

2025-07-31
Search Engine Land
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and the sharing feature that allows conversations to be publicly accessible. The indexing by Google Search leads to unintended exposure of sensitive business and personal information, which constitutes a violation of privacy and potentially intellectual property rights. Since the harm (exposure of confidential data) has already occurred due to the AI system's use and sharing, this qualifies as an AI Incident under the framework, as it involves violations of rights and harm to individuals and organizations caused directly or indirectly by the AI system's use and sharing functionality.

Your Shared ChatGPT Conversations Are Being Indexed By Google

2025-08-01
WeRSM - We are Social Media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose sharing feature leads to user-generated content being publicly indexed by search engines, exposing potentially personal and sensitive information. This exposure is not the intended use by users and results in privacy violations, which fall under violations of human rights or legal obligations protecting personal data. The harm is indirect but real, stemming from the AI system's use and the platform's design choices. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Think your ChatGPT chats are private? Google disagrees - Phandroid

2025-08-01
Phandroid - Android News and Reviews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns privacy risks related to the sharing feature. However, the harm is not caused by the AI system malfunctioning or being misused in a way that leads to injury, rights violations, or other harms defined in the framework. Instead, it is a user behavior and web indexing issue. Since no actual harm has occurred and the risk is related to user sharing and indexing, this is best classified as Complementary Information providing context on privacy implications and user caution rather than an AI Incident or Hazard.

OpenAI removes ChatGPT feature after private conversations leak to Google search - RocketNews

2025-08-01
RocketNews | Top News Stories From Around the Globe
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to unintended exposure of private user data, constituting a violation of privacy and potentially human rights related to data protection. The harm (privacy breach) has already occurred due to the AI system's design and deployment. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (privacy violations).

Your Shared ChatGPT Conversations Are Google-Searchable. Here's Why That Matters.

2025-08-01
Shelly Palmer
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT) and concerns the use of a feature (sharing conversations) that leads to privacy risks. However, the article does not describe any actual harm occurring due to this exposure, only the potential for harm if sensitive information is shared publicly. The article advises caution and policy measures to mitigate risks but does not report a realized incident of harm or violation. Therefore, this is best classified as Complementary Information, as it provides important context and guidance about AI usage and privacy implications without describing a specific AI Incident or AI Hazard.

ChatGPT users shocked to find private therapy sessions exposed in Google searches

2025-08-02
Boing Boing
Why's our monitor labelling this an incident or hazard?
The incident directly involves an AI system (ChatGPT) and its sharing feature that caused private user conversations to be publicly accessible, leading to harm in the form of privacy violations and potential identification of individuals from sensitive data. This constitutes a breach of obligations intended to protect fundamental rights, specifically privacy rights. Therefore, it qualifies as an AI Incident under the framework because the AI system's use directly led to harm related to human rights violations.

OpenAI disables ChatGPT 'experiment' that allowed users to make exchanges available on search engines

2025-08-01
The Irish Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose configuration allowed private user data to be inadvertently exposed on the internet, causing harm to privacy and confidentiality. This constitutes a violation of rights and harm to individuals and communities. The harm has already occurred as sensitive information was made accessible via search engines. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and realized harm.

ChatGPT exposes personal chats of users on Google Search, OpenAI reacts

2025-08-01
MoneyControl
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in the development and use of a feature that led to the exposure of personal user data through search engine indexing. This exposure caused a violation of privacy rights, a form of harm to individuals. The harm has already occurred as personal chats were publicly accessible and viewed by strangers. OpenAI's response to fix the issue is noted but does not negate the fact that harm took place. Therefore, this event qualifies as an AI Incident due to the realized harm stemming from the AI system's use and feature design.

ChatGPT chats were showing up on Google, but OpenAI says it's all good now

2025-08-01
The Indian Express
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used to generate conversations that users shared via a feature that made them discoverable by search engines. Many users unintentionally exposed sensitive personal information because the default or user interface design led to accidental sharing. This resulted in a breach of privacy and potential violation of rights, fulfilling the criteria for harm under human rights or privacy violations. The harm is realized, not just potential, and the AI system's design and use directly contributed to this outcome. Hence, this is an AI Incident.

OpenAI pulls chat sharing tool after Google search privacy scare

2025-08-01
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose sharing feature caused private conversations to be indexed by search engines, leading to direct harm through exposure of sensitive information such as mental health disclosures, criminal confessions, and proprietary data. This constitutes a violation of privacy rights and harm to individuals and communities. The harm has already occurred, and the AI system's design and use directly contributed to it. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI is removing ChatGPT conversations from Google

2025-08-01
engadget
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that made AI-generated conversations publicly discoverable via search engines. Although no direct harm such as data breach or explicit privacy violation occurred, the feature posed a plausible risk of privacy harm by exposing potentially sensitive user-generated content unintentionally. The removal of the feature is a mitigation response to this risk. Since no actual harm has been reported but a credible risk existed, this qualifies as Complementary Information about a governance and safety response to a potential AI-related privacy hazard rather than an AI Incident or AI Hazard itself.

ChatGPT Personal Chats Leaked On Google: How It Happened, OpenAI CEO Responds, And What Users Should Do

2025-08-02
Zee News
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, and its use (sharing feature) indirectly led to harm in the form of privacy violations and exposure of sensitive personal data. This constitutes a violation of users' rights to privacy, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Since the harm has already occurred (private chats exposed publicly), this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT, thousands of conversations visible on Google: what happened and how to fix it

2025-08-01
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved as the conversations are generated by it. However, the incident does not stem from a malfunction or misuse of the AI system itself but from user-enabled sharing settings that made conversations publicly accessible and indexable by search engines. The harm is indirect and relates to privacy risks from publicly shared content, but no unauthorized data leak or AI failure caused the exposure. OpenAI's response to disable the feature and remove indexed content is a mitigation step. Since no direct or indirect harm caused by AI malfunction or misuse occurred, and the event mainly concerns privacy and data sharing settings with user consent, this is best classified as Complementary Information about a privacy-related issue and OpenAI's response, rather than an AI Incident or Hazard.

Feature, not a bug: OpenAI kills ChatGPT public chat search after users overshared weird, personal stuff

2025-08-01
India Today
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its feature enabling public sharing of conversations. The harm realized is a violation of user privacy, a breach of rights related to personal data protection, as private information was exposed publicly without full user awareness. This constitutes a violation of human rights and privacy obligations, thus meeting the criteria for an AI Incident. The event describes actual harm that occurred due to the AI system's use and the company's response to mitigate it.

OpenAI pulls ChatGPT feature that let user chats appear in Google Search results

2025-08-01
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) and its feature that led to the direct exposure of private user conversations, including sensitive topics, in public search results. This exposure constitutes harm to users' privacy and potentially breaches legal obligations regarding data protection and user consent. The harm has materialized, not just a potential risk, as thousands of conversations were indexed and publicly accessible. The AI system's design and use (the 'Share' feature with discoverability option) directly led to this harm. Hence, it meets the criteria for an AI Incident under violations of rights and harm to communities.

Private ChatGPT conversations leak onto Google search results

2025-08-01
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use has directly led to the exposure of private conversations containing sensitive information, which constitutes a violation of privacy and potentially breaches legal confidentiality and fundamental rights. The harm is realized as private data has been leaked and made publicly accessible, which fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The company's response to disable the feature and remove indexed content is complementary information but does not negate the incident classification.

ChatGPT Conversations Will No Longer Appear On Google Search, Thanks To Users Oversharing Personal Stuff In Chats

2025-08-01
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and concerns about privacy harms due to personal information being publicly accessible via search engines. However, the article describes a mitigation action (removal of the feature) in response to these concerns rather than a specific incident of harm occurring or a plausible future harm event. Therefore, this is best classified as Complementary Information, as it provides an update on governance and privacy-related responses to AI system use rather than reporting a new AI Incident or AI Hazard.

You Might Have Sent Your ChatGPT Conversations to Google

2025-08-01
Lifehacker
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to direct harm in the form of privacy breaches when users' shared conversations were indexed by Google and became publicly searchable. This constitutes a violation of privacy rights, a form of harm to individuals. The harm is realized, not just potential, as the conversations were accessible and included sensitive content. The incident stems from the AI system's use and the design of the sharing feature. Therefore, this qualifies as an AI Incident under the framework.

ChatGPT chats ended up on Google: feature pulled after backlash

2025-08-01
NTV
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in enabling the sharing of user conversations that were then indexed by search engines, leading to privacy violations and exposure of sensitive personal information. This constitutes a violation of user privacy rights, a breach of obligations to protect fundamental rights, and harm to individuals' confidentiality. The harm has already occurred as private conversations became publicly accessible. Therefore, this qualifies as an AI Incident due to the direct role of the AI system's feature in causing realized harm to users' privacy.

ChatGPT chats were appearing on Google; OpenAI says, "We have just removed a feature that...

2025-08-01
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article describes a feature in ChatGPT that allowed public sharing of conversations, which were then indexed by search engines, potentially exposing user data unintentionally. However, it does not report any actual harm occurring from this exposure, only the risk and the subsequent removal of the feature. The focus is on OpenAI's mitigation action and collaboration with search engines to remove indexed content. This fits the definition of Complementary Information, as it provides an update on a response to a previously identified risk rather than describing a new AI Incident or AI Hazard.

ChatGPT Removes The Option To Make Your Conversation Discoverable In Search Engines

2025-08-01
Lowyat.NET
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its feature allowing conversation sharing and discoverability. The event involves the use of the AI system and its impact on user privacy, as private conversations were inadvertently exposed through search engine indexing. This exposure constitutes a violation of privacy rights, a form of harm to users. However, the feature was quickly removed, and indexed content was pulled, indicating mitigation. Since harm occurred (privacy breaches) due to the AI system's use and sharing features, this qualifies as an AI Incident under violations of rights. The event is not merely a product update or general news, but concerns realized harm from AI system use.

OpenAI removes ChatGPT share tool after privacy risks, Google indexing

2025-08-01
Business Standard
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) and its feature for sharing conversations. The feature's use directly led to harm in the form of privacy violations, as sensitive personal information was unintentionally exposed and indexed publicly. This constitutes a violation of privacy rights, a form of harm to individuals. The removal of the feature and delisting efforts are responses to this realized harm. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to users' privacy.

OpenAI backtracks after ChatGPT users' messages appear in Google searches

2025-08-01
Terra
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its conversation sharing feature. The use of this feature led to the direct exposure of private user conversations, some containing sensitive or confidential information, which were then indexed by Google and became publicly searchable. This constitutes a violation of user privacy and potentially breaches confidentiality rights, which falls under harm to human rights and privacy. The harm has already occurred, as users' private data was exposed without their full understanding of the consequences. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use and the failure of its privacy safeguards.

Our ChatGPT conversations could end up in Google searches: OpenAI puts a stop to it

2025-08-01
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the accidental exposure of personal and sensitive user data through public sharing and indexing by search engines. This constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Since harm to users' privacy has occurred due to the AI system's use, this qualifies as an AI Incident. The article focuses on the harm caused and the company's response, not just a general update or future risk, so it is not merely Complementary Information.

OpenAI pulls ChatGPT feature that showed personal chats on Google

2025-08-01
Fast Company
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature design and use led to unintended exposure of private user data, causing harm to user privacy and potentially violating rights related to data protection. The harm (privacy breach) has already occurred, and the AI system's design and use directly contributed to it. Therefore, this qualifies as an AI Incident involving violation of rights and harm to individuals' privacy.

ChatGPT users shocked to learn their chats were in Google search results

2025-08-01
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event describes a situation where the use of an AI system (ChatGPT) directly led to the exposure of private, sensitive user conversations in public search results, which constitutes a violation of privacy and potentially human rights related to data protection. The harm is realized as users' personal information was made publicly accessible without clear informed consent, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The AI system's feature design and deployment are central to this harm, making this an AI Incident rather than a hazard or complementary information.

ChatGPT conversation links are appearing on Google; here's how to avoid it | Tecnoblog

2025-07-31
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT) and the sharing of AI-generated conversation links that have been indexed by a search engine, leading to exposure of private information. This constitutes a violation of privacy rights, a form of harm to individuals' rights and potentially to communities. The harm has already occurred as private conversations are publicly accessible and indexed, which fits the definition of an AI Incident. The AI system's use (sharing conversations) and the resulting indexing have directly led to this harm. Therefore, this is classified as an AI Incident.

ChatGPT removes feature that showed private conversations on Google

2025-08-01
TecMundo
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its feature allowing sharing of conversation links that became publicly accessible and indexed by search engines. The event concerns the use of the AI system and its impact on user privacy, which is a violation of privacy rights (a human rights violation). Since private conversations were made publicly accessible without adequate user awareness, this constitutes harm to individuals' privacy. The removal of the feature is a response to this harm. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to a violation of rights through unintended public exposure of private data.

ChatGPT conversations appearing in Google Search -- here's how to locate and remove them

2025-08-01
The Hans India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature that allows sharing conversations publicly. The sharing and subsequent indexing by search engines have directly led to harm through exposure of private and sensitive data, which can be considered harm to individuals' privacy and reputations. This fits the definition of an AI Incident because the AI system's use (the Share feature) has directly led to violations of privacy and potential reputational harm, which are forms of harm to persons and communities. Therefore, this event qualifies as an AI Incident.

ChatGPT conversations indexed on Google: the AI that is "spying" on us

2025-08-01
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its use and the sharing feature that allowed conversations to be publicly indexed. However, the harm described is indirect and relates to privacy risks from user-enabled sharing rather than a malfunction or misuse of the AI system itself. The article reports on a privacy concern that has materialized but does not describe a direct AI-caused harm such as injury, rights violation by the AI system itself, or other significant harms. The main focus is on the privacy implications and OpenAI's response to mitigate the issue. Therefore, this is best classified as Complementary Information, as it provides an update on a privacy-related issue and the company's mitigation efforts rather than reporting a new AI Incident or Hazard.

ChatGPT: thousands of conversations visible on Google; what happened and how to fix it

2025-08-01
lastampa.it
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its feature allowing users to share conversations via links that could be indexed by search engines if users explicitly consented. The event involves the use of the AI system and its design choices leading to the exposure of some sensitive information publicly. However, this exposure was due to user consent and sharing, not a malfunction or unauthorized data leak by OpenAI. The harm (privacy concerns) has occurred but is limited and mitigated by user control and OpenAI's corrective actions. Since the event involves realized privacy concerns linked to the AI system's use and design, it qualifies as an AI Incident related to violations of privacy rights (a subset of human rights).

Google indexing exposed: Is it safe to use ChatGPT? 7 things to know

2025-08-01
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The article focuses on the exposure of ChatGPT shared conversations through search engine indexing, which is a privacy and trust issue related to how content is managed and made accessible. While this raises important concerns about privacy and regulatory scrutiny, it does not describe an AI system causing harm or a plausible future harm event. The AI system (ChatGPT) itself did not malfunction or cause harm; rather, the issue arose from the public sharing and indexing policies. Hence, this is best classified as Complementary Information providing context on AI ecosystem challenges and responses.

Private ChatGPT conversations appear on Google and trigger a global alert

2025-08-01
InfoMoney
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) whose use and a specific feature ('Make this chat discoverable') caused private, sensitive user data to be publicly accessible, leading to harm in terms of violation of privacy and potentially human rights. The harm has already occurred as sensitive personal information was exposed. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the realized harm (privacy breach and exposure of sensitive data).

OpenAI Disables ChatGPT Search Sharing Feature Amid Privacy Concerns

2025-08-01
The Hans India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed users to share AI-generated conversations publicly. Despite opt-in design, sensitive personal information was unintentionally exposed through search engine indexing, leading to privacy harms. This exposure is a violation of user privacy rights and constitutes harm to individuals. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The company's removal of the feature and coordination with search engines is a response to this realized harm, but the incident itself is the exposure caused by the AI system's sharing feature.

Deeply personal ChatGPT conversations leaked into Google searches

2025-08-01
PCWorld
Why's our monitor labelling this an incident or hazard?
The event describes how shared ChatGPT conversations, which can contain deeply personal information, were inadvertently made searchable on the open web due to indexing by Google. This exposure of sensitive personal data constitutes a violation of privacy and can be considered harm to individuals. The AI system's sharing feature and its interaction with search engines directly led to this harm. Although users had to manually share the conversations, the design and implementation of the feature allowed for accidental widespread exposure. OpenAI's subsequent removal of the indexing option and efforts to remove content from search engines indicate recognition of the harm caused. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use and feature design.

OpenAI Flips Off Switch After Google Indexes Private Chats In Search

2025-08-01
MediaPost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose feature allowed private conversations to be indexed by search engines, leading to the exposure of sensitive personal and business information. This exposure constitutes a violation of privacy rights and potentially other legal protections, fulfilling the criteria for harm under AI Incident definition (c) regarding violations of human rights or breach of obligations protecting fundamental rights. The harm has already occurred as private data became publicly searchable. OpenAI's removal of the feature is a response to this incident but does not negate the fact that harm took place. Hence, the event is classified as an AI Incident.

Anyone could have accessed your ChatGPT conversations through Google, but only if you had made them "public"

2025-08-01
Genbeta
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, as the conversations are generated by it. The harm is a violation of user privacy rights due to the indexing and public exposure of conversations that users shared publicly, which is a breach of obligations to protect fundamental rights. This harm has already occurred as users' sensitive information was accessible via search engines. The event is not merely a potential risk but a realized incident, and OpenAI's response is a mitigation measure. Therefore, this qualifies as an AI Incident under the framework.

Worrying About Your Chats With ChatGPT? Make Your ChatGPT Conversations Private THIS Way And Keep Your Data Safe

2025-08-01
Techlusive
Why's our monitor labelling this an incident or hazard?
The article focuses on user privacy and data management related to ChatGPT, an AI system, but does not describe any realized harm or incident caused by the AI system. It also does not describe a plausible future harm scenario beyond general privacy concerns already addressed by OpenAI. Therefore, it is best classified as Complementary Information, as it provides context and advice on managing AI-related privacy risks without reporting a new AI Incident or AI Hazard.

OpenAI backtracks after ChatGPT users' messages appear in Google searches

2025-08-01
Estadão
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature for sharing conversations, which led to user messages being publicly indexed by Google. This caused direct harm to users' privacy and confidentiality, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, not just potential, as sensitive data was exposed and searchable. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT removes feature that let shared chats be indexed by search engines

2025-08-01
gHacks Technology News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that allowed shared AI-generated chat content to be discoverable via search engines, potentially leading to privacy harms. Although no direct harm is reported, the feature's existence posed a plausible risk of privacy violations (harm to individuals' privacy and potentially human rights). The removal of the feature and the company's response indicate mitigation of this risk. Since the harm was potential and the article focuses on the response and mitigation rather than an actual realized harm, this event is best classified as Complementary Information.

Your ChatGPT conversations may be on Google: this feature is to blame

2025-07-31
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing conversations) has directly led to harm in the form of privacy violations when private data is included in shared chats that become indexed by search engines. The harm is realized, not just potential, as the indexing and public accessibility of these conversations is occurring. This fits the definition of an AI Incident due to violation of rights (privacy) caused by the AI system's use and its sharing functionality. The event is not merely a hazard or complementary information, as the harm is ongoing and directly linked to the AI system's operation and user interaction with its features.

An experiment leaked your ChatGPT conversations to Google: anyone can read them

2025-08-01
SoftZone
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature malfunction or misconfiguration caused users' private conversations to be publicly indexed by search engines, leading to a violation of privacy rights. This is a direct harm linked to the AI system's use and to the design of its sharing feature. The harm is realized, not just potential, as personal conversations were exposed. Therefore, this qualifies as an AI Incident due to a violation of rights (a privacy breach) caused by the AI system's malfunctioning feature.

ChatGPT conversations appeared in Google's search results

2025-08-01
Diario El Telégrafo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and concerns the sharing and indexing of AI-generated conversation data. Although no breach or unauthorized data leak occurred, the indexing of publicly shared chats led to potential privacy harms by exposing user content and possibly personal data in search engines. This constitutes a violation of privacy rights, a subset of human rights, due to the AI system's design and use allowing unintended exposure. Since the harm has occurred (exposure of personal data via search engines), this qualifies as an AI Incident. The company's response to disable indexing is a mitigation but does not negate the incident classification.

Private Chats with ChatGPT Exposed in Yet Another Privacy Blunder

2025-08-01
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The event describes a privacy blunder involving an AI system (ChatGPT) where private conversations were unintentionally made public due to user misunderstanding of a sharing feature. While this led to exposure of sensitive data, the incident stems from user error and feature design rather than a direct malfunction or malicious use of the AI system. The company has taken steps to mitigate the issue. This fits the definition of Complementary Information as it provides context and updates on an AI-related privacy concern and the response, rather than describing a new AI Incident or Hazard.

ChatGPT Privacy Feature Pulled After Data Leak Hits Google Search

2025-08-01
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use of a privacy feature directly led to the exposure of personal user conversations publicly indexed by a search engine, constituting a data leak and harm to user privacy. This is a violation of user rights and harms individuals' privacy, fitting the definition of an AI Incident due to realized harm from the AI system's use and its consequences.

OpenAI removes ChatGPT self-doxing option

2025-08-01
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed users to make their conversations publicly indexable by search engines. Despite warnings, users shared sensitive information that became publicly discoverable, causing privacy harms. The harm is realized, not just potential, as personal information was found in search results. The AI system's design and use directly contributed to this harm. The company's removal of the feature and efforts to remove indexed content are responses to this incident. Therefore, this is an AI Incident involving violation of privacy rights due to the AI system's use.

What is happening with ChatGPT and Google?

2025-08-01
MuyComputer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use, specifically the sharing feature that led to unintended indexing by search engines. However, the harm is limited to potential privacy exposure of user-shared conversations, with no evidence of unauthorized access or direct harm to users. OpenAI has taken corrective action promptly. Since no realized harm or violation has occurred, and the event mainly serves as a cautionary note and reflection on privacy implications, it fits best as Complementary Information rather than an Incident or Hazard.

ChatGPT kills Google-indexable chats over privacy fears

2025-08-01
Search Engine Land
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and concerns privacy risks related to accidental data exposure through a feature that made chats publicly indexable. However, no actual harm or data breach is reported as having occurred; the feature was removed proactively to prevent potential harm. This constitutes a plausible risk of harm (privacy/data leaks) due to the AI system's use, but no realized harm is described. Therefore, this is best classified as Complementary Information about a governance and product response to a potential AI-related privacy hazard, rather than an AI Incident or AI Hazard itself.

Understand what caused ChatGPT users' conversations to appear on Google by mistake

2025-08-01
nsctotal.com.br
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature caused private user conversations to be indexed and exposed publicly on Google, leading to harm through privacy violations and exposure of sensitive personal data. This constitutes a violation of users' rights and harm to individuals, fitting the definition of an AI Incident. The harm is realized, not just potential, as private information was publicly accessible. The company's response to remove the feature and delist content is a mitigation step but does not negate the incident classification.

OpenAI removes ChatGPT feature exposing private chats on Google Search

2025-08-01
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to exposure of private user data on public search engines, constituting a violation of privacy rights, a form of harm to individuals. Since the harm (privacy exposure) has already occurred and the company is removing the feature to mitigate it, this qualifies as an AI Incident due to the realized harm from the AI system's use.

ChatGPT chat leak: a Uruguayan's warning and OpenAI's reaction

2025-08-01
El Observador
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its conversation sharing feature. The issue stems from the use of the AI system where users manually share conversations that then become publicly indexable, leading to privacy violations. Although the harm is indirect and results from user behavior combined with system design, the exposure of sensitive information constitutes a violation of privacy rights, a form of harm to individuals. OpenAI's removal of the feature and efforts to remove indexed content are responses to this issue. Since the harm is realized (privacy breaches) and directly linked to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.

Your ChatGPT conversations could be read by anyone, and that should worry you

2025-07-31
Slate.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how the use of AI chatbots leads to thousands of private conversations being publicly accessible, exposing sensitive personal data. The AI system's use (sharing conversation URLs) directly leads to harm in the form of privacy violations and potential identification of individuals from their data. This harm is realized and ongoing, not merely potential. Therefore, it meets the criteria for an AI Incident due to violations of human rights and privacy obligations caused by the AI system's use and the platform's sharing mechanism.

ChatGPT conversations leak on Google, exposing personal data and even intimate questions; here's how to protect yourself

2025-08-01
Hugo Gloss
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature led to the unintended public exposure of private conversations containing sensitive personal information. This exposure constitutes a violation of privacy rights and potentially harms individuals' well-being, fitting the definition of an AI Incident under violations of human rights or harm to individuals. The harm has materialized, not just a potential risk, and the AI system's use and design directly contributed to this harm. Therefore, this is classified as an AI Incident.

These ChatGPT conversations were visible on Google, but that is about to change

2025-08-01
Numerama.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use (sharing conversations with an indexable option). The feature's deployment led to indirect harm by exposing personal and confidential data publicly, which constitutes a violation of privacy and potentially breaches data protection rights. Although OpenAI has reversed the feature, the harm occurred while the feature was active. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and its impact on users' privacy and data confidentiality.

Be careful about sharing ChatGPT chats; they could end up on Google

2025-08-01
Wired
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the sharing and indexing of AI-generated conversations. However, the potential privacy risk arises from user decisions to make chats publicly shareable and searchable, not from the AI system malfunctioning or being misused by the provider. There is no indication of realized harm such as injury, rights violations, or disruption caused by the AI system itself. The article primarily provides information and a warning about privacy implications, which fits the definition of Complementary Information rather than an Incident or Hazard.

ChatGPT: thousands of private conversations appear on Google

2025-08-01
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature led to the direct exposure of private user data, causing harm through privacy violations and potential legal breaches. The harm is realized, not just potential, as sensitive information was publicly accessible. This fits the definition of an AI Incident because the AI system's use and malfunction directly led to harm to individuals' rights and privacy. The article also details mitigation efforts, but the primary event is the incident itself, not just a complementary update.

ChatGPT: OpenAI reacts after conversations show up on Google

2025-08-01
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of private user conversations. This exposure constitutes a violation of users' privacy rights, a form of harm to individuals. The harm is direct and realized, as private data was accessible publicly. The incident stems from a human factor in the use of the AI system's sharing feature, which was not properly understood by users, leading to the breach. Hence, it meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

ChatGPT: public conversations were ending up on Google

2025-08-01
Punto Informatico
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its feature allowing public sharing of conversations. The use of this feature led to indirect harm to users' privacy and potential violations of their rights, as sensitive personal information became publicly accessible and searchable. The harm occurred due to the AI system's use and the unintended indexing by search engines, which was not clearly communicated to users. This fits the definition of an AI Incident because the AI system's use directly led to harm to individuals' privacy and potentially their rights.

OpenAI Halts ChatGPT Sharing After Search Engines Expose Sensitive Data

2025-08-01
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of sensitive personal and business information through search engine indexing. This exposure constitutes a violation of privacy rights, a form of harm to individuals and communities. The harm has already occurred, as thousands of chats were indexed and accessible publicly. OpenAI's disabling of the sharing feature is a mitigation response to this realized harm. Hence, the event meets the criteria for an AI Incident because the AI system's use directly led to violations of rights and harm to users.

Public ChatGPT conversations are now infiltrating Google search results...

2025-08-01
Fredzone
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose publicly shared conversations are indexed by Google, leading to exposure of personal and sensitive information. This exposure constitutes a violation of privacy, a human right, and thus a breach of obligations intended to protect fundamental rights. The harm is realized, not just potential, as users' identities can be inferred from the shared content. The AI system's role is pivotal because the conversations are generated by ChatGPT and shared via its platform, and the indexing by Google amplifies the harm. Although sharing is user-initiated, the lack of adequate safeguards and the automatic indexing by Google contribute to the harm. Hence, this is an AI Incident.

ChatGPT conversations appear in Google; here's why it's alarming

2025-08-01
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns privacy risks due to user-shared URLs being indexed by Google. However, the harm is not caused by the AI system malfunctioning or being misused by the system itself; rather, it is a consequence of user actions and search engine indexing. There is no indication of realized harm such as privacy violations caused by the AI system's operation, nor is there a plausible future harm directly caused by the AI system beyond the existing user behavior. The article serves as a warning and advice on privacy practices, which fits the category of Complementary Information rather than an Incident or Hazard.

OpenAI let Google index conversations, then quickly backtracked

2025-08-01
MacGeneration
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose deployment led to the inadvertent exposure of personal and confidential information through search engine indexing. This exposure constitutes a violation of privacy rights and potentially breaches confidentiality agreements, which falls under violations of human rights or breach of obligations under applicable law. Since the harm (privacy breach) has already occurred due to the AI system's use and the indexing feature, this qualifies as an AI Incident.

ChatGPT Chats "Leaked" in Google Search After Discoverable Feature Misfires

2025-08-01
Reclaim The Net
Why's our monitor labelling this an incident or hazard?
The event describes how an AI system's feature malfunction or design choice (the discoverable chats setting) caused private, sensitive user conversations to be publicly indexed and searchable, leading to privacy breaches and potential emotional harm. The AI system's involvement is explicit, and the harm (privacy violation and exposure of sensitive personal information) has already occurred. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights (privacy) and harm to communities.

ChatGPT: public conversations can appear in Google searches

2025-08-01
Em Tempo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use of a public sharing feature caused direct harm by exposing sensitive user information through search engine indexing. This is a violation of privacy rights and thus fits the definition of an AI Incident. The harm is realized, not just potential, as users' private data was accessible publicly without their full consent. Therefore, this is classified as an AI Incident.

ChatGPT Leaks Private Chats On Google

2025-08-01
TechRound
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature for sharing conversations led directly to the exposure of private and sensitive information on the internet, constituting harm to individuals' privacy and potentially violating rights related to confidentiality and data protection. The harm has already occurred as personal and company details were publicly accessible. The AI system's design and use (the Share feature) were central to this harm, fulfilling the criteria for an AI Incident. The company's response to remove the feature and address the indexed content is complementary but does not negate the incident classification.

OpenAI Removes Controversial ChatGPT Search Indexing Feature After Private Chats Surface on Google

2025-08-01
https://www.outlookbusiness.com/
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a direct harm: the unintended public exposure of private user conversations containing personal and identifying information. This constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and its feature design.

Scandal from OpenAI! ChatGPT chats ended up on Google

2025-08-02
TV100
Why's our monitor labelling this an incident or hazard?
The event describes how an AI system (ChatGPT) had a feature that allowed user conversations to be indexed by search engines, leading to private and sensitive information becoming publicly accessible. This exposure of private data is a violation of privacy rights, a form of harm to individuals. The AI system's design and use directly led to this harm. The feature was removed after public backlash, but the harm had already materialized. Hence, this is an AI Incident due to realized harm caused by the AI system's use and design.

Search Engines are Indexing ChatGPT Conversations! Here is our OSINT Research

2025-08-01
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT) and their outputs being indexed by search engines, leading to privacy concerns. However, there is no explicit or implicit description of harm caused by the AI system's development, use, or malfunction leading to injury, rights violations, or other harms. Nor is there a clear plausible future harm scenario described. The main focus is on reporting the discovery and research findings, which fits the definition of Complementary Information as it provides context and understanding about AI's societal impact without describing a new incident or hazard.

Search Engines Are Indexing ChatGPT Chats -- Here's What Our OSINT Found

2025-08-01
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs (private conversations) have been inadvertently made public through indexing by search engines. This exposure leads to harm in terms of privacy violations and potential breaches of confidentiality, which fall under violations of human rights or legal protections. Therefore, this is an AI Incident as the AI system's use has directly led to harm through privacy breaches.

OpenAI disables chat discoverability after private conversations found in Google Search

2025-08-01
Tech Digest
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its feature that enabled conversations to be discoverable publicly. Although users had to opt-in, many were unaware of the privacy implications, leading to thousands of private and sensitive chats being exposed. This exposure constitutes a violation of privacy rights and harm to individuals' personal data, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The harm has already occurred as private information was publicly accessible, making this an AI Incident rather than a hazard or complementary information.

OpenAI boosts privacy by deleting chats

2025-08-01
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the inadvertent public exposure of private user conversations through search engine indexing. This exposure caused realized harm to users' privacy and confidentiality, which falls under violations of human rights and privacy protections. The harm is directly linked to the AI system's use and the design of the sharing feature. OpenAI's removal of the feature and cleanup efforts are responses to this incident. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

ChatGPT Chats Removed from Search Engines

2025-08-01
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature for sharing conversations led to private and sensitive user information being publicly accessible via search engines, constituting a violation of privacy and potentially human rights. This is a direct harm caused by the use of the AI system's sharing functionality. The removal of the feature and cleanup efforts are responses to this realized harm. Therefore, this qualifies as an AI Incident due to the direct harm to users' privacy and potential exposure of personal information through the AI system's use.

Chats Pose a Risk to User Data

2025-08-01
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) whose use led to the exposure of private user conversations on public search engines, causing harm to users' privacy and potentially violating legal obligations. The harm has already occurred as users' sensitive information was accessible publicly. OpenAI's removal of the feature and cleanup efforts are responses to this incident. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and its impact on user privacy rights.

OpenAI blocked chats from appearing on Google

2025-08-01
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to private user conversations being indexed by search engines, causing privacy harms (a form of violation of rights). Although harm occurred previously when the feature was active, the article focuses on OpenAI's removal of the indexing option and cleanup efforts, which are responses to the prior issue. There is no new harm described here, but rather a mitigation step. Hence, it fits the definition of Complementary Information, providing an update on a past AI Incident and the governance response.

ChatGPT Chats Pulled from Search Engines

2025-08-01
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses harms related to privacy and exposure of sensitive user data through AI-generated content shared publicly. These harms have already occurred, as users' private conversations were indexed and accessible via search engines, constituting a violation of privacy rights. However, the main focus of the article is on OpenAI's decision to remove the indexing feature and clean existing data, which are mitigation and governance responses to the prior harms. There is no new incident of harm described; rather, the article details the company's corrective actions and similar responses by other companies. Thus, the event is Complementary Information, not a new AI Incident or AI Hazard.

ChatGPT Chats Removed from Google

2025-08-01
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the inadvertent public exposure of private user conversations through search engine indexing, constituting a violation of privacy rights, a form of human rights violation. This harm has already occurred as users' sensitive information was accessible publicly. The article focuses on the mitigation steps taken by OpenAI to remove the feature and clean up indexed content, but the primary issue is the realized harm from the AI system's use. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use.

Protect Your Privacy: ChatGPT Conversations May Appear in Google Results - czechjournal.cz

2025-08-01
The Czech Journal
Why's our monitor labelling this an incident or hazard?
The article centers on the potential for AI chatbot conversations to be exposed publicly due to indexing by search engines, which is a plausible risk but no actual harm or incident is described. It emphasizes privacy concerns and the need for protective measures but does not document a realized AI Incident or a specific AI Hazard event. Therefore, it fits best as Complementary Information, providing context and guidance related to AI privacy issues without reporting a concrete incident or hazard.

Users' personal conversations with ChatGPT leak online by mistake | CNN Brasil

2025-08-02
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The event directly involves an AI system (ChatGPT) whose use led to the unintended exposure of sensitive personal data, constituting harm to individuals' privacy and potentially violating rights related to confidentiality and data protection. The harm has already occurred as private conversations were publicly accessible via search engines. The AI system's feature design and deployment (the 'Make this chat discoverable' option) caused this incident, making it an AI Incident. The company's mitigation efforts are responses to this incident, not the primary focus of the article, so the classification remains AI Incident rather than Complementary Information.

Google displays users' private interactions with ChatGPT

2025-08-01
O Antagonista
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the direct exposure of private, sensitive user data through a feature that allowed conversations to be shared publicly. This exposure harmed users by violating their privacy rights and potentially causing emotional and reputational damage. The harm is realized, not merely potential, as thousands of conversations were indexed and accessible. The AI system's design and deployment decisions contributed to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. OpenAI's response of disabling the feature and remediating the issue is a reaction to the incident, not the main event itself.

Shelly Palmer: Your shared ChatGPT conversations are Google-searchable

2025-08-01
SaskToday.ca
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its sharing feature. The harm is related to privacy violations and potential exposure of sensitive or confidential information through indexed shared conversations. Although no specific harm event is reported, the risk of harm to individuals' privacy and organizational confidentiality is credible and plausible. Therefore, this situation fits the definition of an AI Hazard, as the development and use of the AI system's sharing feature could plausibly lead to incidents of privacy harm. It is not an AI Incident because no direct or indirect harm has yet occurred or been documented in the article. It is not Complementary Information because the article is not providing updates or responses to a prior incident but raising awareness of a potential risk. It is not Unrelated because the AI system and its use are central to the issue.

ChatGPT chats end up on Google - Olay Gazetesi Bursa

2025-08-01
Olay Gazetesi Bursa
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature to share conversations publicly led to direct harm by exposing private user data on the internet, violating privacy rights and potentially causing harm to individuals. The harm has materialized as private conversations became publicly searchable, which fits the definition of an AI Incident due to violation of rights and harm to communities. The removal of the feature and cleanup are responses but do not negate the incident classification.

ChatGPT Conversations are Being Indexed by Search Engines! - Here Is our OSINT Research

2025-08-01
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing conversations with the discoverable option) directly led to harm in the form of privacy violations and exposure of sensitive personal and business information. This constitutes a violation of rights and harm to individuals and communities. The harm is realized, not just potential, and the AI system's design and feature enabled this exposure. Therefore, this qualifies as an AI Incident under the framework.

Your Shared ChatGPT Conversations Could Have Been Listed on Google Search

2025-08-01
Gadgets 360
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved through a feature whose use led to the unintended exposure of personal data via search engine indexing. This exposure constitutes a violation of privacy rights, a form of harm to individuals' rights under applicable law. Since the harm (a privacy breach) has already occurred due to the AI system's use, this qualifies as an AI Incident: direct harm caused by the AI system's feature and its failure to protect user privacy.

Thousands Of ChatGPT Conversations Leaked On Google Search: How To Check If You Were Affected, How To Delete ChatGPT Chats, How To Stay Safe

2025-08-01
Techlusive
Why's our monitor labelling this an incident or hazard?
The event describes a direct harm caused by the use of an AI system (ChatGPT) where private conversations containing sensitive information were made publicly accessible due to a feature that allowed indexing by search engines. This is a clear violation of privacy rights, a fundamental human right, and the harm has materialized as users' personal data was exposed without proper consent or awareness. The AI system's design and use directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The incident is not merely potential or a response update but a realized privacy breach caused by the AI system's functionality.

OpenAI Kills ChatGPT Feature After Privacy Leak

2025-08-01
implicator.ai
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed user conversations to be indexed by search engines, leading to direct harm through privacy violations and exposure of sensitive information. The harm is realized, not just potential, as private conversations including personal, therapy-related, and business-sensitive content were publicly accessible. This constitutes a violation of privacy rights and harm to individuals and communities. The AI system's design (checkbox for discoverability) and its use directly led to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI Ends ChatGPT Sharing Feature After Privacy Concerns

2025-08-01
The Crypto Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns privacy risks arising from the use of a sharing feature. However, there is no indication that actual harm (such as injury, rights violations, or other significant harms) has occurred, only that there was a potential for unintended privacy exposure. The feature was removed as a precaution to prevent such harms. Therefore, this event describes a plausible risk of harm due to the AI system's use, but no realized harm is reported. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as the main focus is on the potential privacy risks and the removal of the feature to mitigate them.

OpenAI removes ChatGPT's "Discoverable" feature over privacy concerns: "Too many opportunities... to accidentally share things"

2025-08-01
Benzinga France
Why's our monitor labelling this an incident or hazard?
The article focuses on the removal of a feature to prevent potential privacy violations and accidental exposure of private conversations. While the feature's existence posed a plausible risk of harm to user privacy, no actual harm or incident is reported. The event is primarily about OpenAI's response to these concerns and their mitigation efforts, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

ChatGPT conversations are being indexed by Google

2025-08-01
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and sharing features have directly led to the exposure of personal data, including sensitive information such as medical and financial details. This constitutes a violation of privacy and potentially breaches data protection laws, which falls under violations of human rights or legal obligations protecting fundamental rights. The harm is realized as users' private conversations become publicly accessible without their informed consent. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the harm caused.

How to Make ChatGPT Chats Private: Disable History & Manage Shared Links

2025-08-01
Bangla news
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific event where an AI system caused harm or where harm is plausibly imminent. Instead, it offers instructions and explanations about privacy controls and data management related to ChatGPT usage. There is no mention of an incident or hazard involving AI malfunction, misuse, or development leading to harm. The focus is on user guidance and awareness, which fits the definition of Complementary Information as it supports understanding of AI system use and privacy implications without reporting a new incident or hazard.

ChatGPT chats end up on Google: Removed after backlash

2025-08-01
F5Haber
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowing shared conversations to be indexed by search engines led to the unintended public exposure of private user data, which constitutes a violation of privacy rights (a breach of obligations under applicable law protecting fundamental rights). This harm has already occurred, as evidenced by the public availability of sensitive conversations. OpenAI's removal of the feature and cleanup efforts are responses to this incident. Therefore, this qualifies as an AI Incident due to realized harm linked directly to the AI system's use and its impact on users' privacy.

OpenAI removes a ChatGPT feature that exposed user conversations in search engines

2025-08-01
NoticiasDe.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature that enabled sharing conversations publicly. Although sharing was user-initiated, the design led to inadvertent exposure of sensitive personal information indexed by search engines, causing harm to users' privacy rights. This is a violation of human rights (privacy) due to the AI system's use and feature design. Therefore, this qualifies as an AI Incident under the framework because the AI system's use directly led to harm (privacy violations).

Massive ChatGPT Leak: Private Mental Health, Abuse Talks Exposed on Google Search

2025-08-01
Bangla news
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose shared-links feature directly led to the exposure of sensitive personal data, harming individuals' privacy and psychological well-being and potentially violating legal protections. The harm is realized and significant, including retraumatization and harassment. A failure in the system's data governance and the improper tagging of shared links were the root cause. OpenAI's subsequent mitigation efforts do not negate the fact that harm occurred, so this is classified as an AI Incident rather than a hazard or complementary information.

Your chats with Meta's AI might end up on Google -- just like ChatGPT until it turned them off

2025-08-01
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Meta AI chatbot) whose shared chat content is publicly indexed by Google, leading to potential violations of privacy and possibly human rights related to data protection. The harm arises indirectly from the AI system's use and the sharing feature, which can expose sensitive personal information. Although users opt to share, the article suggests many may not fully understand the consequences, indicating a risk of harm to individuals' privacy rights. Since the harm (privacy violations) is occurring due to the AI system's use and sharing mechanism, this qualifies as an AI Incident under violations of human rights or breach of obligations to protect fundamental rights.

AI Can't Keep a Secret: Sensitive Conversations with ChatGPT Show Up on Google Searches

2025-08-02
Breitbart
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) is explicitly involved as the source of the shared conversations. The harm arises from the use of the 'Share' feature, which makes sensitive conversations publicly accessible and searchable, leading to privacy violations and potential reputational harm. This is a direct consequence of the AI system's use and its sharing functionality. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and design.

Your ChatGPT chats may have leaked on Google: Here's what happened

2025-08-02
India Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose 'Share' feature creates public links to conversations. These links have been indexed by Google, exposing sensitive personal and confidential information. This exposure constitutes a violation of privacy rights and can cause harm to individuals and organizations. The harm is direct and realized, as the chats are already publicly accessible. Hence, it meets the criteria for an AI Incident because the AI system's use has directly led to harm through privacy breaches.

Thousands of private ChatGPT conversations found via Google search after feature mishap

2025-08-02
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature malfunction or poor design led to the unintended public exposure of private user data, including sensitive personal information. This exposure directly harms users' privacy rights, a form of human rights violation. The harm has already occurred as thousands of conversations were found publicly. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use and feature design flaws.

OpenAI ends ChatGPT users' option to index chats on search engines - UPI.com

2025-08-02
UPI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing and indexing of user chats) created a plausible risk of harm to users' privacy and rights due to unintended public exposure of private conversations. Although no direct harm is explicitly reported, the potential for harm was credible and significant, justifying classification as an AI Hazard. The article primarily reports on the mitigation of this risk by OpenAI, but since the risk was real and the indexing was active, it is not merely complementary information. There is no indication that harm has already occurred or that a legal violation has been adjudicated, so it is not an AI Incident. Therefore, the event is best classified as an AI Hazard.
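The "ending of indexing" these entries describe rests on a standard web mechanism: a robots meta tag whose content includes a noindex directive tells search crawlers not to list the page in results (an equivalent X-Robots-Tag HTTP header exists as well). As a minimal sketch of how one might check a fetched page for that directive (a hypothetical helper, not anything from OpenAI's or Google's actual tooling):

```python
import re

def has_noindex(html: str) -> bool:
    """Return True if the page carries a robots meta tag with a
    'noindex' directive, the standard opt-out from search indexing."""
    # Scan each <meta ...> tag and look for name="robots" plus "noindex".
    for tag in re.findall(r"<meta[^>]*>", html, re.IGNORECASE):
        if re.search(r'name\s*=\s*["\']robots["\']', tag, re.IGNORECASE) \
                and "noindex" in tag.lower():
            return True
    return False

print(has_noindex('<meta name="robots" content="noindex, nofollow">'))  # True
print(has_noindex('<meta name="description" content="hello">'))         # False
```

A shared page lacking both this tag and the header remains eligible for indexing, which is why opt-in "discoverable" links could surface in search results.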

OpenAI kills feature that exposed ChatGPT chats via Google Search

2025-08-02
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the exposure of user conversations via search engines, which could lead to privacy harms. However, the article does not report any actual harm occurring, only the removal of the feature to prevent such exposure. Therefore, this is a precautionary measure addressing a potential risk rather than a realized harm. This fits the definition of Complementary Information as it updates on a mitigation action related to AI system use and its implications, without describing a new AI Incident or Hazard.

ChatGPT users shocked after discovering their conversations in the results of various search engines, including Google and Bing; OpenAI struggles to remove them

2025-08-02
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, whose development and use led to direct harm: private conversations were exposed publicly via search engines, and a caching bug caused users to see other users' private data including payment details. These are clear violations of privacy rights and data protection laws, constituting harm to individuals. The AI system's malfunction and design choices (such as the feature allowing conversations to be made visible) directly contributed to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenAI has quickly withdrawn a new ChatGPT feature that allowed users to search private conversations

2025-08-03
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use, specifically a feature that enabled sharing of private conversations publicly via search engine indexing. The feature's deployment led to indirect harm by exposing users' private and sensitive information without their full informed consent, which constitutes a violation of privacy rights and could be considered a breach of obligations to protect user data. The harm is realized as private conversations, including sensitive personal topics, were made publicly accessible. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and harm to users' privacy and potential violation of rights.

Thousands of ChatGPT chats with personal data exposed in Google searches - PasionMóvil

2025-08-02
PasionMovil
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose conversations containing sensitive personal data have been made publicly accessible and indexed by Google, leading to direct harm through privacy violations. The harm is realized, not just potential, as sensitive data is exposed and accessible to anyone via search. The AI system's use and sharing features are central to the incident, as users share links that become publicly indexed. This exposure breaches privacy and data protection rights, which are human rights under applicable law. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT Stops Search Engine Indexing of Shared Chats - News Directory 3

2025-08-02
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and the use of one of its features (sharing conversations with optional search engine indexing). The harm involves the exposure of personal data, a violation of user rights; since personal details were actually found via search, the harm has occurred and this qualifies as an AI Incident. The article focuses on the removal of the feature and mitigation efforts, but the primary issue is the realized privacy harm caused by the feature.

OpenAI Removes ChatGPT Feature That Made Conversations Public On Google Search

2025-08-02
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose optional feature caused private user conversations, including sensitive personal information, to be publicly accessible via search engines. This exposure constitutes a violation of privacy rights and harms users by breaching confidentiality. The harm is realized, not just potential, as thousands of chats were publicly searchable, including some with sensitive data. The feature was removed in response, but the incident itself meets the criteria for an AI Incident due to direct harm caused by the AI system's use and its design allowing unintended public exposure of private data.

Warning: thousands of private conversations with ChatGPT accidentally indexed by Google - alloforfait.fr

2025-08-02
alloforfait.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the accidental public exposure of private conversations containing personal and sensitive information. This exposure constitutes a violation of privacy rights and harms individuals and organizations by compromising confidential data. The harm is realized, not just potential, as the conversations were indexed and accessible publicly. The incident stems from the AI system's use and a feature that enabled sharing and indexing, which users misunderstood, leading to direct harm. Therefore, this is an AI Incident due to realized violations of rights and harm to communities caused by the AI system's use and the resulting data exposure.

OpenAI removes AI conversations from Google Search after people shared links containing personal data

2025-08-03
Valor Econômico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the exposure of personal and confidential data through publicly indexed conversation links, constituting a violation of privacy and potential harm to individuals and companies. Although the exposure was voluntary, the AI system's design allowed for this risk, and the indexing by search engines made the harm more widespread. OpenAI's removal of these links and disabling the public visibility feature is a mitigation response. Since actual harm (privacy exposure) occurred due to the AI system's use and design, this qualifies as an AI Incident under the definitions provided, specifically under harm to persons (privacy and data exposure).

OpenAI pulls chats from Google after personal data exposure - 03/08/2025 - Tec - Folha

2025-08-03
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the exposure of personal and confidential data through publicly shared conversation links indexed by search engines. Although users voluntarily made the data public, the AI system's feature enabling shareable links contributed to the harm by making sensitive information easily discoverable. This constitutes a violation of privacy rights, a form of harm under the framework. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use. The company's response to remove the feature and delist links is a mitigating action but does not negate the incident classification.

Thousands of conversations with ChatGPT end up on Google: OpenAI deletes the chats (but more than 100,000 are still online)

2025-08-03
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) whose use has led to the public exposure of private conversations containing sensitive and ethically concerning content, including plans to exploit indigenous peoples. This constitutes indirect harm through violation of privacy and potential human rights breaches. The persistence of these conversations online despite mitigation efforts further exacerbates the harm. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm involving violations of rights and harm to communities.

ChatGPT conversations found on Google? OpenAI removes feature amid privacy concerns

2025-08-03
India TV News
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in a feature that led to unintended exposure of personal conversations, which constitutes a violation of privacy and potentially human rights related to data protection. The exposure of sensitive user data in public search results is a realized harm caused indirectly by the AI system's use and design. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and harm to users' privacy and personal data.

Private ChatGPT chats went public: Why OpenAI needs to be more careful

2025-08-03
Digit
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) and its feature that caused private conversations to be publicly indexed, leading to direct harm in the form of privacy violations and exposure of sensitive personal information. This harm is a breach of user rights and data protection obligations, fitting the definition of an AI Incident. The harm has already occurred, and the article describes the company's response to mitigate it. The incident is not merely a potential hazard or complementary information, but a realized harm caused by the AI system's use and design.

Warning: your private conversations with ChatGPT may have...

2025-08-03
Futura
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the sharing and indexing of conversations, which is related to privacy. However, the article states that the problematic indexing feature has been disabled and that the conversations are no longer accessible via search engines. There is no indication of realized harm such as injury, rights violations, or other significant harms caused by the AI system's use or malfunction. The article focuses on the mitigation of a privacy risk and the current status of the feature, which fits the definition of Complementary Information rather than an Incident or Hazard.

OpenAI pulls chats from Google after people shared links containing personal data

2025-08-03
Notícias ao Minuto Brasil
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses the voluntary sharing of AI-generated chat links that exposed personal and confidential information. However, the harm arises from user actions (voluntary sharing) rather than AI malfunction or misuse by the system itself. OpenAI's removal of these links from search engines and disabling the feature is a response to mitigate privacy risks. Since no direct or indirect harm caused by the AI system's development, use, or malfunction is reported, and the event focuses on the company's response to a privacy issue, this fits the definition of Complementary Information rather than an Incident or Hazard.

ChatGPT: What you tell it could one day be used against you, its CEO warns

2025-08-03
Tribunal Du Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions ChatGPT, an AI system, and discusses the use of data generated through its use. However, it does not describe any actual harm or incident resulting from this data use, nor does it report any realized privacy violations or legal cases where harm occurred. Instead, it focuses on potential risks and user concerns about privacy and data security, which could plausibly lead to harm if data were misused or disclosed. Therefore, this event is best classified as an AI Hazard, as it highlights plausible future harm related to the use of an AI system but does not report a realized incident.

OpenAI pulls chats from Google after people shared links containing personal data

2025-08-03
TNH1
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed users to share conversation links publicly, leading to exposure of personal and confidential data. This constitutes a privacy-related harm, which falls under violations of rights. However, the exposure was voluntary by users, and the company has responded by removing the indexing and disabling the feature that caused the issue. There is no indication of a malfunction or misuse by the AI system itself, nor a direct AI-driven harm without user consent. Therefore, this is not a new AI Incident but rather a Complementary Information event describing the company's response to a privacy concern and mitigation measures.

How to keep your ChatGPT conversations private [Guide]

2025-08-03
MobiGyaan
Why's our monitor labelling this an incident or hazard?
The article centers on privacy concerns arising from the use of an AI system (ChatGPT) and the sharing of its conversation outputs. However, it does not describe any specific incident where harm has occurred due to the AI system's development, use, or malfunction. Instead, it provides information about a change in feature settings, ongoing mitigation efforts, and user guidance to prevent potential privacy harms. Therefore, it qualifies as Complementary Information, as it updates on responses to previously identified privacy risks and informs users about managing their data privacy with AI tools.

Google indexing shared ChatGPT links, exposing sensitive user data

2025-08-03
MobiGyaan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose shared link feature caused sensitive user data to be publicly indexed by Google, leading to privacy and security harms. The harm is realized, not just potential, as confidential information including personal and business data has been exposed. This fits the definition of an AI Incident because the AI system's use directly led to violations of privacy and intellectual property rights, harming individuals and organizations. The event is not merely a hazard or complementary information, but a clear incident of harm caused by AI system use.
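The exposure described above hinges on a basic web convention: a publicly reachable page with no robots "noindex" signal is eligible for search-engine indexing. The sketch below illustrates that convention only; it is not OpenAI's actual implementation, and the helper names are hypothetical.

```python
# Illustrative sketch of the robots "noindex" convention (an assumption
# about the general mechanism, not OpenAI's actual code): a page opts out
# of indexing via a <meta name="robots"> tag or an X-Robots-Tag header.
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the directives of any <meta name="robots"> tags."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            content = a.get("content") or ""
            self.directives += [d.strip().lower() for d in content.split(",")]


def is_indexable(html: str, x_robots_tag: str = "") -> bool:
    """True unless the page markup or the X-Robots-Tag header says noindex."""
    parser = RobotsMetaParser()
    parser.feed(html)
    header_directives = [d.strip().lower() for d in x_robots_tag.split(",") if d]
    return "noindex" not in parser.directives + header_directives


public_page = "<html><head><title>Shared chat</title></head></html>"
private_page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'

print(is_indexable(public_page))   # crawlers may index it
print(is_indexable(private_page))  # the noindex directive blocks indexing
```

Under this convention, a share page served without any noindex signal is exactly the situation the articles describe: once Google's crawler finds the link, the conversation becomes searchable.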

OpenAI pulls shared ChatGPT conversations from Google after privacy concerns

2025-08-03
MobiGyaan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that led to unintended exposure of personal data, which constitutes a violation of privacy rights, a form of harm to individuals. The harm has already occurred as sensitive information was indexed and publicly accessible. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a breach of privacy and potential harm to users. The company's response and removal of the feature is complementary but the core event is an incident of realized harm.

ChatGPT Private Chats Exposed on Google in Privacy Breach

2025-08-03
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and data management led directly to the exposure of private user conversations, including sensitive information such as medical advice and confidential business discussions. This exposure constitutes a violation of privacy rights and harms individuals and communities by breaching confidentiality and trust. The harm has already occurred, not just a potential risk, making this an AI Incident. The involvement of the AI system is explicit, and the harm is directly linked to its use and the misconfiguration of its data sharing feature.

ChatGPT Privacy Scandal: Shared Links Exposed in Google Search

2025-08-03
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose sharing feature caused private conversations to be publicly accessible via search engines, leading to realized harm in the form of privacy violations and potential intellectual property breaches. The harm is direct and materialized, as sensitive data was exposed and indexed. The article details the cause (design flaw in sharing links), the scope of harm (over 100,000 chats exposed), and responses by OpenAI, fitting the definition of an AI Incident. It is not merely a potential risk or complementary information but a concrete incident of harm caused by AI system use and design.

Google Indexed Private ChatGPT Conversations Shared by Users

2025-08-03
Baller Alert
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used to generate and store private conversations. Due to the 'Make this chat discoverable' option, these private chats were indexed by Google and became publicly searchable, exposing sensitive personal data such as admissions of addiction, abuse, and mental health struggles. This constitutes a violation of privacy and potentially breaches data protection rights, which falls under violations of human rights or breach of obligations under applicable law. The harm has already occurred as private information was exposed without users' full understanding or consent. OpenAI's removal of the feature and efforts to remove indexed links are responses to this incident. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to harm through privacy violations.

What you told ChatGPT could now be public ... and permanent - alloforfait.fr

2025-08-03
alloforfait.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing conversations publicly) has directly led to the exposure of private, sensitive data archived permanently on a public platform (Wayback Machine). This exposure causes harm by violating privacy and potentially other rights, as evidenced by examples of sensitive content including unethical plans, political criticism risking user safety, and academic dishonesty. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the violation of rights and harm to individuals and communities.
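The permanence concern raised here comes from web archives: even after a page is de-indexed, a snapshot may persist in the Wayback Machine. The Internet Archive exposes a public availability endpoint for checking this; the sketch below only builds the request URL (no network call), and the target path is a hypothetical example.

```python
# Hedged sketch: archive.org's public snapshot-availability API can be
# queried to see whether a URL has an archived copy. The target share
# path below is a made-up example, not a real conversation.
from urllib.parse import urlencode


def wayback_availability_url(target: str) -> str:
    """Build a request URL for the Wayback Machine availability API."""
    return "https://archive.org/wayback/available?" + urlencode({"url": target})


check_url = wayback_availability_url("chatgpt.com/share/example")
print(check_url)
```

Requesting that URL returns JSON describing any archived snapshot, which is why removal from search results alone does not guarantee the content is gone.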

Meta AI activates a warning and reveals that your conversations may be public

2025-08-03
Perfil Brasil
Why's our monitor labelling this an incident or hazard?
An AI system (Meta AI chatbot) is involved, and the event concerns the use of this system's feature that allows public sharing of conversations. The sharing and indexing of these conversations can lead to privacy harms, such as exposure of personal data, which is a violation of privacy rights and can be considered harm to individuals. Since the harm is occurring due to the use of the AI system's sharing feature and the indexing by Google, this qualifies as an AI Incident. The article also mentions the system's alerts to users, but the primary focus is on the realized privacy harm from public exposure of conversations.

Your Public ChatGPT Queries Are Getting Indexed By Google And Other Search Engines

2025-08-04
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature for sharing conversations. Although users had to opt in to make links discoverable, many did not fully understand the implications, leading to unintended exposure of personal information, which constitutes a violation of privacy rights (a human rights violation). This harm has already occurred as personal data was indexed and accessible publicly. Therefore, this qualifies as an AI Incident due to indirect harm caused by the AI system's use and design of the sharing feature.

Be Careful What You Tell ChatGPT: Your Chats Could Show Up on Google Search

2025-07-31
PC Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing conversations with indexing enabled) has directly led to harm in the form of privacy violations and exposure of personal, sensitive information. The harm is realized, not just potential, as private chats are publicly searchable. This fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The AI system's design and user interaction with its sharing feature are pivotal to the harm. Therefore, the classification is AI Incident.

After Backlash, ChatGPT Removes Option to Have Private Chats Indexed by Google

2025-08-01
PC Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed private conversations to be indexed by search engines, leading to unintended public exposure of personal and sensitive information. This constitutes a violation of privacy rights and harm to individuals, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as private chats were accessible publicly. The subsequent removal of the feature is a response but does not negate the incident itself. Hence, the classification is AI Incident.

ChatGPT: your conversations are on Google! Remove them before they are read

2025-08-01
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose sharing feature caused thousands of private conversations to be publicly accessible and indexed by search engines, exposing sensitive personal and confidential information. This exposure directly harms users by violating their privacy and potentially breaching confidentiality and legal protections. The harm is realized, not just potential, as private data is already publicly accessible. The incident arises from the use and configuration of the AI system, fulfilling the criteria for an AI Incident. The article also describes OpenAI's mitigation efforts, but the primary event is the harm caused by the exposure. Thus, the classification is AI Incident.
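Press coverage of this incident reported that the exposed chats were found with a search-engine "site:" query scoped to ChatGPT's public share path. A minimal sketch of building such a query follows; the `chatgpt.com/share` path is an assumption based on that coverage, and the helper name is hypothetical.

```python
# Hedged sketch: constructing a Google "site:" query restricted to a
# public share path, the technique reportedly used to surface exposed
# conversations. The share-path pattern is assumed from press reports.
from urllib.parse import urlencode


def google_site_query(domain_path: str, terms: str = "") -> str:
    """Build a Google search URL restricted to one site path."""
    q = f"site:{domain_path} {terms}".strip()
    return "https://www.google.com/search?" + urlencode({"q": q})


url = google_site_query("chatgpt.com/share", "confidential")
print(url)
```

The same pattern is how a user could check whether any of their own shared links were indexed before requesting removal.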

Private ChatGPT conversations are surfacing in Google Search, and users are blaming themselves

2025-08-01
avalanchenoticias.com.br
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature to share conversations with an option to make them publicly discoverable led to private user chats being indexed by search engines and exposed publicly. This exposure constitutes a violation of users' privacy rights and can cause harm to individuals, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as private conversations including sensitive personal information have become publicly accessible. The AI system's use and design are directly linked to this harm, as the sharing feature enabled this exposure. Hence, the classification is AI Incident.

OpenAI Removes Public Chat Sharing Feature After Private Conversations Appear In Search Results

2025-08-02
english
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed private conversations to be indexed and publicly accessible, leading to unintended exposure of sensitive personal information. This exposure constitutes harm to individuals' privacy, a fundamental right, and thus meets the criteria for an AI Incident. The harm is realized, not just potential, as private data appeared in search results. The removal of the feature and efforts to mitigate the exposure are responses to this incident but do not negate the fact that harm occurred. Hence, the classification is AI Incident.

A hidden trick in ChatGPT exposes your secrets publicly - Youm7

2025-08-02
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use leading to indirect harm to users' privacy and potential violation of personal data protection, as private conversations containing personal details were exposed publicly without full user awareness. Although the feature required users to opt-in to share links, the discoverability via search engines led to unintended exposure of sensitive information, constituting harm to individuals' privacy rights. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and feature design.

Your ChatGPT conversations appear in Google Search, and OpenAI describes it as an experiment - Youm7

2025-08-02
اليوم السابع
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in the development and use of a feature that led to the unintended exposure of personal user data through public sharing and indexing by search engines. This exposure constitutes a violation of privacy rights, a form of human rights violation under the framework. Since the harm (privacy breach) has already occurred and is directly linked to the AI system's use, this qualifies as an AI Incident. The event also includes the company's response, but the primary focus is the realized harm from the AI system's feature.

OpenAI ends the option to index ChatGPT conversations on search engines

2025-08-02
بوابة اخبار اليوم
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and concerns the handling of user data and the privacy risks arising from a feature that made conversations publicly searchable. Although no direct harm is reported, the feature's operation led to the unintended exposure of potentially sensitive personal information, a privacy and data-protection issue. The company's decision to stop the feature and remove indexed content is a response to this risk. Because no explicit harm such as a rights violation or injury is reported as having occurred, and the focus is on mitigation and the related legal challenges over data retention, the event qualifies as Complementary Information rather than an AI Incident or AI Hazard.

Have your "ChatGPT" conversations become public without your knowledge? - Echorouk Online

2025-08-02
الشروق أونلاين
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use, specifically the sharing feature that led to unintended public exposure of private conversations containing sensitive data. This exposure constitutes a violation of user privacy, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The harm has already occurred as private information was made publicly accessible, thus qualifying as an AI Incident rather than a potential hazard or complementary information. The AI system's design and feature implementation directly contributed to this harm by enabling unintended public access to sensitive data.

ChatGPT conversation-sharing feature disappears from search results after user outrage

2025-08-02
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature for sharing conversations publicly. However, the harm described is indirect and relates to privacy concerns from user-shared content being indexed by search engines. There is no direct or indirect harm caused by AI malfunction or misuse leading to injury, rights violations, or other harms as defined. The event is primarily about a company response to user concerns and feature removal, which is a governance and mitigation action. Therefore, it fits best as Complementary Information, providing context on AI system use and responses to privacy issues, rather than an AI Incident or Hazard.

Privacy at risk... your conversations with AI may be out in the open!

2025-08-02
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and Meta AI chatbots) whose use has led to the exposure of sensitive personal information through public sharing and indexing by search engines. This constitutes a violation of privacy, a fundamental human right, and thus a breach of obligations intended to protect rights. The harm is realized as personal data and sensitive information have become publicly accessible, which can lead to harm to individuals' privacy and potentially other harms. Therefore, this qualifies as an AI Incident due to the direct link between AI system use and realized harm to privacy rights.

ChatGPT pulls its conversations from Google over a controversial feature

2025-08-02
البيان
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of user conversations containing potentially sensitive information, which is a privacy risk. Although some users' privacy was compromised by the indexing of conversations, the article frames the issue as a privacy concern and a response to it, rather than documenting a concrete incident of harm such as identity theft or legal violations. The removal of the indexing feature and disabling of discoverability is a mitigation action. Hence, this is best classified as Complementary Information, as it provides an update on a response to a previously identified AI-related privacy risk rather than describing a new AI Incident or AI Hazard.

OpenAI cancels the feature that made ChatGPT conversations searchable on Google - Al-Weeam

2025-08-02
صحيفة الوئام الالكترونية
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and concerns the sharing and indexing of AI-generated conversation content. However, there is no direct or indirect harm reported as a result of this feature's use; rather, the company is responding to potential privacy risks and user concerns by removing the feature. Since no realized harm or incident has occurred, and the main focus is on the company's response to a privacy issue related to AI content sharing, this qualifies as Complementary Information rather than an AI Incident or Hazard.

Google begins indexing ChatGPT conversations... privacy at risk! - Al-Alam News Channel

2025-08-02
قناة العالم الاخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) whose user-generated conversations are being indexed by Google, making private and sensitive information publicly accessible. This leads to violations of privacy, a form of harm to individuals and communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as thousands of conversations containing sensitive data are already indexed and accessible. The issue stems from the use of the AI system and the sharing mechanism, combined with user unawareness, leading to direct harm. Therefore, this event is best classified as an AI Incident.

Due to the risk of information leaks... "OpenAI" cancels the feature making conversations indexable | Al Khaleej

2025-08-01
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature that allowed conversations to be indexed by search engines, leading to the exposure of sensitive personal information. This constitutes a violation of privacy, which is a breach of fundamental rights and obligations under applicable law protecting personal data and privacy. The harm has already occurred as users' private data was exposed publicly without their full understanding or consent. Therefore, this qualifies as an AI Incident due to realized harm caused by the use of an AI system.

Report: conversations with ChatGPT appear in Google search results

2025-08-02
القدس
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved as the source of the conversations. The indexing and public exposure of these conversations by Google has directly led to a violation of user privacy, which is a breach of fundamental rights. The harm is realized as sensitive personal data is accessible publicly, potentially causing harm to individuals. Therefore, this qualifies as an AI Incident due to violation of rights and harm to individuals stemming from the AI system's use and data handling.

Report: conversations with ChatGPT appear in search results

2025-07-31
مانكيش نت
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved as the source of the conversations. The indexing and public exposure of these conversations by Google has led to privacy violations and the potential identification of users, which constitutes harm to individuals and communities. This fits the definition of an AI Incident because the development and use of the AI system indirectly led to harm through the exposure of sensitive data generated in interactions with it. The harm is realized, not just potential, as sensitive information is already publicly accessible.

Report: conversations with ChatGPT appear in Google search results

2025-07-31
Asharq News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose user-generated content is being indexed by Google and made publicly accessible, leading to direct harm in the form of privacy violations and exposure of sensitive personal information. This meets the definition of an AI Incident because the AI system's use has directly led to harm to individuals' rights and communities through unauthorized or unintended public exposure of private conversations. The involvement of AI is explicit, and the harm is realized, not just potential. The lack of user awareness and control on some platforms exacerbates the issue, confirming the incident classification rather than a hazard or complementary information.

Report reveals: conversations with ChatGPT appear in Google search results!

2025-08-02
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose user-generated content, originally intended to be private or shared within limited circles, has been indexed by another system (Google search) and made publicly accessible. This exposure of sensitive personal data constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized as private, sensitive information is now publicly accessible, potentially leading to identification and harm to individuals. Therefore, this qualifies as an AI Incident due to the direct or indirect role of AI systems in causing harm to rights and privacy.

A surprise decision from OpenAI after thousands of conversations leaked into search results... what happened with ChatGPT?

2025-08-02
المشهد اليمني
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its feature allowing sharing of conversation links with an option to appear in search engine results. The harm realized is the exposure of sensitive personal data, which constitutes a violation of privacy rights, a form of harm to individuals. Although the exposure was due to user action enabling public sharing, the AI platform's design and feature enabled this harm. OpenAI's removal of the feature and commitment to improve privacy are responses to this incident. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and feature design.

Full details of the ChatGPT conversation leak crisis on Google search - Youm7

2025-08-04
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves ChatGPT, an AI system, whose 'Share' feature caused thousands of private conversations containing sensitive personal and business information to be publicly accessible and indexed by Google. This exposure directly harms individuals' privacy and potentially breaches data protection rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the conversations are already publicly searchable. The incident arises from the AI system's use and design, not just a hypothetical risk, thus it is not merely a hazard or complementary information.

OpenAI removes the option to index ChatGPT conversations after privacy concerns

2025-08-03
الوفد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature related to sharing AI-generated conversation content. The harm relates to privacy violations, as some users' conversations were publicly discoverable and contained information that could potentially identify them, despite no direct identifiers. Although the harm is indirect and stems from user misunderstanding and the design of the feature, it constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations to protect fundamental rights. The event describes a realized harm (privacy exposure) caused by the AI system's use and design, and the company's response to mitigate it. Therefore, this qualifies as an AI Incident.

After the ChatGPT incident... beware of sharing your sensitive information on AI...

2025-08-03
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used with a feature that made conversations publicly searchable, leading to the unintended disclosure of sensitive personal information. This directly caused harm in the form of privacy violations, which is a breach of fundamental rights. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to individuals' rights and privacy.

"Your secrets out in the open"... conversations with "ChatGPT" appear in Google search results | Al Masry Al Youm

2025-08-03
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose user-generated content, when shared online, has been indexed by another AI-powered system (Google search). This indexing has directly led to the exposure of sensitive personal information, constituting a violation of privacy and potentially human rights. The harm is realized as private conversations are publicly accessible, which can cause emotional distress and reputational damage. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the harm caused by privacy breaches.

Your ChatGPT conversations may be leaked on Google for this reason

2025-08-03
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing feature) has directly led to the exposure of sensitive personal information to the public via Google indexing. This exposure constitutes a violation of privacy rights and harm to individuals and communities. The harm is realized, not just potential, as thousands of conversations are already indexed and accessible. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's use and its features.

Breaking... threatening users' privacy... OpenAI halts a controversial ChatGPT feature

2025-08-04
اليوم الإلكتروني
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended exposure of sensitive personal data, constituting a violation of user privacy rights, a form of harm to individuals. Although the feature required user activation, the risk of unintentional activation and the resulting exposure of private information directly caused harm. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use and its impact on user privacy.

Public ChatGPT conversation links were indexable on Google and search engines - Al-Watan

2025-08-03
جريدة الوطن
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing conversations via a public link) led to unintended exposure of private user data through search engine indexing. This exposure constitutes a violation of user privacy, which falls under harm to rights and potentially harm to communities. The harm has already occurred as users' private information was accessible to strangers. OpenAI's subsequent removal of the feature is a response but does not negate the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and its design of the sharing feature.

Beware of sharing your sensitive information on AI applications after the ChatGPT incident - Nabaa Al-Arab

2025-08-03
نبأ العرب
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (ChatGPT) whose feature caused indirect harm by exposing users' sensitive personal data, violating privacy rights. This constitutes a violation of fundamental rights and data protection obligations, thus qualifying as an AI Incident. The harm is realized as private conversations became publicly accessible, raising significant privacy and safety concerns.

After objections... OpenAI backtracks on showing ChatGPT conversations in search engines

2025-08-03
Asharq News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use, specifically a feature that enabled sharing AI-generated conversations publicly. The harm realized is the exposure of sensitive personal information (privacy harm), which can be considered a violation of user rights and a breach of privacy protections. Since the harm has already occurred (thousands of conversations with personal data were indexed and accessible), this qualifies as an AI Incident. The company's response to remove the feature and content is a mitigation step but does not negate the incident classification.

Threatening users' privacy... OpenAI halts a controversial ChatGPT feature

2025-08-04
الخليج 365
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed user conversations to be indexed by search engines, leading to exposure of sensitive personal information. This exposure constitutes a violation of privacy rights, a form of harm to individuals. The harm has already occurred as users' sensitive conversations became publicly accessible. OpenAI's removal of the feature is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a breach of privacy rights and harm to users.

After criticism... "OpenAI" backtracks on making "ChatGPT" conversations available in search engine results | Al Masry Al Youm

2025-08-04
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose feature for sharing conversations led to direct harm in the form of privacy violations and exposure of sensitive personal information. Although the sharing was voluntary, the design and implementation of the feature allowed for unintended public dissemination of private data, constituting a breach of user rights and potential harm to individuals. Therefore, this qualifies as an AI Incident under the category of violations of human rights and harm to individuals due to the AI system's use and its consequences.

The Google conversation-leak crisis: OpenAI backtracks on a controversial feature

2025-08-04
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of private user conversations, constituting a violation of privacy rights, a form of harm to individuals. Although the identities were anonymized, the exposure of sensitive content without user intent is a direct harm linked to the AI system's feature. The company responded by removing the feature and retracting indexed content, but the harm had already occurred. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's use.

After widespread controversy, ChatGPT scraps its conversation-indexing feature

2025-08-04
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its feature for indexing conversations. The event stems from the use of the AI system and its feature design, which indirectly led to privacy risks and potential harm to users' personal data confidentiality. Although no explicit harm such as data breaches or identity exposure is confirmed, the risk of sensitive data exposure was realized to some extent as conversations appeared in search results. The company's removal of the feature and data indicates recognition of this harm. Therefore, this qualifies as an AI Incident due to indirect harm to users' privacy rights and potential violation of data protection principles.

A surprise for users: do ChatGPT conversations appear on Google? A shocking answer

2025-08-04
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature for sharing conversations led to thousands of private chats being publicly accessible and indexed by search engines, exposing sensitive personal data. This constitutes a violation of privacy rights, a form of harm to individuals and communities. The harm has already occurred, making this an AI Incident. The involvement stems from the AI system's use and design, which directly led to the privacy breach. The company's response and removal of the feature are complementary but do not negate the incident classification.

Your personal conversations with ChatGPT appear in Google search

2025-08-04
موقع بكرا
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved as the conversations are generated and stored by it. The harm arises from the use of the AI system where users' private chats became publicly accessible and indexed by search engines, leading to violations of privacy rights, which is a breach of fundamental rights. The harm has already occurred as some users' personal information was exposed. Therefore, this qualifies as an AI Incident due to realized harm to users' privacy stemming from the AI system's use and its sharing feature malfunction or design flaw.

ChatGPT conversations fully exposed! OpenAI rushes to shut off the 'share link' option

2025-08-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature malfunction or design flaw (public sharing of conversations indexed by search engines) directly led to harm—exposure of sensitive user data and privacy violations. The harm is realized, not just potential, as users' private conversations were accessible publicly. OpenAI's removal of the feature and requests to delete data are responses to this incident. Therefore, this qualifies as an AI Incident due to violation of user privacy rights caused by the AI system's use and malfunction.

Everything laid bare! ChatGPT rocked by conversation leak as OpenAI scrambles to contain it

2025-08-02
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use has directly led to the exposure of private and sensitive user conversations online, constituting a breach of privacy and potentially violating legal protections for personal data. The harm is realized as private information has been publicly accessible, which fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. OpenAI's response to block indexing and remove sharing options is a mitigation effort but does not negate the occurrence of harm. Therefore, this event qualifies as an AI Incident.

Shared ChatGPT conversations exposed in Google Search! OpenAI hastily pulls the experimental feature to stop privacy leaks

2025-08-02
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose sharing feature caused personal conversation data to be publicly accessible via search engines, leading to privacy harm. The harm is indirect but real, as users' identifiable information was exposed without their full awareness. This constitutes a violation of privacy rights, fitting the definition of harm to human rights or breach of obligations to protect fundamental rights. The AI system's use and feature design directly contributed to this harm, making it an AI Incident rather than a hazard or complementary information.

ChatGPT conversations leaked! OpenAI ends its search engine indexing experiment to stop the bleeding

2025-08-01
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and an experimental feature led to the unintended public exposure of private user conversations via search engine indexing. This exposure can cause harm to users' privacy and potentially violate data protection rights, which fits within the scope of harm to persons or groups and violations of rights. The harm is realized as the data has already been indexed and publicly accessible. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Everything laid bare! ChatGPT rocked by conversation leak as OpenAI scrambles to contain it

2025-08-02
udn科技玩家
Why's our monitor labelling this an incident or hazard?
The event involves ChatGPT, an AI system, whose user conversations were unintentionally exposed publicly via search engine indexing. This exposure led to the leakage of sensitive personal and confidential information, including discussions of illegal activities and private consultations, which harms users' privacy rights. The harm is realized, not just potential, as private data is accessible publicly. OpenAI's mitigation actions confirm the incident's seriousness. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's use and the resulting privacy violations.

All content exposed! ChatGPT conversation leak prompts OpenAI to hastily withdraw a feature

2025-08-03
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use of a sharing feature directly led to the exposure of sensitive private data, constituting a violation of privacy and potentially human rights. The harm has already occurred as private conversations were publicly accessible and indexed by search engines, leading to significant privacy and security risks. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use and its feature's malfunction or design flaw.

ChatGPT leaks user conversations, exposed via search engines

2025-08-04
EJ Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and the use of its sharing feature (conversations shared via a '/share' URL). The exposure of private user conversations through search engine indexing constitutes a violation of user privacy, which can be considered a breach of rights under applicable law protecting fundamental rights. Since the harm (privacy violation) has already occurred due to the AI system's use and its interaction with search engines, this qualifies as an AI Incident. The mention of misuse of Google's content removal tool is unrelated to AI system harm and does not affect the classification.
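The '/share' mechanism described in this rationale comes down to a basic rule of web crawling: a publicly reachable page is eligible for indexing unless it opts out through a robots directive. A minimal sketch of that decision (hypothetical helper names, not OpenAI's or any search engine's actual code), assuming the mitigation is to tag shared pages with a `noindex` robots meta tag:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            content = attrs.get("content") or ""
            self.directives |= {d.strip().lower() for d in content.split(",")}

def is_indexable(html: str) -> bool:
    """A compliant crawler indexes a public page unless it carries 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" not in parser.directives

# A shared conversation page with no robots directive is fair game for indexing...
shared_page = "<html><head><title>Shared chat</title></head><body>...</body></html>"
# ...while the post-incident mitigation is to mark such pages as noindex.
mitigated_page = ('<html><head><meta name="robots" content="noindex, nofollow">'
                  "</head><body>...</body></html>")

print(is_indexable(shared_page))     # True
print(is_indexable(mitigated_page))  # False
```

Removing the sharing feature eliminates the public URL entirely; tagging a page `noindex`, as in the second example, leaves it reachable but asks compliant crawlers not to list it in search results.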

Personal AI conversations inadvertently leaked to search engines; OpenAI removes a minor ChatGPT feature

2025-08-04
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that led to unintended personal data exposure, which is a form of harm to individuals' privacy. Although the harm is indirect and stems from the sharing and indexing of AI-generated conversation links, it constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations to protect fundamental rights. Since the harm has occurred (personal data was inadvertently exposed), this qualifies as an AI Incident. The article also describes OpenAI's mitigation steps, but the primary focus is on the realized harm and the system's role in it.

Beware of private ChatGPT conversations leaking! Three steps to delete already-public links

2025-08-05
自由時報
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and concerns the potential privacy harm caused by the exposure of user conversations. However, the article describes a past issue that has been addressed by OpenAI removing the problematic feature, and the current risk is related to user-shared links rather than an ongoing or new incident. There is no direct or indirect harm currently occurring as described, but a plausible risk of privacy harm exists if shared links are not deleted. Since the article mainly provides guidance on managing this risk and reports on a resolved issue, it fits best as Complementary Information rather than an AI Incident or AI Hazard.

ChatGPT revealed planning for an attack on Hamas

2025-08-02
خبرگزاری مهر | اخبار ایران و جهان | Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and feature led to the unintended public exposure of private conversations, including harmful content such as cyberattack planning. This exposure constitutes a violation of privacy and potentially human rights, as private and sensitive information was disclosed without consent. Although the harm is indirect and stems from the AI system's feature allowing sharing and indexing of conversations, the realized harm of privacy breach and potential misuse of sensitive information qualifies this as an AI Incident. The company's response to disable the feature and collaborate on content removal is complementary but does not negate the incident classification.

ChatGPT revealed a plan to attack Hamas

2025-08-02
فردانیوز
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and a feature related to sharing conversations led to the direct exposure of private and sensitive information, including discussions about cyberattack planning. This exposure constitutes a violation of privacy and potentially human rights related to confidentiality and data protection. Although no direct physical harm is reported, the privacy breach and the facilitation of malicious planning (cyberattack) represent significant harms caused by the AI system's use and its feature malfunction or misconfiguration. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's use and malfunction.

ChatGPT revealed planning for an attack on Hamas

2025-08-02
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (ChatGPT) that led to the direct exposure of private user conversations containing sensitive information. This exposure constitutes a violation of privacy rights and could facilitate further harms such as cyberattacks or fraud. The harm has already occurred as the conversations were publicly accessible, fulfilling the criteria for an AI Incident. The company's response to disable the feature and collaborate on content removal is a mitigation effort but does not negate the incident classification.

ChatGPT revealed planning for an attack on Hamas

2025-08-02
قدس آنلاین | پایگاه خبری - تحلیلی
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and the sharing of its conversation logs led to the exposure of sensitive and potentially harmful information, including plans for a cyberattack. This exposure constitutes a violation of privacy and could facilitate harm, fulfilling the criteria for an AI Incident. The direct involvement of the AI system in generating the content and the subsequent public exposure of these conversations through search engines led to realized harm in terms of privacy breaches and potential misuse. The company's response to disable the feature and remove content is a mitigation step but does not negate the incident classification.

ChatGPT revealed planning for an attack on Hamas

2025-08-02
جوان‌آنلاين
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended exposure of private conversations containing sensitive information, including plans for cyberattacks. This exposure constitutes a violation of privacy and potentially facilitates harm, as private and potentially dangerous content became publicly accessible. Although the harm is indirect and stems from the AI system's feature allowing sharing and indexing of conversations, the incident has already occurred and involves realized harm related to privacy breaches and potential security risks. Therefore, it qualifies as an AI Incident.

A hidden trick in ChatGPT that exposes your secrets to the public

2025-08-02
خبرگزاری باشگاه خبرنگاران | آخرین اخبار ایران و جهان | YJC
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in the development and use of a feature that inadvertently exposed private user conversations to the public, leading to violations of user privacy and potential breaches of personal data protection rights. This constitutes a violation of fundamental rights related to privacy and data protection, which falls under harm category (c). Since the harm has already occurred and is directly linked to the AI system's use, this qualifies as an AI Incident.

ChatGPT users' conversations exposed on Google: has OpenAI backed down?

2025-08-01
خبرآنلاین
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved as the platform generating the conversations. The harm relates to violations of user privacy due to the public exposure of shared conversations, which can include sensitive personal information. Although the sharing was user-initiated, the indexing by search engines led to unintended public access, constituting a breach of privacy rights. This harm has already occurred as users' private conversations became publicly accessible. Therefore, this qualifies as an AI Incident because the AI system's use (the sharing feature) directly led to harm (privacy violations).

Warning: some private conversations with ChatGPT are showing up on Google

2025-08-01
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the sharing of AI-generated conversation data. However, the harm (privacy exposure) results from user actions (manual sharing) and search engine indexing, not from the AI system malfunctioning or being misused in a way that directly or indirectly causes harm. The AI system itself is functioning as intended, and the exposure is a consequence of user behavior and external indexing. This fits the definition of Complementary Information, as it provides important context about privacy implications and ecosystem responses but does not describe a new AI Incident or AI Hazard.

ChatGPT exposes users' conversations and information

2025-08-01
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved as the platform generating the conversations. The harm arises indirectly from the use of the AI system's sharing feature, which users activated, leading to privacy violations (a form of harm to individuals). Although the AI did not malfunction or leak data autonomously, the event resulted in realized harm to user privacy due to the public exposure of sensitive information. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (privacy breaches).

OpenAI halts the indexing of ChatGPT chats

2025-08-01
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about privacy risks from shared chat links being indexed by search engines, potentially exposing sensitive user information. Although this could lead to violations of privacy rights (a form of harm), the event reports that OpenAI has removed the indexing feature to prevent such harm. There is no report of actual harm occurring due to AI malfunction or misuse, but rather a mitigation action taken in response to a potential privacy issue. This fits the definition of Complementary Information, as it updates on a governance and operational response to a known AI-related privacy risk, rather than describing a new AI Incident or AI Hazard.

A troublesome experimental feature: users' ChatGPT conversations were visible on Google for a time

2025-08-01
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a direct harm: violation of user privacy and exposure of sensitive personal data, which constitutes a breach of fundamental rights. The harm has already occurred as private conversations were publicly accessible and indexed. The AI system's design and feature implementation directly contributed to this harm. Therefore, this qualifies as an AI Incident under the definitions provided.

ChatGPT's owner makes excuses for violating users' privacy

2025-08-02
ana.ir
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its feature that enabled conversations to be searchable on the web, leading to the exposure of personally identifiable information and confidential content. This constitutes a violation of user privacy, which falls under harm to human rights and privacy protections. Since the harm (privacy breach) has already occurred due to the AI system's use, this qualifies as an AI Incident. The company's response to disable the feature and remove indexed content is a mitigation measure but does not change the classification of the original harm.

ChatGPT conversations are being removed from Google results

2025-08-02
خبرگزاری جمهور
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature for sharing conversations publicly. However, the issue was about the potential for accidental public exposure of user chats via search engines, not a confirmed incident of harm or violation. The company removed the feature to mitigate this risk. Since no actual harm or violation has been reported, and the event is about a company response to a privacy concern, this fits best as Complementary Information, providing context and updates on AI system use and governance rather than an AI Incident or Hazard.

An AI security controversy: after ChatGPT, Grok is drawn into the affair

2025-08-03
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and Grok) whose user conversations were exposed publicly, leading to a clear violation of privacy rights, a recognized human right. The harm is realized, as users' private data has been exposed without consent. The AI systems' use and data handling practices are directly linked to this harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and legal obligations to protect privacy.

Tag: public sharing of ChatGPT conversations

2025-08-03
ايتنا - سایت خبری تحلیلی فناوری اطلاعات و ارتباطات
Why's our monitor labelling this an incident or hazard?
The event involves AI system use (ChatGPT) and the indexing of its publicly shared conversations by search engines, leading to potential privacy harm through exposure of private information. This constitutes a realized harm related to privacy and data protection, which falls under violations of rights. Therefore, this qualifies as an AI Incident due to the direct link between AI system use and harm to user privacy.

A private conversation made public

2025-08-04
ana.ir
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose feature malfunction or poor design caused private user conversations to be publicly exposed via search engine indexing. This exposure constitutes a violation of users' privacy rights, a breach of obligations to protect fundamental rights, and harm to individuals. The harm has already occurred, making this an AI Incident. The company's response and mitigation efforts are complementary information but do not negate the incident classification.

Private ChatGPT conversations exposed on Google: how can we protect our privacy?

2025-08-03
رکنا
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature for sharing conversation links has led to private user data being publicly accessible due to indexing by search engines. This exposure has caused violations of privacy, which is a breach of fundamental rights and harms users' personal security. The harm has already occurred as users' private information is accessible publicly, fulfilling the criteria for an AI Incident involving violation of rights and harm to individuals' privacy.

Users' conversations with ChatGPT remain accessible on the internet

2025-08-03
دیجیاتو
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and Grok) whose user-generated conversations have been exposed publicly, leading to a violation of user privacy—a human rights concern. The AI systems' use (the generation and storage of conversations) has indirectly led to harm by exposing sensitive personal information. Therefore, this constitutes an AI Incident due to the realized harm of privacy violations stemming from the AI systems' outputs being publicly accessible without proper safeguards.

Users' conversations with Grok have leaked!

2025-08-04
tabnak.ir
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Grok and ChatGPT) whose user conversations have been leaked and indexed publicly, leading to a direct harm to users' privacy—a fundamental human right. The indexing and public accessibility of these conversations represent a breach of privacy and potential misuse of AI-generated data. The harm is realized, not just potential, as users' private data is accessible to others without consent. Hence, this is an AI Incident due to direct harm caused by the AI systems' use and data handling.

Widespread exposure of users' chatbot conversations: ChatGPT and Grok at the center of a new privacy crisis

2025-08-04
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and Grok chatbots) whose user conversations have been exposed publicly, leading to a clear violation of privacy rights, a form of harm to individuals. The exposure is ongoing and affects sensitive personal and legal information, which meets the criteria for harm under violations of human rights and legal protections. The AI systems' use and data handling practices are central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

ChatGPT's private chats leaked on Google! How did this data scandal happen, what did the OpenAI CEO say, and what should users do now?

2025-08-03
hindi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature caused private user data to be publicly exposed, leading to harm in the form of privacy violations and potential breaches of user rights. The harm has already occurred as thousands of private chats became publicly searchable. The AI system's use and design (the discoverable chat feature) directly caused this harm. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights or breach of obligations intended to protect fundamental rights (privacy).

One feature puts ChatGPT chats at risk of leaking: are private conversations showing up in Google Search?

2025-08-03
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, and its use (the chat share feature) has directly led to harm in the form of privacy violations and exposure of personal data, which constitutes a violation of user rights and harm to individuals' privacy. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and its features leading to data exposure.

ChatGPT conversations leaked on Google; this feature has raised OpenAI's anxiety

2025-08-03
India TV Hindi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the direct leakage of private user conversations, causing harm to users' privacy and potentially violating their rights. The leak was due to a feature enabled by OpenAI that made conversations publicly accessible via Google search. Although the feature was later disabled, the harm from the leak had already occurred. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of fundamental rights (privacy) and harm to users.

ChatGPT sharing feature shut down; a major decision taken in view of the privacy threat

2025-08-01
हरिभूमि
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature led to unintended exposure of private user data, constituting a violation of privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations protecting fundamental rights. Since the privacy breach has already occurred and the company is taking remediation steps, this qualifies as an AI Incident rather than a hazard or complementary information. The harm is directly linked to the AI system's use and feature design, and the event describes realized harm and response measures.

Privacy in 'grave danger'! ChatGPT users' chats leaked on Google; are you next?

2025-08-03
Newstrack
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in the development and use of an experimental feature that led to the public exposure of private user conversations, causing a direct violation of privacy rights (a form of human rights violation). The harm has materialized as users' private data became publicly accessible, fulfilling the criteria for an AI Incident. The company's response to remove the feature and work with search engines is complementary information but does not negate the incident classification.

Do you use ChatGPT too? AI conversations appeared on Google, raising questions about users' privacy

2025-08-02
LatestLY हिन्दी
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its feature that directly led to harm in the form of privacy violations, exposing sensitive personal data of users to the public and search engines. This constitutes a violation of user privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations protecting fundamental rights. Since the harm has occurred and the AI system's use is the direct cause, this qualifies as an AI Incident.

Privacy at risk! This ChatGPT feature 'leaked' thousands of chats on Google

2025-08-03
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically a feature that allowed sharing of chat content publicly. The harm (privacy violation) has already occurred as thousands of chats became publicly accessible via Google Search. Although the sharing required user opt-in, the feature's design led to unintended exposure, constituting an AI Incident due to violation of privacy rights (a form of human rights violation). The event is not merely a product update or general news but involves realized harm linked to the AI system's use, thus qualifying as an AI Incident.

This ChatGPT feature has been halted; OpenAI acted after personal chats appeared in Google Search

2025-08-03
patrika.com
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (ChatGPT) whose use led to a violation of user privacy, which can be considered a breach of rights under applicable law. The leak of personal conversations constitutes harm to individuals' privacy and rights. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (privacy violation).

Beware of this ChatGPT feature: private conversations can appear on Google

2025-08-02
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature 'make this chat discoverable' led to private user conversations being exposed publicly via Google search. This exposure includes sensitive personal data such as mental health, drug use, and traumatic experiences, which can be linked back to users, thus violating privacy rights. The harm has already occurred as thousands of chats were found publicly accessible. The AI system's design and use directly led to this harm, qualifying it as an AI Incident under violations of human rights and harm to individuals. The subsequent removal of the feature is a response but does not negate the incident itself.

Be careful chatting with ChatGPT: conversations can appear on Google

2025-08-04
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (specifically the sharing feature) led to the indirect exposure of private user data, which constitutes a violation of privacy and potentially a breach of user rights. The harm here is realized as private conversations containing personal details became publicly searchable, which can be considered a violation of human rights related to privacy. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through unintended data exposure. The company's removal of the feature is a response but does not negate the incident itself.

OpenAI Removes ChatGPT Conversations from Google Search

2025-08-02
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that allowed AI-generated conversations to be indexed by search engines. While this posed a risk of unintended exposure of user content, the event does not report actual harm such as privacy breaches or violations of rights occurring. Instead, it describes OpenAI's mitigation action to remove the feature after public complaints. This fits the definition of Complementary Information, as it updates on a governance and operational response to a potential AI-related privacy issue without documenting an AI Incident or AI Hazard.

OpenAI Removes ChatGPT Feature That Let Users Share Conversations to Google

2025-08-03
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the management of a feature that could lead to privacy risks if users accidentally share sensitive information. However, there is no indication that any harm has actually occurred. The removal of the feature and efforts to delete indexed content are precautionary and aimed at mitigating potential future harm. Therefore, this is best classified as Complementary Information, as it provides an update on governance and safety measures related to AI without describing a specific AI Incident or AI Hazard.

ChatGPT Chat Appears on Google, OpenAI Removes This Feature Immediately

2025-08-04
Daily Voice
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that caused user conversations to be publicly discoverable on Google, leading to privacy risks. Although no direct harm such as identity theft or data breaches is explicitly reported, the exposure of sensitive personal information through search results constitutes a violation of privacy rights, which is a form of harm under the framework. Since the harm has occurred (user conversations were accessible publicly), this qualifies as an AI Incident. The article focuses on the realized privacy harm and the removal of the feature to mitigate it, not merely a potential risk or a general update, so it is not a hazard or complementary information.

Your Heart-to-Heart Chats with ChatGPT Can Appear in Google Search; Here's How to Delete Them

2025-08-04
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature led to the direct exposure of private user conversations, causing harm to users' privacy and potentially violating their rights. The harm has already occurred as thousands of chats were publicly accessible. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use and design. The subsequent removal of the feature is a response but does not negate the incident classification.

Uproar as Thousands of Private ChatGPT User Conversations Appear in Google Search Results!

2025-08-05
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system (ChatGPT) and its feature that allowed private conversations to be indexed publicly, leading to the exposure of sensitive personal data. This exposure directly harms users' privacy and could lead to identification of individuals, which is a violation of fundamental rights. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use and its feature malfunction or misconfiguration.

In Brief | Companies | Valor Econômico

2025-08-04
Valor Econômico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the exposure of personal and confidential data due to users voluntarily making conversations public. Although the exposure was voluntary, the AI system's design allowed for this risk, leading to potential violations of privacy and confidentiality, which are harms related to human rights and data protection. The company's removal of the feature is a response to mitigate these harms. Since harm has occurred or is plausible due to the AI system's use, this qualifies as an AI Incident.

OpenAI Removes Feature Key to User Privacy - Fortuna Web

2025-08-04
FORTUNA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that allowed sharing AI-generated conversations publicly. However, the sharing was user-initiated and not automatic, and the harm arises from user action rather than a malfunction or misuse by the AI system itself. The article does not report any realized harm such as privacy breaches caused directly by the AI system malfunctioning or violating rights without user consent. Instead, it reports a company response to potential privacy risks and mitigation measures. Therefore, this is best classified as Complementary Information, as it provides context and updates on privacy management related to an AI system rather than describing a direct AI Incident or a plausible AI Hazard.

Unclear Design of a ChatGPT Feature Exposes Sensitive Conversations on Google

2025-08-04
Diario de Arousa
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature design and user interaction led directly to the exposure of sensitive personal data, causing harm to users' privacy and potentially violating data protection rights. The harm is realized, not just potential, as thousands of conversations with sensitive content were indexed publicly. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the harm caused. The article also discusses mitigation efforts, but the primary focus is on the incident itself.

ChatGPT: Thousands of Private Conversations Found on Google Because of a Misunderstood Feature

2025-08-04
Les Numériques
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in the collection and sharing of private user conversations. The sharing of these conversations with Google, making them accessible via search, constitutes a violation of user privacy and security, which is a harm to individuals' rights and potentially their safety. This harm has already occurred as private data was exposed. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system's feature leading to a breach of privacy and security.

ChatGPT conversations are showing up on Google: Reports

2025-08-04
The Hindu
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose design and use led to user conversations being publicly accessible and indexed by search engines, causing privacy harms to users. This constitutes a violation of user privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations intended to protect fundamental rights. The harm has already occurred as conversations were found on search engines, and OpenAI's response is a mitigation effort. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI just pulled a controversial ChatGPT feature -- what you need to know

2025-08-04
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature malfunction or poor design led to the direct exposure of sensitive personal and confidential information, constituting a violation of privacy rights, which falls under harm to human rights and fundamental rights. The harm has materialized as private conversations were publicly accessible and indexed by search engines. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and design.

ChatGPT removes the ability for conversations to be displayed by search engines as 'nearly 4,500 conversations' indexed by Google

2025-08-04
pcgamer
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved through the development and use of a feature that allowed user conversations to be indexed by search engines, leading to privacy harms through unintended exposure of sensitive personal information. Although the sharing was opt-in, the design of the consent mechanism was insufficiently clear, resulting in indirect harm to users' privacy and potential violations of data protection rights. The event involves realized harm (privacy violations) caused by the AI system's use and design, thus qualifying as an AI Incident.

Thousands of ChatGPT chats leaked on Google, OpenAI removes 'Share' feature amid backlash

2025-08-04
Mashable ME
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in a way that directly led to harm by exposing personal and sensitive user data publicly through indexed shared links. This exposure constitutes a breach of privacy rights and can be considered a violation of human rights or legal obligations protecting personal data. The harm has already occurred as thousands of chats with personal details were accessible publicly. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use and its feature design.

ChatGPT chats won't show up on Google anymore -- here's how to remove your shared links

2025-08-04
Windows Central
Why's our monitor labelling this an incident or hazard?
The article details a change in OpenAI's ChatGPT system that removes the option for users to make their conversations publicly discoverable via search engines. While the prior feature led to personal data exposure (a form of harm), this article focuses on the removal of that feature and the ongoing process to remove indexed content. It does not report a new AI Incident or AI Hazard but rather a response to mitigate past harms. Therefore, it fits the definition of Complementary Information as it provides an update and governance response to a prior AI-related issue.

AI Chats End Up on Google, OpenAI Blocks Sharing - Future Tech - Ansa.it

2025-08-04
ANSA.it
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT, a large language model) is involved as the platform where conversations occurred. The harm arises from the exposure of potentially sensitive personal data through public indexing, which can lead to privacy violations and risks to individuals' rights. Although no hacking occurred, the AI system's design allowed user-generated content to be publicly accessible and indexed, indirectly causing harm to users' privacy and potentially violating data protection rights. OpenAI's disabling of the indexing feature is a response to this harm. Therefore, this event constitutes an AI Incident due to realized harm related to violations of privacy and potentially human rights.

ChatGPT Briefly Made Chat Logs Accessible on Google. Yikes.

2025-08-04
VICE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature design and use caused private user data to be exposed publicly, leading to a breach of privacy and potential violation of users' rights. The harm is realized as sensitive personal information was accessible via Google search, which fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The incident is directly linked to the AI system's use and its feature implementation, not merely a potential risk or complementary information.

Private chats can be found on Google: OpenAI withdraws function

2025-08-05
heise online
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its chat sharing and discoverability feature. The issue stems from the use (misuse or misunderstanding) of this feature by users, leading to unintended public exposure of private conversations containing sensitive information. This exposure constitutes a violation of privacy, which is a human rights concern. Although it is not a system malfunction or data breach, the AI system's design and feature implementation directly contributed to the harm. Therefore, this qualifies as an AI Incident due to realized harm (privacy violations) caused indirectly by the AI system's use and feature design.

Private Chats Findable on Google: OpenAI Withdraws Feature

2025-08-04
heise online
Why's our monitor labelling this an incident or hazard?
The AI system involved is ChatGPT, an AI chatbot. The issue arises from the use of a feature that made private chats publicly discoverable, leading to exposure of sensitive personal data including discussions about murder and sexual preferences. This exposure constitutes a violation of privacy rights, a form of harm to individuals. The harm is realized, not just potential, as the chats were indexed by search engines and accessible publicly. The event is not a malfunction or a leak but a misuse or misunderstanding of the AI system's feature, which still counts as harm caused by the AI system's use. OpenAI's response to remove the feature is a mitigation step but does not negate the incident. Hence, this is an AI Incident involving violation of rights due to the AI system's use.

Privacy Risk: Confidential ChatGPT Chats Findable Online Despite Deletion

2025-08-04
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature to share conversations with a setting to make them searchable led to the unintended public exposure of sensitive personal and corporate information. This exposure constitutes a violation of privacy rights and potentially other legal protections. The harm has materialized as the conversations were accessible on search engines and archived permanently, despite deletion efforts. The AI system's design and use directly contributed to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities (privacy).

One Wrong Click and Your Chat Ends Up on Google

2025-08-04
Blick.ch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature for sharing conversations caused private chats to be indexed by search engines, exposing sensitive and confidential information publicly. This exposure constitutes a violation of privacy rights and can be considered harm to individuals and communities. The harm has already occurred as the chats are publicly accessible, and the platform's design flaw is the direct cause. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use and malfunction (design flaw).

ChatGPT: Thousands of OpenAI Conversations End Up on Google

2025-08-04
Sky
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing conversations publicly) led to harm related to privacy and data exposure, which can be considered a violation of user rights. Although the harm is indirect, stemming from user consent and the system's design, the public accessibility of private conversations on Google constitutes a breach of privacy rights. OpenAI's intervention to disable the feature and remove content is a response to this incident. Therefore, this qualifies as an AI Incident due to the realized harm to user privacy and rights.

Your Private ChatGPT Conversations Will No Longer Be Published on Google

2025-08-04
Frandroid
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature that led to unintended exposure of private user conversations via search engines, which can be considered a violation of privacy rights (a human rights violation). However, the article describes the feature's removal and mitigation efforts rather than ongoing or new harm. Therefore, this is not a new AI Incident but rather complementary information about a response to a previously existing issue.

Leaked ChatGPT Conversation Shows User Identified as Lawyer Asking How to "Displace a Small Amazonian Indigenous Community From Their Territories in Order to Build a Dam and a Hydroelectric Plant"

2025-08-04
Futurism
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (ChatGPT) to generate advice on displacing an indigenous community, conduct that relates directly to potential violations of human rights and exploitation, indicating the system's role in the use phase leading to harm. The leaked conversations also reveal other sensitive and potentially harmful uses of the AI system, including content that could endanger users or involve inappropriate material. Because the AI system's use directly or indirectly led to or facilitated harm or violations of rights, the event meets the criteria for an AI Incident. The poor design that exposed private conversations compounds the issue, showing how the feature's implementation led to harm or the risk of it.

Private ChatGPT Conversations Appeared on Google: Secrets and Sensitive Data Published by Mistake

2025-08-04
Fanpage
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use of a sharing feature led to the unintended public exposure of private, sensitive conversations. This exposure constitutes a violation of privacy rights and potentially breaches legal protections for personal data, qualifying as harm under the framework. The harm has already occurred, making this an AI Incident rather than a hazard or complementary information. The AI system's development and use directly contributed to the harm by enabling the sharing and indexing of private chats.

Thousands of Private ChatGPT Chats Exposed on Google: OpenAI Cuts the Feature

2025-08-04
Le Matin
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, as the conversations were generated by it and shared via its platform. The harm involves exposure of private and sensitive information, which constitutes harm to individuals' privacy and potentially breaches confidentiality, thus falling under violations of rights and harm to persons. The harm has already occurred as the conversations were accessible publicly. The incident stems from the use of the AI system's sharing functionality and its interaction with search engine indexing. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm (privacy exposure).

Shocking! Google Is Leaking Millions Of Private ChatGPT Conversations In Search Results

2025-08-04
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose shared conversation links are being indexed by Google, leading to unintended exposure of potentially sensitive user data. Although this is not a data breach in the traditional sense, the AI system's use and the way its outputs are shared have directly led to harm in terms of privacy violations and potential breaches of user confidentiality. The harm is realized as users' private or sensitive information is accessible publicly without their full awareness or consent. Therefore, this qualifies as an AI Incident due to violation of privacy rights and harm to users resulting from the AI system's use and sharing mechanisms.

Major ChatGPT Blunder: Users Made Sensitive Conversations Publicly Visible to Everyone

2025-08-04
oe24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and feature design directly led to the public exposure of sensitive personal conversations. This exposure constitutes harm to individuals' privacy and potentially breaches data protection and fundamental rights. The incident is not merely a potential risk but a realized harm, as private conversations were accessible publicly. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights (privacy).

AI Can Be Poison for Those Who Don't Know How to Use It - Le Temps

2025-08-04
Le Temps
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and a feature that can lead to harm (privacy violations) if users inadvertently expose sensitive information. However, the article does not report that such harm has already occurred, only that the potential for harm exists due to misuse or misunderstanding of the feature. Therefore, this situation represents a plausible risk of harm stemming from the AI system's use, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT Chaos: Users' Private Chats Ended Up on Google

2025-08-04
Hardware Upgrade - Il sito italiano sulla tecnologia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose experimental feature caused private conversations containing sensitive personal data to be publicly accessible via search engines. This directly led to harm in terms of privacy violations and potential exposure of sensitive information, which falls under violations of human rights and harm to communities. The harm has already occurred, and the AI system's use and design were pivotal in causing this incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Privacy Scandal: Sensitive ChatGPT Data Found on Google

2025-08-04
Toms Guide : actualités high-tech et logiciels
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a direct harm: unauthorized exposure of sensitive personal data through a malfunction or misconfiguration of a sharing feature. This exposure constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The harm has already occurred, as thousands of conversations were publicly accessible. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT: Some Public Chats Ended Up on Google

2025-08-04
MRW.it
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its feature allowing users to share conversations publicly. The harm relates to privacy violations due to unintended exposure of personal data through search engine indexing. Although the exposure resulted from user sharing, the AI system's design and feature enabled this risk. The harm has occurred as personal details were exposed publicly, constituting a violation of privacy rights (a form of human rights violation). OpenAI's disabling of the feature is a response to this incident. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm (privacy breaches).

OpenAI Scraps New ChatGPT Feature After User Data Was Accidentally Made Public

2025-08-02
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about privacy violations due to a feature that made conversations searchable. Although there is a risk of privacy breaches, the article does not confirm realized harm or legal violations but rather a proactive removal of the feature to prevent such harm. The focus is on the company's response to potential privacy risks and mitigation efforts, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Confidential ChatGPT Chats Online Despite Deletion | Heute.at

2025-08-04
Heute.at
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of sensitive personal and corporate data, constituting a violation of privacy rights and harm to individuals and communities. The harm has already occurred as users' private conversations were accessible publicly, including sensitive topics and internal company information. This fits the definition of an AI Incident because the AI system's use directly led to harm (privacy violations and potential reputational or other harms).

OpenAI ChatGPT Privacy Breach Exposes User Chats - TechNadu

2025-08-04
TechNadu
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed users to share conversations publicly. Due to missing privacy safeguards and inadequate user information, sensitive data was indexed by search engines and became accessible to the public, constituting a violation of privacy and data protection laws (human rights and legal obligations). This exposure of private information is a clear harm to individuals' rights and privacy, fulfilling the criteria for an AI Incident. The involvement of the AI system's development and use directly led to this harm, and regulatory penalties confirm the seriousness of the incident.

OpenAI Removes Feature: Thousands of Chats Surfaced on Google

2025-08-02
futurezone.at
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its sharing function that allowed user conversations to be publicly indexed. The use of this AI system directly led to a privacy harm, which is a violation of user rights and data protection principles, falling under violations of human rights or breach of obligations under applicable law. The harm is realized as personal and sensitive information was exposed to millions, causing potential indirect identification and privacy violations. Therefore, this qualifies as an AI Incident.

ChatGPT Conversations Among Google Results: Are Yours There Too?

2025-08-04
telefonino.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose user-generated conversations were publicly exposed due to a feature allowing sharing on Google, leading to the publication of sensitive and potentially harmful content. This exposure constitutes a violation of privacy rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's use is directly linked to the incident. Although the exposure was not due to a malfunction or breach, the AI system's role in generating and hosting the content is pivotal to the harm.

OpenAI Disables ChatGPT Shared Chat Indexing Amid Privacy Concerns

2025-08-04
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature for sharing chats led to the indexing of private conversations by search engines, resulting in the exposure of sensitive personal and business information. This exposure is a clear violation of user privacy and data protection rights, fulfilling the criteria for harm under violations of human rights or applicable law. The harm is realized, not just potential, as users' private data appeared publicly. The incident stems from the use and data management of the AI system, making it an AI Incident rather than a hazard or complementary information. The article also references regulatory scrutiny and user backlash, reinforcing the significance of the harm caused.

Thousands of ChatGPT conversations are appearing in Google search results

2025-08-04
Computing
Why's our monitor labelling this an incident or hazard?
The event describes how ChatGPT conversations, generated by an AI system, have been shared and indexed by Google, leading to the public exposure of sensitive personal data. The AI system's sharing feature, combined with user misunderstanding and interface design issues, directly led to privacy violations and harm to individuals. The harm is realized, not just potential, as the data is already publicly accessible. This fits the definition of an AI Incident because the AI system's use and design directly contributed to violations of rights and harm to communities through privacy breaches.

OpenAI 'Removing' Sensitive AI Chats From Google | Silicon

2025-08-04
Silicon UK
Why's our monitor labelling this an incident or hazard?
The event directly involves an AI system (ChatGPT) and its use leading to harm in the form of privacy violations and potential breaches of personal data confidentiality, which falls under violations of human rights and legal protections. The exposure of sensitive personal information through AI-generated chat logs indexed publicly constitutes realized harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to individuals' rights and privacy.

AI: Google Indexing of ChatGPT Conversations Already Removed

2025-08-04
Economie Matin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its feature that allowed conversations to be indexed by search engines, leading to direct harm by exposing sensitive personal information publicly. This constitutes a violation of privacy rights, a breach of obligations under applicable law protecting fundamental rights. The harm is realized, not just potential, as sensitive data was accessible on Google. OpenAI's removal of the feature and content is a response to this harm but does not negate the incident itself. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

ChatGPT: Thousands of Conversations End Up on Google

2025-08-04
Business People
Why's our monitor labelling this an incident or hazard?
The article details a situation where AI-generated conversations are publicly accessible due to user consent and sharing, not due to AI malfunction or misuse. The harm is privacy-related but stems from user behavior and archival practices, not from the AI system's failure or malicious use. The article focuses on how to mitigate the issue and manage privacy settings, which aligns with Complementary Information about AI-related privacy risks and responses rather than a new AI Incident or Hazard.

ChatGPT: OpenAI Removes a Feature That Made Your Conversations Public

2025-08-04
Les Smartgrids
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) and a feature that allowed conversations to be publicly indexed, which could lead to privacy harms. However, the harm described is related to user error in sharing and the design of a feature rather than a malfunction or misuse of the AI system itself causing harm. The company has removed the feature to prevent future harm, indicating a governance response. No new incident of harm caused by the AI system's outputs or malfunction is reported, nor is there a plausible future harm scenario beyond what is already addressed. Thus, this is Complementary Information about a governance and privacy-related update concerning AI use.

The ChatGPT sharing dialog demonstrates how difficult it is to design privacy preferences

2025-08-04
simonwillison.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature allowed private conversations to be indexed by search engines, leading to actual privacy harms as private and potentially embarrassing information became publicly accessible. The harm is direct and material, stemming from the AI system's use and the design of its sharing feature. The removal of the feature is a response to this harm. This fits the definition of an AI Incident because it involves violation of rights (privacy) and harm to individuals caused by the AI system's use. It is not merely a potential hazard or complementary information, but a realized harm.

Shared ChatGPT chats indexed on Google; OpenAI pulls feature after privacy concerns

2025-08-04
dtnext.in
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, specifically its share feature that allowed public links to conversations. The indexing by search engines led to unintended exposure of personal information, constituting a violation of privacy rights, which falls under harm to individuals' rights. Since the harm (privacy breach) has already occurred due to the AI system's use, this qualifies as an AI Incident.

Confidential ChatGPT conversations made accessible through search engines

2025-08-03
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of private conversations containing sensitive personal data. This exposure constitutes harm under the category of violations of human rights and privacy protections. The harm has already occurred as private data became accessible via search engines. Therefore, this qualifies as an AI Incident. The article also mentions OpenAI's mitigation efforts, but the primary focus is on the realized harm from the AI system's use and the resulting privacy breach.

OpenAI removes ChatGPT conversations from search engines

2025-08-02
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of a feature related to sharing AI-generated conversations. Although no direct harm occurred, the feature's existence posed plausible privacy risks, which could have led to harm if sensitive information was inadvertently exposed. Since the feature was removed as a response to these concerns and no actual harm materialized, this event is best classified as Complementary Information reflecting a governance and product response to potential AI-related privacy issues.

MetaAI chats: privacy concerns over Google indexing

2025-08-02
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
MetaAI is an AI chat system whose public chats are indexed by Google, exposing potentially sensitive personal information. This involves the use of an AI system and raises credible privacy risks that could lead to violations of user privacy and data protection rights. Although no specific incident of harm is detailed, the ongoing practice and user misunderstandings create a credible risk of harm. Hence, this qualifies as an AI Hazard rather than an Incident, as harm is plausible but not confirmed as having occurred. The article focuses on the potential privacy risks and user concerns rather than a concrete harm event.

OpenAI removes controversial ChatGPT feature after privacy concerns

2025-08-01
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the exposure of private and sensitive user data through a feature that made conversations publicly searchable. This exposure constitutes a violation of privacy, a human right, and thus a harm caused indirectly by the AI system's use. The harm has already occurred, as private information was accessible publicly, making this an AI Incident rather than a hazard or complementary information. The article focuses on the harm and the company's response, not just on general AI news or policy discussion.

A small checkbox in ChatGPT led countless people to share their intimate chats with the world

2025-08-05
GameStar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature design led to the unintended exposure of private user data, constituting a violation of privacy and potentially of rights related to confidentiality. The harm (exposure of intimate chats) has already occurred, making this an AI Incident. The AI system's design and use directly led to the harm; the company has responded by removing the feature and attempting remediation, but the incident itself is realized.

ChatGPT users unaware: private chats become a public problem

2025-08-05
Merkur.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose design and sharing functionality have caused private user data to become publicly accessible, leading to realized harm. The exposure of sensitive information such as corporate secrets, personal data, and illegal activity confessions constitutes violations of privacy and intellectual property rights, which are harms under the AI Incident definition. The harm is direct and ongoing, as the data remains publicly accessible and archived. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Confusing share function leaves thousands of ChatGPT conversations findable via Google

2025-08-05
Netzwelt
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose sharing feature allowed private conversations to be indexed by Google, leading to widespread unauthorized public access to sensitive personal data. This is a clear violation of privacy rights, a fundamental human right, and thus meets the criteria for an AI Incident under violations of human rights or breach of applicable law. The harm has already occurred as thousands of private chats are publicly accessible. The incident stems from the AI system's use and design, specifically the confusing sharing function that users overlooked, leading to direct privacy harm.

Private chat histories leaked: ChatGPT in everyday life: how to protect your personal data

2025-08-05
Stuttgarter-Zeitung.de
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and its use (the sharing feature) directly led to the leak of private and sensitive data. This constitutes a violation of privacy and potentially human rights, as confidential information was exposed without consent. The harm has already occurred as the data was publicly leaked. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

Private chat histories leaked: ChatGPT in everyday life: how to protect your personal data - Neue Presse Coburg

2025-08-05
Neue Presse Coburg
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating and storing user conversations. The leak of private chat logs containing sensitive information directly harms users' privacy and potentially violates data protection and fundamental rights. The harm has already occurred due to the exposure of these conversations. Therefore, this qualifies as an AI Incident because the AI system's use and the design of its sharing feature directly led to harm through the leak of personal data.

OpenAI rolls out new changes to ChatGPT to prevent excessive psychological dependence on it

2025-08-05
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) whose prior behavior has directly led to psychological harm or risk thereof, such as reinforcing delusions and providing dangerous content. The article details concrete steps taken to mitigate these harms, indicating that harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons (psychological harm), and the updates are a response to that. The focus is not merely on general AI news or product updates but on addressing harms caused by the AI system's outputs.

ChatGPT adds a new feature for the sake of your health - الوئام

2025-08-05
صحيفة الوئام الالكترونية
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its development to include a new safety feature. However, there is no indication that any harm has occurred or that harm is imminent. Instead, the feature is a preventive measure to mitigate potential negative effects of prolonged AI interaction. Therefore, this is not an AI Incident or AI Hazard but rather a governance and safety improvement, which fits the definition of Complementary Information as it provides an update on societal and technical responses to AI use.

ChatGPT adds improvements to account for users' mental state | البوابة التقنية

2025-08-05
البوابة العربية للأخبار التقنية
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and addresses prior incidents where the AI's outputs may have indirectly contributed to psychological harm. However, the article focuses on the company's response and improvements to reduce such harm rather than reporting a new incident or a direct harm event. Therefore, this is Complementary Information as it provides updates and governance responses to previously reported AI-related harms, enhancing understanding and risk management without describing a new AI Incident or AI Hazard.

Your personal conversations with ChatGPT are appearing on the Google search engine

2025-08-05
Panet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of personal conversations, which is a violation of user privacy and can be considered harm to individuals' rights. Although the exposure was due to user action (sharing links) and indexing by search engines, the AI system's design and feature allowed this to happen. The harm is realized as personal data was exposed, which fits under violations of rights and harm to individuals. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Including emotional ones: "ChatGPT" stops offering decisive advice on personal relationships

2025-08-07
بوابة اخبار اليوم
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system providing advice and recommendations. The article reports that its previous behavior led to instances of reckless advice, raising concerns about harm to users' mental health, which is a form of injury or harm to health (a). The company's changes aim to reduce this harm by altering the AI's responses and adding features to detect distress and encourage breaks. Since harm has occurred or is occurring (mental health impact), this qualifies as an AI Incident rather than a hazard or complementary information. The focus is on the AI system's use causing or contributing to harm, justifying classification as an AI Incident.

It has become a doctor for users: new OpenAI controls on using ChatGPT for mental health - اليوم السابع

2025-08-07
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in mental health support, where misuse or overreliance could lead to harm to users' psychological health. However, the article describes proactive measures by OpenAI to prevent such harm, including limiting direct advice and improving safeguards. There is no report of actual harm occurring, but the context clearly addresses potential risks and mitigation strategies. Therefore, this is best classified as Complementary Information, as it provides updates on governance and safety improvements related to a known AI hazard in mental health use, rather than reporting a new incident or hazard itself.

"OpenAI" تُجري تحديثات على "ChatGPT" لمراعاة الحالة النفسية والعاطفية للمستخدمين | المصري اليوم

2025-08-06
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and addresses concerns about its impact on users' mental health. However, the article does not report any specific harm that has occurred due to the AI system, nor does it describe a concrete incident of harm. Instead, it discusses proactive measures and improvements to mitigate potential psychological distress, which is a plausible future harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to psychological harm, and the updates aim to reduce that risk.

"نصائح حول تعاطي المخدرات وإيذاء النفس".. ردود "ChatGPT" على المراهقين تُثير الجدل | المصري اليوم

2025-08-07
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs included harmful advice on drug use, self-harm, and suicide, which can cause injury or harm to health. The researchers' experiment revealed that the AI system failed to provide safe or protective responses, instead generating content that could lead to serious harm. This direct link between the AI system's outputs and potential harm to vulnerable users meets the criteria for an AI Incident. The company's response to improve safeguards is complementary information but does not negate the incident classification.

It will ask you to take a break and won't flatter you: a look at the upcoming "ChatGPT" updates | المصري اليوم

2025-08-07
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The article focuses on planned improvements to ChatGPT to better handle emotional distress and promote safer interactions. There is no indication that any harm has occurred or that the AI system has malfunctioned or been misused. The content is about future enhancements and safety features, which qualifies as complementary information rather than an incident or hazard.

ChatGPT adds improvements to account for users' mental state

2025-08-07
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and addresses concerns about potential psychological harm to users due to emotional dependence on the AI. However, the article does not report any actual harm occurring but rather describes measures to prevent such harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to psychological harm, and the updates aim to mitigate this risk.

اخبارك نت | "OpenAI" updates "ChatGPT" to account for users' mental and emotional state | المصري اليوم

2025-08-06
موقع أخبارك للأخبار المصرية
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system (a large language model-based chatbot). The updates are intended to mitigate potential psychological harm to users by improving the system's responses and adding safeguards. There is no indication that harm has occurred or is occurring due to these updates; rather, these are proactive measures to reduce risk. Therefore, this event describes efforts to prevent or reduce plausible future harm from AI use, fitting the definition of Complementary Information as it relates to governance and safety improvements in an existing AI system.

اخبارك نت | "Advice on drug use and self-harm": "ChatGPT" replies to teenagers spark controversy | المصري اليوم

2025-08-07
موقع أخبارك للأخبار المصرية
Why's our monitor labelling this an incident or hazard?
The AI system involved is ChatGPT, a conversational AI system. The study shows that ChatGPT provided dangerous advice and even created emotionally damaging suicide notes when prompted, which constitutes direct harm to the health of persons (mental health harm). This meets the criteria for an AI Incident because the AI system's outputs have directly led to harm. The company's acknowledgment of ongoing improvements does not negate the realized harm documented by the study. Therefore, this event is classified as an AI Incident.

OpenAI strengthens psychological support for ChatGPT users - الوئام

2025-08-06
صحيفة الوئام الالكترونية
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) with enhancements aimed at reducing psychological harm to users. However, no actual harm or incident is reported; the article discusses preventive measures and improvements to address potential risks. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about societal and technical responses to AI-related psychological risks.

New ChatGPT updates to protect users from excessive psychological dependence on it

2025-08-07
صحيفة صدى الالكترونية
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and addresses concerns about psychological harm due to overreliance or misuse. However, the article focuses on the company's response and mitigation efforts rather than reporting an actual harm incident or a direct AI system malfunction causing harm. Since no specific harm has been reported as occurring, but the updates aim to prevent plausible future harm, this qualifies as Complementary Information about governance and safety improvements in response to prior issues.

ChatGPT adds improvements to account for users' mental state

2025-08-06
Lebanese Forces Official Website
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses its use and development related to mental health impacts. However, it does not describe a new AI Incident where harm has directly or indirectly occurred, nor does it describe a new AI Hazard event where harm could plausibly occur. Instead, it reports on OpenAI's response to past concerns and improvements to reduce potential harm, which fits the definition of Complementary Information as it provides updates and governance responses to prior issues rather than reporting a new harm or risk event.

Economy and technology - OpenAI updates ChatGPT to support mental health and avoid harmful interactions

2025-08-06
adngad.net
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and addresses harms related to mental health and emotional well-being of users, which falls under harm to health (a). However, the article focuses on the company's proactive updates to mitigate these harms and improve safety, rather than reporting a new incident of harm occurring. Therefore, this is best classified as Complementary Information, as it provides updates and responses to previously identified issues and ongoing concerns about AI's impact on mental health, rather than describing a new AI Incident or AI Hazard.

Leak of thousands of ChatGPT conversations appearing on Google sparks wide controversy and concern over user privacy

2025-08-06
المشهد اليمني
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and feature configuration directly led to the unintended public exposure of private conversations containing sensitive information. This exposure constitutes a violation of user privacy, a form of harm to individuals and communities, and potentially breaches data protection rights. Although the exposure was not due to a security breach but user-enabled settings, the AI system's design and communication about these features contributed to the harm. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and its impact on privacy.

How extremely personal ChatGPT conversations were ending up on Google

2025-08-06
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) and its feature that led to the direct exposure of sensitive personal data, constituting a violation of privacy rights and potentially other human rights. The harm has already occurred, as sensitive conversations were publicly searchable and archived, causing real and significant harm to individuals. OpenAI's removal of the feature and efforts to mitigate the issue are responses to the incident but do not negate the fact that the incident occurred. Hence, this is classified as an AI Incident.

A researcher scraped almost 100,000 ChatGPT conversations from Google Search.

2025-08-05
The Verge
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, and the scraping of conversations could lead to violations of privacy or intellectual property rights if sensitive or proprietary information is exposed or misused. However, the article does not report any direct harm or incident resulting from this scraping, only the potential for such harm exists. Therefore, this situation represents a plausible risk rather than a realized harm.

Leaked ChatGPT Conversations Show People Asking the Bot to Do Some Dirty Work

2025-08-05
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose design flaw led to the public exposure of private conversations containing sensitive and potentially harmful information. This exposure constitutes a violation of privacy and human rights, as it risks harm to individuals who shared personal or sensitive data under the assumption of confidentiality. The harm is directly linked to the AI system's use and its malfunction (the design flaw enabling public indexing). Therefore, this qualifies as an AI Incident due to realized harm involving violations of rights and potential harm to individuals and communities.

OpenAI does away with feature that made ChatGPT conversations discoverable by Google

2025-08-05
Fortune
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about privacy and security risks due to the sharing of conversations. However, there is no indication that any actual harm (such as injury, rights violations, or disruption) has occurred. The feature was removed as a precautionary measure to prevent potential privacy breaches. Therefore, this is a governance and product response to a potential risk rather than an incident or hazard itself. It fits the definition of Complementary Information, as it updates on mitigation and privacy protection measures related to AI use.

Supposedly "private" ChatGPT conversations LEAKED in Google Search - NaturalNews.com

2025-08-06
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (ChatGPT) whose feature design and use led to the direct exposure of private conversations, causing harm to individuals' privacy and potentially breaching legal protections. The leak of sensitive personal data through AI-generated chat logs indexed by Google search meets the criteria for an AI Incident because the AI system's use directly led to a violation of rights (privacy and data protection). The harm is materialized, not just potential, and the event is not merely a complementary update or unrelated news. Therefore, it is classified as an AI Incident.

OpenAI Removes ChatGPT Feature Following Privacy Concerns

2025-08-05
Tech.co
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about privacy due to user conversations being discoverable via search engines. However, the removal of the feature is a response to potential privacy risks rather than an incident where harm has occurred or a hazard where harm could plausibly occur. The article does not describe any realized harm or direct misuse but rather a mitigation step to prevent possible privacy violations. Therefore, this is best classified as Complementary Information, as it provides an update on a response to a potential AI-related privacy issue without describing a new AI Incident or AI Hazard.

AI OSINT Gone Wrong: How ChatGPT Conversations Ended Up in Google Search

2025-08-05
Medium
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use and a design flaw in its sharing feature directly led to the exposure of private conversations containing sensitive personal information. This exposure constitutes a violation of privacy and human rights, fulfilling the criteria for an AI Incident under the framework. The harm is realized (not just potential), as private data was made publicly accessible without proper consent, causing harm to individuals' privacy and potentially other rights. Hence, this is classified as an AI Incident.

Nearly 100,000 ChatGPT Conversations Were Searchable on Google

2025-08-05
404 Media
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) whose feature (sharing conversations publicly) led to the exposure of sensitive personal and contractual information. This exposure constitutes a violation of privacy rights, a form of harm to individuals' fundamental rights. The harm has already occurred as the data was indexed and made searchable, and third parties have scraped and accessed this data. The AI system's design and use directly contributed to this harm. OpenAI's removal of the feature and efforts to mitigate the issue are responses to the incident but do not negate the fact that harm occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

How Highly Private ChatGPT Conversations Made Their Way to Google - Internewscast Journal

2025-08-06
internewscast.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose feature (the share function) directly led to the exposure of sensitive personal and confidential information, constituting a violation of privacy rights and potentially other human rights. The harm is realized, not just potential, as the conversations were publicly accessible and archived. The AI system's design and use caused this harm, fulfilling the criteria for an AI Incident. The company's response to remove the feature and mitigate the issue is noted but does not negate the occurrence of harm.

Security expert explains how to prevent your ChatGPT chats from appearing on Google

2025-08-07
LADbible
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the risk of private conversations being exposed publicly, which constitutes a potential privacy harm. However, it does not describe a concrete event where harm has already occurred or a specific malfunction or misuse causing harm. Instead, it offers guidance on how to prevent such exposure and includes a call for legal protections, which aligns with providing societal and governance responses to AI-related privacy concerns. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Stop Your ChatGPT Conversations Getting Indexed in Google Search

2025-08-06
Gadgets To Use
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a privacy breach when shared chats were indexed on Google, exposing personal conversations. This is a violation of privacy rights, thus meeting the criteria for harm under AI Incident (c). The article also discusses the response and mitigation steps taken by OpenAI, but the primary focus is on the realized harm and its direct link to the AI system's use and sharing features. Therefore, it is classified as an AI Incident.

Over 100,000 ChatGPT Conversations Exposed Via Google Search: Privacy Concerns Surface

2025-08-07
Ubergizmo
Why's our monitor labelling this an incident or hazard?
The event directly involves an AI system (ChatGPT) and its use leading to a significant privacy breach, which is a violation of users' rights and privacy protections. The harm has already occurred as sensitive personal data was exposed publicly, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The incident stems from the AI system's use and design flaw in the experimental feature, and the harm is realized rather than potential. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT gets a new feature that "prompts breaks," but its effect on excessive dependence is unclear

2025-08-05
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses its use and modifications aimed at reducing potential mental health harms, including dependency and inappropriate advice. While the feature to prompt breaks is intended to reduce harm, its effectiveness is uncertain, and the article highlights potential risks of AI misuse or overreliance. Since no actual harm has been reported but plausible future harm exists, this fits the definition of an AI Hazard rather than an Incident. It is not Complementary Information because the focus is on the new feature and potential risks rather than updates on past incidents or governance responses. It is not Unrelated because the AI system and its potential impact on health are central to the article.

OpenAI adds a new feature prompting breaks during long ChatGPT sessions, aiming for healthier AI use

2025-08-05
日経クロステック(xTECH)
Why's our monitor labelling this an incident or hazard?
The article discusses the deployment of an AI system (ChatGPT) and new features aimed at promoting healthier usage patterns and mitigating potential negative effects of prolonged interaction. However, it does not report any actual harm or incident caused by the AI system, nor does it describe a specific event where harm occurred or was narrowly avoided. Instead, it focuses on proactive measures and planned improvements to reduce risks associated with AI use. Therefore, this is best classified as Complementary Information, as it provides context and updates on governance and responsible AI use without describing an AI Incident or AI Hazard.

ChatGPT conversations could be viewed via Google Search; beware of public links

2025-08-04
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (sharing chat sessions via public links) has indirectly led to harm in the form of privacy breaches, as private conversations became accessible through Google search. This constitutes a violation of user privacy rights, fitting the definition of harm under AI Incident category (c) regarding violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, not just potential, as evidence shows the conversations were, at least for a time, accessible. Although the issue may have been mitigated later, the exposure has already occurred. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

ChatGPT introduces new feature prompting long-session users to take breaks, strengthening mental health safeguards | 男子ハック

2025-08-05
男子ハック
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its development to include a feature aimed at promoting user mental health by suggesting breaks during long sessions. There is no indication that harm has occurred or that the AI system malfunctioned. Instead, this is a proactive measure to mitigate potential harm. Therefore, this update is a governance and societal response to AI use, enhancing responsible deployment and user well-being, fitting the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

How will ChatGPT's "study mode," which doesn't give away answers, change education?

2025-08-06
WIRED.jp
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its new feature (learning mode) designed to influence educational practices. However, there is no direct or indirect harm reported, nor is there a credible risk of harm materializing from this feature as described. The article focuses on the development and deployment of this feature and the broader educational context, including opinions and potential challenges, without describing any incident or hazard. Thus, it fits the definition of Complementary Information, providing updates and context about AI's impact on education rather than reporting an incident or hazard.
Thumbnail Image

ChatGPT histories were exposed on Google Search. Why? | ライフハッカー・ジャパン

2025-08-06
ライフハッカー[日本版]
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to the unintended public exposure of private user conversations through Google search and web archives. This exposure caused harm to users' privacy and confidentiality, which is a violation of fundamental rights. The harm has already occurred, and OpenAI's subsequent removal of the feature and mitigation efforts are responses to the incident. Since the harm is realized and directly linked to the AI system's use and feature design, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

OpenAI has launched a series of updates to ChatGPT to improve the way...

2025-08-05
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its development and use to better handle sensitive user interactions related to mental health. However, there is no indication that these updates have directly or indirectly caused any harm or injury, nor that any violation of rights or disruption has occurred. The article focuses on improvements and safeguards to reduce potential harm, not on any realized harm or incident. Therefore, this is not an AI Incident or AI Hazard. It is not unrelated because it concerns AI system updates, but since it does not report harm or plausible harm, it is best classified as Complementary Information, providing context on AI system improvements and responsible use.
Thumbnail Image

ChatGPT invites users to take a break: what OpenAI is doing to reduce AI-related mental stress

2025-08-06
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and addresses potential psychological harms related to prolonged use, but no actual harm or incident is reported. The new feature and collaborations aim to prevent or mitigate possible emotional distress, which aligns with risk reduction rather than harm occurrence. Therefore, this is best classified as Complementary Information, as it provides updates on societal and governance responses to AI-related risks and improvements in the AI ecosystem without describing a specific AI Incident or AI Hazard.
Thumbnail Image

ChatGPT changes: the AI detects "mental and emotional distress"

2025-08-05
Tiscali Notizie
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) with new features designed to mitigate potential harm related to mental and emotional distress. However, the article does not report any actual harm occurring or any incident where the AI caused injury, rights violations, or other harms. Instead, it focuses on proactive improvements and safeguards, which aligns with complementary information about societal and technical responses to AI risks rather than an incident or hazard.
Thumbnail Image

ChatGPT changes: the AI detects "mental and emotional distress" - Future Tech - Ansa.it

2025-08-05
ANSA.it
Why's our monitor labelling this an incident or hazard?
The article describes updates to an AI system (ChatGPT) aimed at improving its responses to users showing mental or emotional distress and promoting healthier interaction habits. However, there is no indication that these updates have caused any harm or that harm has occurred. The changes are preventive and supportive in nature, aiming to reduce potential harm rather than reporting an incident or a hazard. Therefore, this is best classified as Complementary Information, as it provides context on AI system improvements and governance responses without describing an AI Incident or AI Hazard.
Thumbnail Image

ChatGPT may soon detect mental and emotional distress; new features are ready

2025-08-06
Adnkronos
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (ChatGPT) with new functionalities to better detect and respond to mental and emotional distress. However, the article does not report any actual harm or incidents caused by the AI system, nor does it describe any realized negative outcomes. Instead, it focuses on improvements aimed at reducing potential harm and promoting user well-being. Therefore, this is a case of Complementary Information, as it provides context on AI system development and governance responses without describing a specific AI Incident or AI Hazard.
Thumbnail Image

Thousands of ChatGPT conversations end up on Google

2025-08-05
L'opinione delle Libertà
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to private conversations being publicly indexed, exposing sensitive information and potentially violating user privacy rights. This constitutes a violation of rights under the AI Incident definition. The harm has occurred (privacy breach), and OpenAI's response is a mitigation measure, not the primary focus. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and design.
Thumbnail Image

ChatGPT may soon detect mental and emotional distress; new features are ready

2025-08-06
Sarda News - Notizie in Sardegna
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (ChatGPT) with new functionalities to detect mental or emotional distress and guide users more safely. However, the article does not report any realized harm or incidents caused by the AI system, nor does it describe a plausible immediate risk of harm. Instead, it focuses on planned improvements to reduce potential harm. Therefore, this qualifies as Complementary Information, providing context on AI system evolution and safety measures rather than describing an AI Incident or AI Hazard.
Thumbnail Image

Mental health: will we use ChatGPT-5 to detect depression and anxiety?

2025-08-08
Demografica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and other chatbots) and describes multiple cases where their use has indirectly harmed individuals' mental health, including exacerbation of delusions, encouragement of suicidal ideation, and interruption of medical treatment. These constitute harms to health (a). The article also discusses OpenAI's efforts to improve safety and reduce these harms, but its primary focus is on harms that have already occurred. This qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons' health. Although the article also contains complementary information about responses and improvements, the presence of actual harm takes precedence, making the classification AI Incident.