ChatGPT Data Leak Exposes User Conversations and Credentials

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In a recent security incident, ChatGPT users inadvertently accessed unrelated private conversations and login credentials belonging to other accounts. OpenAI attributes the breach to attackers exploiting compromised user accounts, but the leak of usernames, passwords, and sensitive chat histories underscores significant data security and privacy vulnerabilities in the AI platform.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) whose malfunction or security failure (account compromise and data leakage) directly led to harm in terms of privacy violations and potential exposure of sensitive personal information. This fits the definition of an AI Incident as it involves harm to individuals' rights and privacy due to the AI system's use and security failure.[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Accountability, Transparency & explainability

Industries
Consumer services, Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Reputational, Psychological

Severity
AI incident

Business function
Citizen/customer service, ICT management and information security

AI system task
Interaction support/chatbots, Content generation

In other databases

Articles about this incident or hazard


ChatGPT user finds passwords of other users in conversation - MSPoweruser

2024-01-31
MSPoweruser
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose malfunction or security failure (account compromise and data leakage) directly led to harm in terms of privacy violations and potential exposure of sensitive personal information. This fits the definition of an AI Incident as it involves harm to individuals' rights and privacy due to the AI system's use and security failure.

ChatGPT account login credentials of users were compromised, OpenAI confirms

2024-01-31
India Today
Why's our monitor labelling this an incident or hazard?
The event describes a confirmed security breach involving an AI system (ChatGPT) where compromised login credentials allowed malicious actors to access private user data, including sensitive conversations and login details. This constitutes a violation of user privacy and data protection rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law protecting fundamental rights. The harm has already occurred, not just a potential risk, making this an AI Incident. The involvement of the AI system is explicit, and the misuse of the system's user accounts directly led to the harm described.

ChatGPT leaking private chats, login credentials: Here's what company has to say

2024-01-31
Gadget Now
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (ChatGPT) whose malfunction or security vulnerability (account takeovers and data leakage) has directly led to harm in the form of privacy violations and exposure of sensitive personal and professional information. The unauthorized access and leakage of private chats constitute a breach of obligations under applicable laws protecting fundamental rights, including privacy and data protection. Therefore, this qualifies as an AI Incident because the AI system's use and security failure have directly caused harm.

OpenAI Denies Report That ChatGPT Leaked User Passwords

2024-01-31
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses potential harm related to privacy breaches and data leakage. However, OpenAI denies that the AI system itself leaked data, attributing the issue to compromised user accounts and misuse by bad actors. No direct or indirect harm caused by the AI system's malfunction or use is confirmed in this report. The article also references past incidents and vulnerabilities, which are background context. Thus, the main focus is on clarifying and updating the understanding of previous and current security concerns, fitting the definition of Complementary Information rather than an AI Incident or Hazard.

ChatGPT Leaking Private Chats and Login Credentials: What You Need to Know - Times of India

2024-01-31
The Times of India
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used for generating conversational responses. The reported leakage of private chats and login credentials due to unauthorized access constitutes a direct harm to individuals' privacy and security, which falls under violations of rights and harm to persons. The involvement of the AI system in the leak and the resulting harm qualifies this as an AI Incident.

ChatGPT leaking private chats, login credentials: Here's what company has to say - Times of India

2024-01-31
The Times of India
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system (a generative AI chatbot). The leaks of private chats and login credentials represent realized harm to users' privacy and security, which is a violation of rights and harm to individuals. The cause is linked to compromised accounts and security weaknesses in the AI system's deployment. Therefore, this qualifies as an AI Incident because the AI system's use and vulnerabilities have directly led to harm.

ChatGPT Leaking Conversations With Random People And That's A Big Worry - News18

2024-02-02
News18
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose malfunction or security breach (account takeover) led to the leaking of private conversations containing sensitive personal and work-related information. This directly harms individuals' privacy and could violate data protection laws, thus meeting the criteria for an AI Incident. The involvement of the AI system in the leak and the realized harm to users' privacy justifies this classification.

ChatGPT is Reportedly Leaking Passwords in Conversations

2024-01-30
The How-To Geek
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system that processes and stores user conversations. The reported leakage of private conversations and credentials is a malfunction of the AI system's data management and security protocols. This has directly led to harm by exposing sensitive personal information, violating privacy rights and potentially enabling further harm such as unauthorized access to accounts. The incident is not hypothetical or potential; it has already occurred and caused harm. Hence, it meets the criteria for an AI Incident under violations of human rights and legal obligations.

ChatGPT leaks sensitive conversations, ignites privacy concerns: Here's what happened

2024-01-31
mint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose malfunction or misuse has directly led to the unauthorized disclosure of sensitive personal and organizational information, constituting a violation of privacy rights. This fits the definition of an AI Incident because the AI system's use or malfunction has directly led to harm (privacy breach). Although OpenAI claims the issue stems from a compromised account, the AI platform's role in displaying unrelated users' conversations indicates a system failure or misuse that caused the harm. Therefore, this is classified as an AI Incident due to realized harm involving an AI system.

ChatGPT leaks personal data -- how to lock down your account

2024-01-31
LaptopMag
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose compromised accounts have led to unauthorized access and leakage of personal data, constituting harm to individuals' privacy and potentially violating their rights. The AI system's use (account management and chat history storage) is directly linked to the harm. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's use and security vulnerabilities.

ChatGPT reportedly leaked private conversations from pharmacy customers

2024-01-30
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, a large language model AI system, which generated outputs containing private and sensitive information from a pharmacy customer, including login credentials and personal data. This leak is a direct consequence of the AI system's malfunction or failure to properly anonymize and protect data as per OpenAI's privacy policy. The harm is realized as private information was exposed, constituting a violation of privacy rights and potentially legal obligations. The incident is not merely a potential risk but an actual data leak, thus qualifying as an AI Incident rather than a hazard or complementary information. The involvement of the AI system in causing the harm is clear and direct.

Be careful of what you share with AI: ChatGPT appears to be leaking private conversations

2024-01-30
Android Authority
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) malfunctioning or having a security flaw that led to the unauthorized exposure of private and sensitive user data. This directly harms individuals by compromising their privacy and potentially violating their rights. The harm has already occurred as users' private conversations were leaked. Therefore, this qualifies as an AI Incident under the definition of violations of human rights or breach of obligations intended to protect fundamental rights, including privacy.

Ars reader reports ChatGPT is sending him conversations from unrelated AI users

2024-01-30
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, a large language model AI system, leaking private conversations and sensitive data such as usernames and passwords from unrelated users. This is a direct malfunction or failure in the AI system's handling of user data, leading to a clear violation of privacy rights and potentially other legal obligations. The harm is actual and ongoing, not merely potential, as users' private information has been exposed. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction and the breach of user rights.

ChatGPT Privacy Breach: Leaked Conversations Raise Concerns

2024-01-31
The Hans India
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (ChatGPT) and results in harm to privacy, which is a violation of user rights. However, the root cause is compromised user credentials and malicious use of the account, not a direct malfunction or failure of the AI system. Therefore, it is an AI Incident because the AI system's use (including misuse) led to harm, even if indirectly. The AI system's role is pivotal as the platform where the breach occurred and sensitive data was exposed.

ChatGPT Accused of Leaking User Passwords

2024-01-30
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) malfunctioning by revealing private and unrelated user data, including sensitive login credentials and medical information. This malfunction directly leads to harm by compromising user privacy and potentially violating legal protections. The repeated nature of these leaks and the exposure of sensitive data confirm that this is an AI Incident rather than a mere hazard or complementary information. The harm is realized, not just potential, as users' private data has been disclosed to unauthorized parties.

OpenAI is still dealing with chats being leaked

2024-01-30
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event describes a situation where ChatGPT conversations containing sensitive data were leaked. Although the AI system is involved as the platform where the data was stored, the root cause of the leak was unauthorized access to a user's account by a bad actor, not a failure or misuse of the AI system itself. The harm (exposure of sensitive information) has occurred, but it is indirectly linked to the AI system since the AI system was used as a repository for the data. Given that the leak stems from account compromise rather than AI malfunction or misuse, this qualifies as an AI Incident due to realized harm involving sensitive data exposure through the AI platform.

OpenAI Has Denied Reports That ChatGPT Leaked User Passwords - Wonderful Engineering

2024-01-31
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The article discusses alleged privacy breaches involving an AI system (ChatGPT) and references past incidents and vulnerabilities. However, the primary cause of the reported data exposure is attributed to compromised user credentials, not a malfunction or misuse of the AI system itself. The AI system's role is indirect and disputed, and no new direct or indirect harm caused by the AI system's development, use, or malfunction is clearly established. The article mainly provides updates and context on privacy concerns and responses, fitting the definition of Complementary Information rather than an Incident or Hazard.

ChatGPT leak exposes private conversations and login credentials! Know what happened

2024-01-31
HT Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) where private conversations and login credentials were leaked to an unrelated user. Although OpenAI states the root cause was misuse of a compromised account, the AI system's use and malfunction (or security failure) directly led to the harm of privacy violation. The harm is realized and significant, involving exposure of sensitive personal data. This fits the definition of an AI Incident as the AI system's use/malfunction directly led to harm to persons' privacy and security, which is a violation of rights and harm to individuals. The event is not merely a potential risk or a complementary update but a realized harm involving an AI system.

ChatGPT Leaks Sensitive Data - Spiceworks

2024-02-01
Spiceworks
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT, a generative AI large language model) whose use has directly led to harm in the form of privacy violations and exposure of sensitive personal data. The leak of user conversations and credentials constitutes a breach of privacy and security, which falls under violations of human rights and legal obligations protecting personal data. Therefore, this qualifies as an AI Incident because the AI system's use and its vulnerabilities directly caused harm to users' privacy and security.

ChatGPT Security Concerns Emerge: Personal Details Reportedly Leaked

2024-02-01
Gizbot
Why's our monitor labelling this an incident or hazard?
The event describes actual leaks of private user data through ChatGPT, an AI system, resulting in harm to users' privacy and security. The leaks are linked to account compromises and possibly system vulnerabilities, which are related to the AI system's use and security management. This constitutes a violation of rights and harm to individuals, meeting the criteria for an AI Incident. The involvement of the AI system is explicit, and the harm has materialized, not just a potential risk. Therefore, the classification is AI Incident.

Scandal with ChatGPT: leaked private conversations and exposed accounts revealed

2024-01-31
El Español
Why's our monitor labelling this an incident or hazard?
The incident involves the use and malfunction (security failure) of an AI system (ChatGPT) that directly led to harm in the form of privacy violations and exposure of sensitive personal data. The AI system's role is pivotal as the platform's account compromise allowed unauthorized access to private conversations. This fits the definition of an AI Incident because it caused a violation of human rights and data protection obligations. The event is not merely a potential hazard or complementary information but a realized harm due to the data exposure.

ChatGPT reported to be possibly leaking users' private conversations

2024-01-31
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves ChatGPT, an AI system, which is reportedly exposing private conversations and sensitive data from multiple users. This is a direct harm to users' privacy and potentially violates data protection and intellectual property rights. The AI system's malfunction or failure to properly segregate user data has led to this harm. The presence of leaked credentials and confidential information confirms realized harm, not just potential risk. Hence, this is classified as an AI Incident.

Beware of ChatGPT: a user reports that it may be leaking private conversations

2024-01-31
El Confidencial
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, ChatGPT, which is leaking private and sensitive information from other users into a user's chat history. This is a direct malfunction or misuse of the AI system leading to a violation of privacy rights and confidentiality, which falls under harm to persons and violation of rights. The harm is realized, not just potential, as private data has been disclosed. Therefore, this qualifies as an AI Incident.

Is ChatGPT leaking passwords from its users' private conversations?

2024-02-02
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) whose malfunction or security failure has directly led to the exposure of private and sensitive user data, including passwords and personal conversations. This constitutes a violation of user privacy and fundamental rights, fulfilling the criteria for an AI Incident. Although OpenAI claims the issue was due to account compromise, the event still involves the AI system's use and security leading to harm. Hence, it is not merely a hazard or complementary information but an incident with realized harm.

ChatGPT may be accidentally leaking your private conversations - La Opinión

2024-02-01
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that ChatGPT, an AI system, is involved in accidentally exposing private conversations and sensitive data of users to others. This constitutes a violation of privacy rights and a breach of obligations under applicable data protection laws, which fits the definition of an AI Incident. The harm is realized, not just potential, as users have already seen private data from other accounts. Therefore, this event qualifies as an AI Incident due to the direct involvement of the AI system's malfunction leading to harm.

Warnings that ChatGPT may be leaking user data

2024-01-31
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
ChatGPT is explicitly identified as an AI system (a large language model chatbot). The article reports realized harm: unauthorized disclosure of sensitive user data, including passwords and private information, which constitutes a violation of user privacy and potentially applicable data protection laws. The leaks are linked directly to the AI system's behavior, including a demonstrated exploit involving repeated word prompts causing the system to reveal confidential data. This meets the criteria for an AI Incident because the AI system's malfunction has directly led to harm (privacy violations).

Is ChatGPT leaking passwords and private conversations? - Digital Trends Español

2024-01-30
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
ChatGPT is explicitly mentioned as the AI system involved. The reported events describe the AI system malfunctioning or being exploited to reveal private and sensitive information, including passwords and personal conversations. This directly leads to harm in the form of violations of privacy and potentially breaches of applicable data protection laws, which falls under violations of human rights or legal obligations. Therefore, this qualifies as an AI Incident because the AI system's malfunction or misuse has directly led to harm.

A user has seen ChatGPT leaking third-party conversations (even including passwords) without having to do 'anything'

2024-01-30
Genbeta
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) malfunctioning by exposing private conversations and sensitive data of other users to an unrelated user. This constitutes a direct harm to individuals' privacy and a violation of rights under applicable data protection laws. The AI system's malfunction directly led to this harm, fulfilling the criteria for an AI Incident. Therefore, this event should be classified as an AI Incident due to the realized harm caused by the AI system's failure to properly isolate user data.

ChatGPT may be leaking our passwords and private conversations without our knowledge

2024-01-30
FayerWayer
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system that processes and stores user conversations. The reported incidents involve the AI system malfunctioning or having vulnerabilities that caused exposure of private data such as usernames, passwords, and private conversations to unauthorized users. This constitutes a direct harm to users' privacy and security, which falls under violations of human rights and breach of obligations to protect fundamental rights. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's malfunction and use.

ChatGPT leaks private conversations that include passwords and other personal data of users unrelated to those queries

2024-01-30
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves ChatGPT, an AI system, malfunctioning or mismanaging user data, leading to the direct exposure of sensitive personal information such as passwords and private conversations to unrelated users. This clearly results in harm through violation of privacy and potentially breaches legal obligations protecting personal data and fundamental rights. Therefore, it qualifies as an AI Incident under the definitions provided, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

Generative AI: How to use ChatGPT safely - TyN Magazine

2024-01-30
TyN Magazine
Why's our monitor labelling this an incident or hazard?
The article centers on informing users about the safe use of ChatGPT and the associated risks, including data privacy and accuracy issues. It does not describe any realized harm or a specific event where ChatGPT caused injury, rights violations, or other harms. Nor does it describe a credible imminent risk or hazard event. Therefore, it fits the definition of Complementary Information, as it provides supporting context and guidance related to AI systems and their safe use without reporting a new AI Incident or AI Hazard.

January 30, 2024

2024-01-30
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, which has malfunctioned by leaking private conversations and credentials from unrelated users. The harm is realized as private and sensitive data exposure, which is a violation of privacy rights and data protection laws. The incident is not hypothetical or potential but has already occurred, with multiple examples of leaked data provided. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's malfunction and use.

OpenAI denies the existence of a serious flaw in ChatGPT

2024-01-31
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose malfunction or security breach led to the exposure of sensitive conversations of other users to an unrelated user. This constitutes a violation of privacy and data protection rights, which falls under violations of human rights or breach of applicable law. Although OpenAI denies a system flaw and suggests account hacking, the AI system's role in storing and displaying conversations is central to the harm. Therefore, this qualifies as an AI Incident due to realized harm involving sensitive data exposure linked to the AI system's use or malfunction.

He accuses ChatGPT of leaking passwords; OpenAI denies it

2024-01-31
01net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) where the use of the system, through a compromised user account, led to the exposure of sensitive information including passwords and usernames. This exposure constitutes harm to privacy and potentially breaches legal protections. Although OpenAI denies a system malfunction and attributes the issue to account hacking, the AI system's use was pivotal in the harm occurring. Therefore, this is an AI Incident due to realized harm caused indirectly by the AI system's use under compromised conditions.

This user had a very unpleasant surprise with ChatGPT

2024-02-01
Futura
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose malfunction or security failure has directly led to harm in the form of unauthorized access to personal and sensitive data of multiple users, constituting a violation of privacy rights. This fits the definition of an AI Incident as it involves harm to individuals' rights due to the AI system's use and malfunction. The presence of personal data exposure and account compromise indicates realized harm, not just potential risk.

ChatGPT leaks data and private conversations

2024-01-30
KultureGeek
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, whose malfunction in handling cached conversation data caused private and sensitive information to be exposed to unintended users. This directly harms users' privacy and violates data protection rights, fitting the definition of an AI Incident. The harm is realized, not just potential, and the AI system's malfunction is the root cause. Hence, it is classified as an AI Incident.

ChatGPT leaks passwords from its users' private conversations. OpenAI tries to downplay the problem by citing account hacking, but this explanation has its limits

2024-01-31
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, whose malfunction or data handling errors have directly led to the disclosure of private conversations and sensitive personal information to unauthorized users. This constitutes a violation of users' privacy rights and breaches obligations to protect personal data, fitting the definition of harm under AI Incident category (c) regarding violations of human rights and legal protections. The harm is realized, not just potential, as users have already been exposed to others' private data. The incident is not merely a potential risk or a complementary update but a concrete case of AI system failure causing harm. Therefore, the classification as AI Incident is appropriate.