ChatGPT macOS App Vulnerability Exposes User Conversations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A vulnerability in the ChatGPT app for macOS, discovered by developer Pedro José Pereira Vieito, exposed user conversations by storing them in plain text without encryption. This posed a significant privacy risk. OpenAI quickly addressed the issue by releasing an update that encrypts stored conversations, enhancing user data protection.[AI generated]
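
The core flaw is easy to demonstrate. The sketch below (Python, with a hypothetical file name and message — not OpenAI's actual storage path) shows why an unencrypted, unsandboxed conversation store is a privacy risk: any process running under the same user account can read it with ordinary file APIs, no exploit required.

```python
# Minimal sketch of the risk, under assumed names/paths: a chat log written
# as plain text is readable by any process running as the same user.
import json
import tempfile
from pathlib import Path

# The app writes a conversation to a predictable, unencrypted location
# (hypothetical file name for illustration).
store = Path(tempfile.mkdtemp()) / "conversations-v1.json"
store.write_text(json.dumps({"messages": ["my secret question"]}))

# "Another process" only needs the path; the OS grants read access by default.
leaked = json.loads(store.read_text())
print(leaked["messages"][0])  # any app or malware on the machine could do the same
```

Encrypting the file at rest (as OpenAI's update does) or sandboxing the app so other processes cannot reach its container removes this trivial read path.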

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) and a security flaw in its macOS app that exposed user conversations, which constitutes a privacy and data security harm. Since the harm (exposure of conversations) has occurred due to the app's design and storage method, this qualifies as an AI Incident. The update fixing the flaw is a response to this incident. Therefore, the event is classified as an AI Incident because the AI system's use led to a direct harm (privacy breach).[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Respect of human rights, Accountability

Industries
Consumer services, Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Reputational, Psychological, Economic/Property

Severity
AI incident

AI system task
Interaction support/chatbots, Content generation

Articles about this incident or hazard

In case you missed it, ChatGPT on Mac suffered a serious security issue

2024-07-08
Mashable
Why's our monitor labelling this an incident or hazard?
No actual data breach or user harm is reported; the piece focuses on the discovery of the vulnerability and OpenAI’s remediation via a software update. It serves as an update on mitigation measures rather than describing a new incident or hazard or broader governance action.

ChatGPT On MacOS Previously Stored Conversations In Plain Text - Lowyat.NET

2024-07-05
Lowyat.NET
Why's our monitor labelling this an incident or hazard?
This event describes a weakness in an AI system’s implementation that could plausibly lead to unauthorized access to user data (a violation of privacy/human rights), but no actual data breach or harm has been reported. It therefore constitutes an AI Hazard rather than an incident.

A Developer Easily Retrieved Conversations With ChatGPT On Mac Hidden Behind Plain Text, OpenAI Responds

2024-07-05
Wccftech
Why's our monitor labelling this an incident or hazard?
This piece focuses on the discovery of a prior storage vulnerability and its remediation through an update. No actual data breach or user harm is reported, and the main narrative is the company’s mitigation measures, making it complementary information rather than a new incident or hazard.

OpenAI hack revealed as ChatGPT flaws exposed: Is your data at risk?

2024-07-05
LaptopMag
Why's our monitor labelling this an incident or hazard?
Although these events involve AI systems and potential risks to user data, there is no reported instance of actual user data being exfiltrated or harmed. The piece primarily provides updates on discovered vulnerabilities, remediation measures, and user guidance, rather than reporting a new realized incident or warning of a future unaddressed risk. This aligns with the definition of Complementary Information.

OpenAI's ChatGPT Mac App Was Saving Chats as Plain Text

2024-07-07
PC Magazine
Why's our monitor labelling this an incident or hazard?
OpenAI’s ChatGPT Mac app is an AI system whose insecure design resulted in conversations being stored unencrypted and unsandboxed, enabling any process or malware on the machine to read private chats. Although no specific exploitation has been reported, this flaw creates a plausible privacy breach scenario, constituting an AI Hazard under the framework.

ChatGPT's much-heralded Mac app was storing conversations as plain text

2024-07-05
Ars Technica
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and the event concerns the use and design of the system leading to a security flaw that could result in harm to users' privacy and data security. Although no direct harm is reported as having occurred, the vulnerability plausibly could lead to harm if exploited. The update to encrypt data mitigates the risk but does not fully resolve sandboxing issues. Since the article focuses on the security issue and its remediation, and the risk of harm is credible, this qualifies as an AI Hazard rather than an Incident (no confirmed harm) or Complementary Information (the update is part of the hazard narrative).

Using ChatGPT For macOS? Here's Why You Should Update Right Away - MySmartPrice

2024-07-05
MySmartPrice.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and a security flaw in its macOS app that exposed user conversations, which constitutes a privacy and data security harm. Since the harm (exposure of conversations) has occurred due to the app's design and storage method, this qualifies as an AI Incident. The update fixing the flaw is a response to this incident. Therefore, the event is classified as an AI Incident because the AI system's use led to a direct harm (privacy breach).

OpenAI says that it's encrypting chats on Mac's ChatGPT desktop app, after controversy

2024-07-05
MSPoweruser
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and use. The security flaw allowed unauthorized access to private conversations, which constitutes a violation of privacy rights, a form of harm to individuals. Although OpenAI has updated the app to encrypt chats, the lack of sandboxing still poses potential future risks. Since the harm (privacy violation) has already occurred due to the plain text storage and unauthorized access, this qualifies as an AI Incident. The update to encrypt chats is a response but does not negate the fact that harm was realized.

The ChatGPT macOS app had a huge bone-headed privacy flaw

2024-07-05
Android Headlines
Why's our monitor labelling this an incident or hazard?
An AI system (the ChatGPT macOS app) is explicitly involved, and its use led to a direct harm: the exposure of private user conversations stored insecurely, which constitutes a violation of privacy rights and a breach of obligations to protect user data. The harm is realized because the data was accessible in plain text and could be extracted by malicious actors, posing a clear risk to users' privacy and security. Although the issue was fixed after discovery, the incident of reckless data handling and the potential for harm had already occurred. Therefore, this qualifies as an AI Incident due to the direct harm to users' privacy caused by the AI system's malfunction or poor design.

ChatGPT's Mac app exposed user conversations

2024-07-08
Computing
Why's our monitor labelling this an incident or hazard?
The ChatGPT macOS app is an AI system enabling user interaction with an AI language model. The app's storing of conversation history in plain text in a non-protected location directly led to a privacy breach, which is a violation of user rights and confidentiality. This constitutes harm under the AI Incident definition (violation of rights). Although OpenAI has released an update encrypting the data, the initial exposure and the app's bypassing of sandbox protections mean the AI system's use and design directly caused harm. Therefore, this event is best classified as an AI Incident rather than a hazard or complementary information.

OpenAI's Security Blunders Raise Cause for Concern

2024-07-08
Digit
Why's our monitor labelling this an incident or hazard?
The article involves AI systems as it concerns OpenAI's AI technology and the ChatGPT app, which is an AI system. The data breach and security lapses relate to the development and use of AI systems. However, no direct or indirect harm to persons, infrastructure, rights, property, or communities has been reported. The breach did not expose customer or partner data, and no AI malfunction or misuse causing harm is described. The departures of safety leads and security concerns indicate potential future risks. Thus, this event fits the definition of an AI Hazard, as it plausibly could lead to harm if security issues persist or worsen, but no AI Incident has occurred yet.

ChatGPT Mac App saved plain text chats to local files, updated version resolves the issue

2024-07-05
O'Grady's PowerPage
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT app) was involved, and its use led to a privacy risk that could be considered a violation of user rights (a breach of obligations under applicable law protecting privacy). Since the harm was realized in the form of exposure of private conversations, this qualifies as an AI Incident. The update and response by OpenAI to fix the issue is complementary information but does not negate the incident classification. Therefore, the event is classified as an AI Incident due to the realized privacy harm caused by the AI system's design and use.

Update your ChatGPT app on macOS to fix a security flaw

2024-07-05
Tech Edition
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and the event concerns a security flaw in its use that led to a direct risk of harm to users' privacy and data security. Although no actual data breach or harm is reported, the exposure of private conversations constitutes a realized harm to users' privacy rights, which falls under violations of human rights or breach of obligations to protect fundamental rights. OpenAI's update mitigates this harm. Therefore, this event qualifies as an AI Incident due to the realized privacy harm caused by the AI system's prior insecure data handling.

OpenAI swiftly fixes ChatGPT Mac app vulnerability

2024-07-05
NewsReports
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT app) was involved, and a vulnerability related to data storage security was found. Although no actual data breach or harm is reported, the vulnerability could plausibly have led to unauthorized access and harm to user privacy (a form of harm to persons). Since the issue was fixed before any harm occurred, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the vulnerability and its remediation, not on a realized harm or ongoing incident.

After ChatGPT Flaw Affected MacOS Users: What Does It Mean When A Chat Is Not Encrypted? - Bullfrag

2024-07-06
Bullfrag
Why's our monitor labelling this an incident or hazard?
The ChatGPT application is an AI system. The reported vulnerability involved storing user chats in plain text locally, which is a failure in the AI system's implementation and security design. This failure directly led to a risk of harm to users' privacy and data security, which is a violation of rights under applicable data protection laws. Although OpenAI quickly released a fix, the harm or risk of harm was realized during the period the vulnerability existed. Therefore, this event meets the criteria for an AI Incident due to the direct link between the AI system's malfunction (lack of encryption) and the harm (privacy and security risk).

The ChatGPT app for Mac has a major security flaw

2024-07-03
Frandroid
Why's our monitor labelling this an incident or hazard?
This report describes a security flaw in the deployment of an AI system (the ChatGPT macOS client) that has not yet caused a known data breach but creates a credible risk of harm (user data exposure). Therefore it qualifies as an AI Hazard, since the vulnerability could plausibly lead to privacy and security incidents.

The ChatGPT app on Mac contained a privacy flaw

2024-07-04
20 Minuten
Why's our monitor labelling this an incident or hazard?
This is a case where an AI system’s design (the ChatGPT Mac client) created a privacy risk that could plausibly lead to unauthorized disclosure of user data. While no breach has been reported, the vulnerability presents a clear potential harm to confidentiality. Therefore it constitutes an AI Hazard rather than a realized incident or merely routine product news.

Oops: the official ChatGPT app is not sandboxed and stores the…

2024-07-02
MacGeneration
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT app) is involved, and its use has led to a direct risk of harm to user privacy and data security. The lack of sandboxing and unencrypted storage of conversations constitutes a breach of data protection principles, potentially violating user rights to privacy and confidentiality. Since the harm (exposure of private conversations) is realized or highly likely due to the app's design, this qualifies as an AI Incident involving violation of rights and harm to individuals' privacy.

The ChatGPT app on Mac no longer stores conversations in plain text

2024-07-03
iPhoneAddict.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT application) whose use led to a privacy and security risk—specifically, the exposure of user conversation data in plaintext. This constitutes a harm related to user privacy and data security, which can be considered a violation of rights or harm to individuals. Since the harm (exposure of conversations) has already occurred and the AI system's design and deployment directly contributed to this, it qualifies as an AI Incident. The update by OpenAI to encrypt the data is a remediation but does not change the fact that the incident occurred.

Warning if you use the ChatGPT app on macOS: it is not secure at all

2024-07-02
PhonAndroid
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved. The issue arises from the use and implementation of the AI system, specifically how it stores user data insecurely. This has directly led to a significant risk of harm to users' privacy and potentially their rights, as unauthorized access to private conversations can violate privacy rights and lead to further harms. Although no specific harm is reported as having occurred yet, the vulnerability plausibly leads to an AI Incident because the AI system's malfunction (insecure data storage) directly exposes users to harm. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the potential for harm to users' privacy and rights.

OpenAI forgot the basics of security for ChatGPT on Mac

2024-07-05
Numerama.com
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT application) was involved, and its use led to a security flaw that could have allowed unauthorized access to user conversation data, which constitutes a potential violation of user privacy and data protection rights. However, no actual harm or data breach is reported; the issue was discovered and fixed before exploitation occurred. Therefore, this event represents an AI Hazard because the AI system's use could plausibly have led to harm (privacy violation) but no incident has materialized. The update and fix are a mitigation response, but the main event is the prior vulnerability and its potential risk.

ChatGPT for Mac had a serious security problem

2024-07-04
Informaticien.be
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and its use led directly to a harm related to privacy and security of user data, which constitutes harm to individuals' rights and potentially their personal security. The vulnerability allowed unauthorized access to sensitive conversation data, which is a violation of privacy and a breach of security obligations. Since the harm occurred and was materialized before the fix, this qualifies as an AI Incident. The subsequent update is a remediation but does not change the classification of the original event.

ChatGPT: before the update, the Mac app did not encrypt conversations

2024-07-04
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT application) whose use led to a direct risk of harm to user privacy and data security, which can be considered harm to persons' rights and privacy. The lack of encryption allowed unauthorized access to sensitive conversation data, constituting a breach of data protection and potentially human rights related to privacy. Since the harm (exposure of conversations) has already occurred and the AI system's design and deployment caused this, it qualifies as an AI Incident. The update and patch are responses but do not negate the incident classification.

The ChatGPT app for Mac stored information in plain text - Next

2024-07-05
Next
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT application) was involved, and its development choices led to a security flaw exposing user data, which constitutes a violation of privacy and potentially a breach of user rights. Although no direct harm such as injury or property damage is reported, the exposure of personal information is a clear harm to users' privacy rights, fitting under violations of human rights or legal protections. Since the harm occurred and was addressed, this qualifies as an AI Incident rather than a hazard or complementary information.

Update the ChatGPT app for Mac or your information could be visible to everyone - La Opinión

2024-07-04
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
The main focus is on OpenAI’s remediation of a past vulnerability—an update that fixes the issue, adds encryption, and improves security—rather than describing a new harm or warning of future risks. This makes it complementary information about an AI system update and security response.

Global flaw in ChatGPT: millions of unprotected conversations on Apple devices

2024-07-03
infobae
Why's our monitor labelling this an incident or hazard?
An insecure storage vulnerability in an AI system that could plausibly lead to privacy breaches qualifies as an AI Hazard. There is no evidence in the article that harms materialized, only that the flaw created an opportunity for malicious actors to access users’ conversations.

Security flaw discovered in ChatGPT that exposes all user conversations

2024-07-04
20 minutos
Why's our monitor labelling this an incident or hazard?
This event involves an AI system (ChatGPT) whose malfunction (insecure storage of conversation data) could plausibly lead to user privacy breaches, but there is no indication that the flaw was actively exploited to cause harm. The focus is on a potential risk rather than a realized data breach, so it qualifies as an AI Hazard.

Do you have the ChatGPT app on your Mac? Update immediately if you don't want problems

2024-07-04
ComputerHoy.com
Why's our monitor labelling this an incident or hazard?
The news details a design flaw in an AI system that could have directly exposed private user data, constituting a near-miss scenario with potential for significant harm if exploited. Since no actual data breach occurred, it is best classified as an AI Hazard.

Major vulnerability discovered in ChatGPT for Mac

2024-07-03
Hipertextual
Why's our monitor labelling this an incident or hazard?
The issue involved the AI system’s design and use—ChatGPT for Mac—directly leading to exposure of personal conversations, a clear violation of user privacy (a human rights breach). Although OpenAI issued a patch afterward, the harm (data exposure) had already occurred, qualifying this as an AI Incident.

Vulnerability in the ChatGPT app for macOS

2024-07-04
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The article describes a flaw in the ChatGPT macOS application’s handling of user data—a situation that has not yet resulted in a confirmed breach but could plausibly lead to unauthorized exposure of private conversations. This matches the definition of an AI Hazard: a malfunction or design oversight that could plausibly lead to harm (privacy violation) if exploited. No actual data theft is reported, so it is not an Incident, and the primary focus is on the vulnerability and its remediation rather than a broader governance update, ruling out Complementary Information.

Security flaw detected in ChatGPT for Mac conversations

2024-07-04
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The event centers on a design/malfunction of an AI system (ChatGPT for Mac) that could plausibly lead to privacy violations and unauthorized data access, but there is no evidence of actual malicious exploitation affecting other users. This fits the definition of an AI Hazard—a real vulnerability creating a credible risk of harm—rather than an AI Incident, which requires realized harm or rights violations.

Mac users, update your ChatGPT app now to fix a security flaw

2024-07-04
Hardwarezone.com.sg
Why's our monitor labelling this an incident or hazard?
The ChatGPT app is an AI system as it uses AI to generate conversational outputs. The security flaw exposed users' private conversation data, which is a violation of privacy and can be considered a breach of obligations under applicable law protecting fundamental rights. Since the vulnerability has already led to exposure of sensitive data, this constitutes realized harm. Therefore, this event qualifies as an AI Incident due to the AI system's development and use leading to a breach of user privacy and security.

ChatGPT Mac app raises security concern; Here's what you can do

2024-07-04
The Financial Express
Why's our monitor labelling this an incident or hazard?
The ChatGPT macOS app is an AI system (a large language model interface) whose use has led to a potential security vulnerability that could expose sensitive user data. Although no direct harm is reported as having occurred, the lack of sandboxing and storage of conversations in plain text creates a plausible risk of harm to users' privacy and data security. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harm (privacy violations) if exploited by malicious actors. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on a specific security risk tied to the AI system's design and use.

Why Apple Mac users must update their ChatGPT desktop app right now - Times of India

2024-07-04
The Times of India
Why's our monitor labelling this an incident or hazard?
The ChatGPT desktop app is an AI system providing chatbot interaction. The reported bug caused user conversations to be stored insecurely, exposing them to unauthorized access, which constitutes a direct harm to users' privacy and data security. This aligns with violations of rights and harm to individuals. Since the harm has occurred or was plausible before the fix, and the AI system's malfunction was the cause, this qualifies as an AI Incident. The article focuses on the incident and the fix, not just a general update or policy response, so it is not merely Complementary Information.

ChatGPT Mac app sparks security worries with chats being stored in plain text

2024-07-04
MoneyControl
Why's our monitor labelling this an incident or hazard?
The ChatGPT app is an AI system (a large language model chatbot). The storing of chats in plain text on users' computers exposed sensitive user data to potential unauthorized access, constituting a direct harm to user privacy and security, which falls under violations of rights and harm to individuals. The issue was realized and reported, so this is not merely a potential hazard but an actual incident. The subsequent update encrypting chats is a mitigation step but does not change the fact that the incident occurred. Therefore, this event is classified as an AI Incident.

ChatGPT Security Issue: After User Points Out Major Security Flaw About Chatbot Storing Conversation in Plain Text, OpenAI Releases Desktop Version Update and Fixes Issue | LatestLY

2024-07-04
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a security flaw that exposed user data, posing a direct risk of harm to users' privacy and potentially their rights. Since the vulnerability was present and could have led to unauthorized data access (harm), and OpenAI has since fixed it, this qualifies as an AI Incident due to the realized security harm and the AI system's role in it.

ChatGPT for Mac was saving your conversations in plain text, but it's all good now

2024-07-04
The Indian Express
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, and its use led to a realized harm in terms of privacy and data security risks due to storing conversations in plain text accessible by others. This constitutes a violation of user privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations intended to protect fundamental rights. The harm was directly linked to the AI system's use and its data handling practices. The prompt update by OpenAI to encrypt data is a mitigation but does not negate the fact that the incident occurred. Therefore, this qualifies as an AI Incident.

ChatGPT App on macOS Faces Major Security Issue: Update Right Away - News18

2024-07-04
News18
Why's our monitor labelling this an incident or hazard?
The ChatGPT app is an AI system that processes user conversations. The security flaw allowed other apps to read sensitive chat data, which constitutes a violation of user privacy and a breach of data protection obligations. This harm has already occurred or was imminent before the fix, making it an AI Incident. The involvement of the AI system's use and its malfunction (lack of encryption) directly led to this harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI's ChatGPT Mac app was storing conversations in plain text

2024-07-03
The Verge
Why's our monitor labelling this an incident or hazard?
The ChatGPT macOS app is an AI system as it involves interaction with a large language model generating conversational outputs. The issue described is related to the use of the AI system and its data storage practices, which directly led to a potential privacy harm by exposing user conversations in plain text. This constitutes a violation of user privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations intended to protect fundamental rights. Since the harm (privacy breach risk) has already occurred due to the app's design, this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT's free macOS app had a big, worrying security hole

2024-07-04
TechRadar
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT app) was involved, and its use led to a direct risk of harm to users' privacy and potentially their security due to exposure of sensitive information. The harm here is the violation of user privacy and potential breach of data protection, which falls under violations of rights and harm to individuals. Since the vulnerability was active and could have led to actual data exposure, this constitutes an AI Incident rather than a mere hazard or complementary information. The event involves the use and malfunction (lack of proper data protection) of an AI system leading to realized or highly probable harm.

OpenAI hit by two big security issues this week

2024-07-04
engadget
Why's our monitor labelling this an incident or hazard?
The ChatGPT Mac app is an AI system as it is a client for an AI language model. The local storage of unencrypted user conversations represents a security vulnerability that could lead to harm if sensitive data is accessed by unauthorized parties. The hack of internal messaging systems also indicates a breach of security potentially affecting the AI system's development environment. However, the article does not report any realized harm such as injury, rights violations, or operational disruption caused by these vulnerabilities. The first issue was patched, and the second is a past event with ongoing controversy but no direct harm reported. Therefore, these events represent potential risks related to AI system security but do not describe an actual AI Incident. They are best classified as Complementary Information providing context on security challenges and responses related to AI systems.

What went wrong with ChatGPT macOS app, and why you should update it now

2024-07-04
The Hindu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT macOS app) whose development and use led to a security flaw exposing user chat data in plain text, thereby causing a direct risk to user privacy and data security. The harm is realized as user chats were stored unencrypted, violating privacy rights and potentially applicable data protection laws. The release of an update to encrypt chats is a remediation measure but does not negate the fact that harm occurred. Therefore, this qualifies as an AI Incident due to the direct harm to users' rights and privacy stemming from the AI system's malfunction and design choices.

OpenAI releases fix for ChatGPT Mac app exposing user conversations

2024-07-04
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT app) whose development and use led to a security flaw exposing user data, which constitutes a violation of user privacy and a potential breach of rights. Since the harm (exposure of conversations) has occurred and the AI system's role is direct, this qualifies as an AI Incident. The release of a fix is a response but does not negate the incident classification.

ChatGPT for macOS just got caught breaching Apple security rules -- how that affects you

2024-07-04
Tom's Guide
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved, and its use led to a security vulnerability that could have compromised user data privacy, which is a violation of rights under applicable law protecting fundamental rights to privacy and data protection. Since the harm (potential unauthorized data access) was plausible and could have occurred but was mitigated quickly, and no explicit report of actual data breaches or harm is stated, this event is best classified as an AI Hazard. The prompt fix and update indicate mitigation but do not change the initial plausible risk of harm. Therefore, it is not an AI Incident (no confirmed realized harm), nor is it merely Complementary Information or Unrelated.

ChatGPT users on macOS shocked to learn chats were stored unencrypted

2024-07-05
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a privacy vulnerability by storing sensitive chat data unencrypted on users' devices. This is a malfunction or misuse in the deployment of the AI system that could plausibly lead to harm (privacy violations) if the data were accessed by unauthorized parties. Since no confirmed data breach or harm has been reported, the event represents a credible risk of harm rather than realized harm. Therefore, it qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the core issue is the potential for harm due to the AI system's data handling practices, not just an update or response to a past incident.

ChatGPT Mac App Stored User Chats in Plain Text Prior to Latest Update

2024-07-04
MacRumors
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the storage of user data in an insecure manner, which could lead to privacy violations (a form of harm to users' rights). However, the article does not report any actual harm occurring, only the potential for unauthorized access prior to the update. Since the vulnerability was fixed and no direct harm is reported, this qualifies as an AI Hazard (plausible risk of harm) rather than an AI Incident. It is not merely complementary information because the main focus is on the security risk and its mitigation, not on broader ecosystem updates or responses.

ChatGPT Mac App Stored User Chats in Plain Text Prior to Latest Update

2024-07-04
MacRumors Forums
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its Mac app storing user data insecurely. The event stems from the use and development of the AI system, where a security oversight led to potential unauthorized access to sensitive user data, constituting a violation of user privacy rights. However, the article does not report any actual harm occurring from this vulnerability, only the potential for harm (privacy breach). The update released by OpenAI addresses the issue, indicating mitigation. Therefore, this event is best classified as Complementary Information, as it provides an update on a previously identified AI-related privacy risk and the response to it, rather than reporting a new AI Incident or AI Hazard.

ChatGPT for macOS raises concerns for storing chats in plain text

2024-07-03
9to5Mac
Why's our monitor labelling this an incident or hazard?
The ChatGPT app is an AI system that processes user conversations. The app's failure to sandbox and the storage of chats in plain text directly leads to a risk of unauthorized access to sensitive user data, which constitutes harm to users' privacy and potentially their rights. Since the harm is realized or ongoing (exposure of sensitive data), this qualifies as an AI Incident under violations of rights and harm to individuals. The event is not merely a potential risk but describes actual data exposure due to the app's design and storage practices.

The ChatGPT Mac app has been storing your conversations in plain text

2024-07-04
Pocket-lint
Why's our monitor labelling this an incident or hazard?
The ChatGPT Mac app is an AI system (a chatbot using AI language models). The event involves the use of this AI system and a security flaw in how it stored user data, which could lead to harm by exposing sensitive personal information (harm to individuals' privacy and potentially their rights). Although the harm is indirect and stems from poor data security practices rather than AI malfunction, the risk of unauthorized data access constitutes a realized harm. The release of a patch to encrypt data is a mitigation response. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use and data handling practices.

ChatGPT Mac security flaw raises red flags ahead of Apple Intelligence integration

2024-07-04
Macworld
Why's our monitor labelling this an incident or hazard?
The ChatGPT Mac app is an AI system as it uses AI to generate conversational outputs. The flaw involved the app failing to sandbox and encrypt user conversations, allowing unauthorized access to sensitive data, which directly harms users' privacy and security. This is a violation of rights and a breach of obligations intended to protect fundamental rights. The harm occurred due to the app's malfunction in handling data securely. Although the issue was patched, the event describes realized harm and thus qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT for Mac app logged queries in an unencrypted file before getting caught

2024-07-03
AppleInsider
Why's our monitor labelling this an incident or hazard?
The ChatGPT for Mac app is an AI system that processes user input and generates conversational outputs. The incident involves the app's use (specifically, its data storage practices) leading to a privacy and security risk, which constitutes harm to users' rights to data privacy and protection. Although no direct exploitation is reported, the exposure of sensitive user data due to unencrypted storage is a realized harm (violation of privacy rights) linked to the AI system's malfunction or misuse of data handling. Therefore, this qualifies as an AI Incident under the definition of violations of human rights or breach of obligations intended to protect fundamental rights.

OpenAI's ChatGPT app on macOS was storing all conversations in plain text

2024-07-04
Neowin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose development and use included a security flaw that could plausibly lead to harm (unauthorized access to private conversations). Although no actual breach is reported, the vulnerability represents a credible risk of violating user privacy rights, which fits the definition of an AI Hazard. OpenAI's subsequent update to encrypt data is a mitigation step but does not change the classification of the original event as a hazard. The event is not an AI Incident because no realized harm is described, nor Complementary Information, since the main focus is the security flaw and its implications rather than a response to a past incident. It is not Unrelated because it clearly involves an AI system and potential harm.

ChatGPT for Mac Fixes Privacy Concern That Stores Conversations in Plain Text

2024-07-04
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT for Mac) and a privacy concern involving storage of conversations in plain text, which could have led to harm (privacy violation). However, there is no indication that any harm actually occurred or that data was accessed maliciously. The main focus is on the discovery of the issue and the immediate fix by OpenAI, which improves user privacy and security. Therefore, this is not an AI Incident (no realized harm) nor an AI Hazard (no plausible future harm beyond the fixed issue). Instead, it is Complementary Information providing an update on a previously identified risk and the response to it.

ChatGPT for macOS Exposed User Conversations in Plain Text

2024-07-04
The Mac Observer
Why's our monitor labelling this an incident or hazard?
The ChatGPT macOS app is an AI system as it uses AI to generate conversational outputs. The security flaw involved the storage of AI-generated conversation data in an unencrypted form, which directly led to a privacy and data security harm by exposing sensitive user information. This constitutes a violation of user privacy rights, which falls under violations of human rights or breach of obligations under applicable law protecting fundamental rights. Since the harm has occurred and the AI system's use directly led to it, this qualifies as an AI Incident. The subsequent update by OpenAI is a remediation but does not change the classification of the original event.

Using ChatGPT for Mac? Update Now to Fix This Privacy Hole

2024-07-03
iDrop News
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT for Mac) is explicitly involved, and its use led to a privacy and security harm—exposure of private user conversations to other apps and potential malicious actors. This constitutes a violation of privacy rights and a breach of security obligations, which fits the definition of an AI Incident. The harm has occurred due to the app's design and storage practices, and the update is a remediation step. Therefore, this event is classified as an AI Incident.

OpenAI Encrypts ChatGPT Mac App Conversations After Security Flaw

2024-07-04
WinBuzzer
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and the event concerns a security flaw in how user data (AI-generated conversations) was stored and accessed. This flaw could lead to violations of user privacy and data protection rights, which are human rights-related harms. Although the harm is not described as having been exploited maliciously, the vulnerability posed a direct risk of harm. OpenAI's update to encrypt conversations addresses this risk. Since the vulnerability existed and posed a direct risk of harm to users' privacy, this qualifies as an AI Incident due to the realized or imminent harm linked to the AI system's use and data handling.

ChatGPT Privacy and Mac Sandbox Containers

2024-07-04
Michael Tsai
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, specifically its Mac app storing user conversation data insecurely. The issue arises from the app's use and data storage practices, which led to a privacy and security risk (a form of harm to users' data privacy). Although no direct physical harm or legal violation is explicitly stated, the exposure of private user data due to lack of sandboxing and unencrypted storage constitutes a violation of user privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations intended to protect fundamental rights. The update to encrypt stored chats is a mitigation response. Since the harm (privacy risk) has already occurred and the AI system's use and design choices contributed to it, this qualifies as an AI Incident. The article primarily focuses on the incident and its implications rather than just a response or ecosystem update, so it is not merely Complementary Information.
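The storage weakness discussed above (a chat history written as a world-readable plaintext file, outside any sandbox container) can be illustrated with a minimal, hypothetical sketch. The file names and JSON shape below are invented for illustration and do not reflect OpenAI's actual implementation; real mitigations go further than file permissions (App Sandbox entitlements, encryption at rest, Keychain-held keys):

```python
import os
import stat
import tempfile

# Hypothetical illustration only; paths and content are invented.
chat_dir = tempfile.mkdtemp()

# Insecure pattern: write chat history as plain text with default
# permissions, so any local process able to read the directory can open it.
insecure_path = os.path.join(chat_dir, "conversations.json")
with open(insecure_path, "w") as f:
    f.write('{"messages": ["hello"]}')

# Minimal hardening: create the file with owner-only permissions (0o600)
# so other local users' processes cannot read it.
secure_path = os.path.join(chat_dir, "conversations_secure.json")
fd = os.open(secure_path, os.O_WRONLY | os.O_CREAT, 0o600)
with os.fdopen(fd, "w") as f:
    f.write('{"messages": ["hello"]}')

mode = stat.S_IMODE(os.stat(secure_path).st_mode)
print(oct(mode))  # owner read/write only on POSIX systems
```

Note that owner-only permissions do not stop malware running as the same user, which is why the articles above emphasize sandboxing and encryption rather than permissions alone.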

OpenAI Fixes ChatGPT for macOS Security Flaw That Exposed User Conversations

2024-07-04
My Mobile
Why's our monitor labelling this an incident or hazard?
The ChatGPT app is an AI system that generates and stores user conversations. The security flaw allowed other apps to access these conversations in plain text, directly leading to a breach of user privacy, which is a violation of rights and harm to users. The issue was demonstrated and verified, confirming realized harm. OpenAI's update to encrypt conversations is a remediation measure. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's malfunction (insecure storage).

OpenAI's ChatGPT Mac App Stored Conversation History Outside the Sandbox

2024-07-04
Pixel Envy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's ChatGPT app) whose use (storage of conversation data) led to a direct risk of harm to users' privacy and potentially their rights to data protection. The conversations contain personal and sensitive information, and their exposure constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations under applicable law protecting fundamental rights. Although the harm is primarily privacy-related and no physical injury or property damage is described, the unauthorized access to private conversations is a significant harm. The update to encrypt conversations is a mitigation measure but does not negate the fact that the initial storage practice caused an AI Incident. Therefore, this event qualifies as an AI Incident due to realized harm from the AI system's use and data handling practices.

The ChatGPT app on Mac leaves all our conversations exposed, but it has an easy solution - Softonic

2024-07-04
Softonic
Why's our monitor labelling this an incident or hazard?
The ChatGPT app is an AI system as it is an interface to a large language model chatbot. The security flaw in storing conversations in plain text exposed sensitive user data, which constitutes a violation of privacy and could be considered a breach of obligations under applicable law protecting fundamental rights. Although no direct harm is reported as having occurred, the exposure of conversations to any application or process on the device, including malicious ones, represents a plausible risk of harm to users' privacy and data security. The update encrypting stored conversations mitigates this risk. Since the article focuses on a realized security flaw that could have led to harm and was promptly addressed, this qualifies as an AI Incident due to the direct involvement of the AI system's development and use leading to a privacy harm risk.

ChatGPT for macOS Fixes Flaw That Stored Conversations Without Encryption

2024-07-04
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and a security flaw in how it stored user data, which could have led to harm through unauthorized access. However, there is no indication that any harm actually occurred, only that the vulnerability existed and was fixed. The update encrypting conversations is a mitigation measure, making this a follow-up or complementary report rather than a new incident or hazard. The event does not describe realized harm or a credible ongoing risk but rather the resolution of a known issue, fitting the definition of Complementary Information.

OpenAI addresses ChatGPT macOS app security loophole

2024-07-03
TestingCatalog
Why's our monitor labelling this an incident or hazard?
The ChatGPT macOS app is an AI system as it involves a large language model generating and storing conversations. The security loophole allowed unauthorized access to sensitive user data, which constitutes a violation of privacy and potentially breaches data protection rights. Although no explicit harm to users is reported, the vulnerability directly exposed user data to unauthorized parties, fulfilling the criteria for an AI Incident due to realized harm (privacy breach). OpenAI's prompt remediation does not negate the fact that harm occurred or was possible before the fix. Therefore, this event qualifies as an AI Incident.

OpenAI updates ChatGPT macOS app to encrypt conversations

2024-07-03
Stack Diary
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use and storage of data generated by the AI system. The flaw allowed unauthorized access to sensitive user data, which constitutes a violation of privacy and potentially a breach of data protection rights, falling under harm to rights as defined. Since the harm (privacy risk) was realized due to the plaintext storage, this qualifies as an AI Incident. The update and mitigation are responses but do not negate the incident classification.

ChatGPT for macOS Stores All Conversations in Plain Text

2024-07-04
Cyber Security News
Why's our monitor labelling this an incident or hazard?
An AI system (the ChatGPT app) is involved, and its use has directly led to a significant harm: the potential unauthorized access to sensitive user conversations, which constitutes a violation of user privacy and security. This is a clear breach of data protection principles and can be considered harm to users' rights and privacy. Since the harm is realized or ongoing (conversations stored unprotected and accessible), this qualifies as an AI Incident rather than a hazard or complementary information. The update to encrypt conversations is a response but does not negate the incident classification.

As ChatGPT prepares for iPhone iOS integration, it just failed one of Apple's key pillars -- privacy

2024-07-04
iMore
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a security vulnerability exposing private user conversations in plain text, which is a violation of privacy rights (a human rights violation). The vulnerability was real and exploitable, thus the harm is realized, not just potential. This fits the definition of an AI Incident because the AI system's use directly led to a breach of privacy, a fundamental right. The prompt fix by OpenAI is a mitigating action but does not negate the fact that the incident occurred. Therefore, the classification is AI Incident.

Security at risk with ChatGPT for macOS: it saves conversations in unencrypted text

2024-07-04
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and its use (storage of conversations) led to a direct risk of harm to users' privacy and data security, which is a violation of rights and harm to individuals. Although no specific harm is reported as having occurred, the exposure of sensitive data due to unencrypted storage constitutes a realized AI Incident because the AI system's malfunction (lack of encryption and sandboxing) directly led to a significant privacy risk. The update to encrypt data is a response but does not negate the incident classification.

Serious security flaw in ChatGPT on Mac, now fixed

2024-07-04
Hardware Upgrade - Il sito italiano sulla tecnologia
Why's our monitor labelling this an incident or hazard?
The ChatGPT application is an AI system. The vulnerability allowed unauthorized access to sensitive user conversations, which is a direct harm to user privacy and data security, falling under harm to persons and violation of rights. The event describes realized harm (unauthorized access was possible) and the AI system's deployment and use directly led to this harm. The subsequent fix is a response but does not change the classification of the original event as an AI Incident.

ChatGPT for Mac: security flaw puts conversations at risk

2024-07-04
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The ChatGPT app is an AI system as it uses a large language model to generate conversational outputs. The security flaw allowed unauthorized access to private chat data, which constitutes a violation of user privacy rights and harm to individuals' data security. This harm is directly linked to the AI system's malfunction (lack of proper encryption in the app). Although the issue was fixed, the event describes a realized harm scenario, not just a potential one. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The mention of OpenAI's broader privacy challenges in the EU is contextual and does not change the classification of this specific event.

ChatGPT for Mac: security flaw exposed private conversations

2024-07-04
HTML.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a direct harm: exposure of private user conversations due to a security flaw. This constitutes a violation of privacy rights and harm to individuals' personal data, fitting the definition of an AI Incident. The harm has already occurred as private data was accessible, and the AI system's design and deployment were directly involved. The prompt remediation by OpenAI is noted but does not negate the incident classification.

ChatGPT app for Mac stored chats in the clear; flaw now fixed

2024-07-04
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) and a security flaw in how it stored user data locally. The flaw allowed unauthorized access to sensitive user conversations, constituting a violation of privacy rights, a recognized harm under the AI Incident definition. The harm has occurred as the vulnerability was demonstrably exploited by a researcher to access chat data. The prompt remediation by OpenAI is noted but does not negate the fact that harm was realized. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Whose jobs is AI taking? One person used ChatGPT to replace 60 employees

2024-07-02
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the deployment and use of ChatGPT, an AI system, to replace human writers and editors, leading to the direct loss of jobs and degradation of work quality and satisfaction. The harm is realized and significant, affecting a large group of people economically and socially. This fits the definition of an AI Incident because the AI system's use directly caused harm to people through job displacement and the erosion of meaningful work. The event is not merely a potential risk or a complementary update but a concrete case of AI-driven harm.

Mac version of ChatGPT app found to log queries in unencrypted files

2024-07-04
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose development and use led to a security vulnerability that exposed private user data in unencrypted form. This exposure could lead to harm to users' privacy and potentially violate their rights. Although no exploitation is reported, the vulnerability existed and posed a direct risk of harm. The subsequent patch addresses the issue, but the incident itself qualifies as an AI Incident due to the realized risk and breach of data protection norms caused by the AI application's design and deployment choices.

macOS ChatGPT app accused of storing AI conversations in plain text; OpenAI issues emergency fix

2024-07-03
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use led to a privacy and security vulnerability, exposing user conversations to unauthorized access. This constitutes a violation of user privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations intended to protect fundamental rights. Since the harm (privacy breach risk) has occurred and OpenAI has taken remedial action, this qualifies as an AI Incident rather than a hazard or complementary information.

Major security problems for OpenAI: What you need to know if you use ChatGPT

2024-07-08
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The ChatGPT application is an AI system as it generates conversational outputs based on user input. The security flaws described directly relate to the use and deployment of this AI system, leading to realized or ongoing harm in terms of data privacy and security breaches. The unencrypted storage of conversations exposes sensitive user data to potential unauthorized access, which is a direct harm. The internal hack and subsequent data exposure also represent realized harm linked to the AI system's operational environment. These harms fall under violations of rights and harm to communities. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.