OpenAI Fined for ChatGPT Data Breach Exposing South Korean Users' Personal Information

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Korea's privacy regulator fined OpenAI 3.6 million won (about US$2,829) after a bug in ChatGPT exposed the personal and payment information of 687 South Korean users. The breach, traced to a caching issue, was not promptly reported to authorities, and the regulator issued recommendations for improved safeguards alongside the fine.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes a malfunction in the AI system ChatGPT that directly led to the exposure of personal information of 687 users, which is a clear harm under the category of violations of human rights and data protection laws. The involvement of the AI system is explicit, and the harm has materialized. The fine imposed and recommendations for compliance further confirm the incident's nature. Hence, this qualifies as an AI Incident.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security
Accountability
Transparency & explainability
Respect of human rights

Industries
Consumer services
Digital security
IT infrastructure and hosting
Government, security, and defence

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Economic/Property
Reputational
Psychological

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots
Content generation


Articles about this incident or hazard

ChatGPT fined 3.6 mln won for exposing personal info of 687 S. Korean users | Yonhap News Agency

2023-07-27
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The event describes a malfunction in the AI system ChatGPT that directly led to the exposure of personal information of 687 users, which is a clear harm under the category of violations of human rights and data protection laws. The involvement of the AI system is explicit, and the harm has materialized. The fine imposed and recommendations for compliance further confirm the incident's nature. Hence, this qualifies as an AI Incident.

South Korea fines OpenAI's chatbot ChatGPT of 3.6 million won, Facebook's Meta Platform of 7.4 billion won; here's why

2023-07-28
mint
Why's our monitor labelling this an incident or hazard?
The event describes a data breach caused by a bug in an AI system (ChatGPT) that led to the exposure of sensitive user information, and unauthorized data collection by Meta's platform, both resulting in regulatory fines. These breaches represent violations of personal data protection laws, which fall under violations of human rights and legal obligations protecting fundamental rights. Since the AI systems' development or use directly led to harm in terms of privacy violations and legal breaches, this qualifies as an AI Incident under the framework.

Korea: South Korea has fined ChatGPT maker OpenAI 3.6 million won, here's why

2023-07-28
Gadget Now
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose malfunction (a bug causing exposure of user data) directly led to harm in the form of personal data exposure, which is a violation of privacy rights and personal information protection laws. This constitutes an AI Incident because the AI system's malfunction caused realized harm to individuals' privacy and legal obligations were breached. The fine and regulatory response are consequences of this incident, but the core event is the data exposure caused by the AI system's bug.

ChatGPT fined $2,829 for exposing personal info of 687 Korean users

2023-07-27
The Korea Times
Why's our monitor labelling this an incident or hazard?
ChatGPT is a generative AI system. The incident stems from a malfunction (a bug in an open-source library used by ChatGPT) that caused personal data exposure, which is a violation of privacy rights and personal information protection laws. The harm (exposure of sensitive personal data) has already occurred, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The fine and recommendations are responses to this incident, but the primary event is the data breach caused by the AI system's malfunction.

South Korea: PIPC imposes fine of 3.6 million won on ChatGPT for exposing personal info of 687 citizens

2023-07-27
Telangana Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose malfunction (a bug causing caching issues) directly led to the exposure of personal information, which is a violation of privacy rights and data protection laws. This constitutes harm under the category of violations of human rights and legal obligations protecting personal data. The fine and regulatory actions confirm the harm has materialized. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT: OpenAI Fined 3.6 Million Won for Exposing Personal Information of 687 South Koreans | LatestLY

2023-07-27
LatestLY
Why's our monitor labelling this an incident or hazard?
The event describes a malfunction (a bug causing caching issues) in an AI system (ChatGPT, a generative AI chatbot) that led to the exposure of personal information, constituting a breach of privacy rights. This is a direct harm to individuals' rights under applicable law, fulfilling the criteria for an AI Incident. The fine and regulatory response further confirm the recognition of harm. Therefore, this is classified as an AI Incident.

ChatGPT fined 3.6 mn won for exposing personal info of 687 S. Koreans

2023-07-27
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The incident is directly linked to a malfunction in an AI system (ChatGPT) that caused personal information of users to be exposed. This exposure harms individuals' privacy rights, a form of human rights violation. The involvement of an AI system is explicit, and the harm has materialized, meeting the criteria for an AI Incident. The fine and regulatory response further confirm the recognition of harm caused by the AI system's malfunction.

South Korea fines OpenAI 3.6 million won for data leakage of 687 Korean ChatGPT users

2023-07-28
MediaNama
Why's our monitor labelling this an incident or hazard?
The incident directly involves an AI system (ChatGPT) whose malfunction (a bug in the caching solution) caused a data breach exposing sensitive personal information. This breach constitutes a violation of privacy rights protected under law, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's malfunction is the direct cause. The regulatory fine and recommendations further confirm the recognition of harm and responsibility.

ChatGPT fined US$2,829 for exposing 687 South Korean users' personal info

2023-07-27
Daily Express Sabah
Why's our monitor labelling this an incident or hazard?
The incident directly involves an AI system (ChatGPT) whose malfunction (a caching bug) caused the exposure of sensitive personal information, constituting a violation of privacy rights under applicable law. This meets the criteria for an AI Incident as the AI system's malfunction directly led to harm in the form of a data breach and legal violations. The fine imposed and regulatory response further confirm the recognition of harm caused by the AI system's failure.

ChatGPT in trouble? South Korea fines chatbot Rs. 2.31 lakh for exposing people's private data

2023-07-28
telecomlive.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, whose malfunction (outage and vulnerability) led to the exposure of personal data of 687 users, constituting a violation of privacy rights and personal data protection laws. This meets the criteria for an AI Incident as the AI system's malfunction directly led to harm (privacy breach). The class action lawsuit further supports the presence of harm related to AI training practices violating legal rights. Therefore, the event is classified as an AI Incident.

(LEAD) ChatGPT fined 3.6 mln won for exposing personal info of 687 S. Korean users | Yonhap News Agency

2023-07-27
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The event describes a data breach caused by a bug in ChatGPT, a generative AI system, which led to the exposure of personal information of 687 users. This is a direct harm related to the AI system's malfunction (a caching issue in an open-source library used by ChatGPT). The harm involves violation of personal data protection laws and users' privacy rights, fitting the definition of an AI Incident under violations of human rights or breach of legal obligations. The fine imposed and regulatory response further confirm the recognition of harm. Hence, the classification is AI Incident.