Critical Security Flaws in ChatGPT Plugins Exposed User Data and Accounts to Attackers

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers discovered severe vulnerabilities in ChatGPT plugins (now called GPTs) that allowed attackers to access private user data, including GitHub repositories and third-party accounts, via zero-click exploits. These flaws risked data theft and account takeovers before being reported and remediated by OpenAI and plugin developers.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes security flaws in an AI system (ChatGPT plugins) that have directly led to realized harms or credible risks of harm, such as unauthorized account takeover and data access. These harms fall under violations of user rights and harm to property. The involvement of AI systems is explicit, and the harms are direct or imminent. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, since the vulnerabilities have been demonstrated and pose real threats. The article also mentions mitigation steps by OpenAI, but the primary focus is on the harm and risk from the AI system's use and malfunction.[AI generated]
AI principles
Accountability, Privacy & data governance, Robustness & digital security, Safety, Respect of human rights

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Interaction support/chatbots


Articles about this incident or hazard

ChatGPT plugin flaws could have allowed hackers to take over other accounts

2024-03-14
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT plugins) that interact with third-party accounts and perform tasks such as code commits and data retrieval. The security flaws in these AI-enabled plugins could have directly led to harm by enabling unauthorized access to user accounts and sensitive data, which constitutes harm to property and privacy. Although no actual harm was reported, the vulnerabilities represent a credible risk of harm if exploited. Since the flaws have been fixed and no exploitation was observed, the event is best classified as an AI Hazard, reflecting the plausible risk of harm due to AI system vulnerabilities.

Researchers warn devs of vulnerabilities in ChatGPT plugins

2024-03-13
TechTarget
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly—ChatGPT plugins and custom GPTs that connect AI language models to third-party applications. The vulnerabilities relate to OAuth authentication and redirect manipulation, which could allow attackers to misuse these AI systems to access sensitive data and user accounts. Although no actual harm has been reported, the potential for such harm is credible and significant, including violations of privacy and unauthorized access to critical accounts like GitHub. Since the harm is plausible but not yet realized, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the warning and potential risks rather than an actual incident of harm, so it is not Complementary Information or Unrelated.

ChatGPT plugins are handy, but they could give bad actors unbridled access to your accounts

2024-03-16
BGR
Why's our monitor labelling this an incident or hazard?
The event describes security flaws in an AI system (ChatGPT plugins) that have directly led to realized harms or credible risks of harm, such as unauthorized account takeover and data access. These harms fall under violations of user rights and harm to property. The involvement of AI systems is explicit, and the harms are direct or imminent. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, since the vulnerabilities have been demonstrated and pose real threats. The article also mentions mitigation steps by OpenAI, but the primary focus is on the harm and risk from the AI system's use and malfunction.

Flaws in ChatGPT extensions allowed access to sensitive data

2024-03-13
BetaNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT and its plugins) and describes security flaws that could plausibly lead to harms including unauthorized access to sensitive data and account takeovers. Since no exploitation or harm has been reported yet, but the risk is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident. The disclosure and remediation efforts are noted but do not change the classification since the potential for harm remains inherent in the vulnerabilities discovered.

Critical ChatGPT Plugin Vulnerabilities Expose Sensitive Data

2024-03-13
Dark Reading
Why's our monitor labelling this an incident or hazard?
The vulnerabilities in ChatGPT plugins represent a malfunction or security flaw in the use of AI systems that could directly lead to harm, including unauthorized access to sensitive data and account takeovers. These harms fall under violations of rights and harm to property (data). Although no exploitation was confirmed, the existence of the vulnerabilities and their potential for exploitation constitute an AI Incident, because the AI system's use directly created a significant security risk and potential harm. The event is not merely a future risk (hazard) or complementary information; it reports concrete vulnerabilities that were found and fixed, indicating a realized security risk associated with AI system use.

Salt Security Uncovers Security Flaws in ChatGPT Extensions, Remediated Promptly

2024-03-13
AiThority
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT plugins and GPTs) that extend AI chatbot capabilities to interact with third-party services. The vulnerabilities discovered could have allowed attackers to take over user accounts and access sensitive data, which constitutes a plausible risk of harm to property and privacy. Although no actual harm or exploitation has been reported, the potential for harm is credible and significant. Therefore, this event is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the newly discovered vulnerabilities and their implications, not on updates or responses to previously known incidents. It is not Unrelated because the event clearly involves AI systems and security risks related to them.

Salt Security identifies critical flaws in ChatGPT plugins that risk third-party data breaches

2024-03-13
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT and its plugins) whose vulnerabilities could directly lead to harm in the form of unauthorized access to sensitive data and account takeovers, which constitute violations of privacy and potentially human rights or harm to property. Although the vulnerabilities were fixed before widespread exploitation, the report details actual security flaws that could have caused harm. Therefore, this qualifies as an AI Incident because the AI system's use and its plugin architecture directly led to security flaws that risked harm to users and organizations.

Your ChatGPT account might not be safe if you're using plugins

2024-03-14
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT plugins and GPTs) and details how their development and use created direct security vulnerabilities that could harm users' data privacy and security. The described vulnerabilities have reportedly led to some accounts being hacked, or could plausibly do so, constituting harm to individuals' property and privacy rights. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly or indirectly led to harm.

Security Flaws within ChatGPT Ecosystem Allowed Access to Accounts On Third-Party Websites and Sensitive Data

2024-03-14
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT and its generative AI ecosystem) and details how security flaws in its use and integration with third-party services led to vulnerabilities that could be exploited to gain unauthorized access to user accounts and sensitive data. This constitutes direct harm under the definitions provided, specifically violations of rights and harm to property. Although no confirmed data compromise occurred, the vulnerabilities were critical and could have led to account takeovers and data breaches. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's malfunction (security flaws) and the potential harm to users' digital property and rights.

Researchers Find Flaws in OpenAI ChatGPT, Google Gemini

2024-03-14
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (ChatGPT and Gemini) and their vulnerabilities that could lead to unauthorized access to sensitive data and misinformation generation. These vulnerabilities have been identified and remediated, but the potential for harm was real and significant. The harms include privacy violations and misinformation that could affect communities, especially in the context of elections. Since the vulnerabilities have been found and could have been exploited, this constitutes an AI Incident due to the direct link between AI system flaws and potential harm. The article does not merely discuss potential future risks but reports on actual security flaws that could have led to harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Research uncovers vulnerabilities in ChatGPT plugins

2024-03-14
Security Magazine
Why's our monitor labelling this an incident or hazard?
The vulnerabilities involve the development and use of AI systems (ChatGPT plugins), and their exploitation could compromise users' accounts, a form of harm to individuals. Since the vulnerabilities were exploited or could have been exploited to cause harm, this qualifies as an AI Incident. The fact that the issues have been addressed does not negate the incident classification, as the harm or risk was realized or imminent.

Warning After Huge Security Flaws Found in ChatGPT Plugins

2024-03-14
Tech.co
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions severe security flaws in AI-powered ChatGPT plugins that were exploited by cybercriminals to access third-party accounts and steal user data. This is a direct harm to users' data privacy and security, caused by the use and malfunction of an AI system (ChatGPT plugins). Although the issues have been resolved, the harm occurred, qualifying this as an AI Incident.

ChatGPT plugins contained 'critical security flaws', research reveals

2024-03-13
CTECH
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems (ChatGPT plugins) is explicit, and the security flaws represent a malfunction that could lead to harm (unauthorized access to sensitive data). Since the flaws have been discovered and remediated, and the article reports on the research findings rather than an active or ongoing incident causing harm, this qualifies as Complementary Information. It provides important context about AI system vulnerabilities and responses but does not describe a current AI Incident or an AI Hazard with plausible future harm beyond what has been addressed.

Critical ChatGPT Plugins Flaw Let Attackers Gain Control Over Organizations' Accounts

2024-03-15
GBHackers On Security
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems (ChatGPT plugins) have security flaws that attackers exploit to gain unauthorized access to user accounts and data. This involves the use and malfunction of AI systems leading directly to harm, including account takeovers and data breaches. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and vulnerabilities have directly led to violations of rights and harm to property (data).

New Research Exposes Security Risks in ChatGPT Plugins

2024-03-13
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT plugins) and details security flaws that could plausibly lead to significant harm, including unauthorized access to sensitive data and account takeovers. Although no actual harm or incidents of exploitation are reported, the vulnerabilities present a credible risk of AI-related harm. The coordinated disclosure and remediation efforts indicate the event is about managing a potential threat rather than reporting a realized incident. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT 0-click plugin exploit risked leak of private GitHub repos

2024-03-13
SC Media
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT plugins and custom GPTs) that use OAuth to access third-party user data. The vulnerabilities directly led to unauthorized access risks to private user data, including private GitHub repositories, which is a violation of intellectual property rights and privacy (harm category c). The exploit is described as a zero-click attack, meaning harm could occur without user interaction, indicating a direct link between the AI system's malfunction and realized harm. Although the vulnerabilities have been fixed, the incident itself involved actual risk and potential data breaches, qualifying it as an AI Incident rather than a hazard or complementary information.

ChatGPT Plugin Vulnerabilities Exposed by Security Researchers

2024-03-14
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly details multiple security flaws in AI systems (ChatGPT plugins and GPTs) that have been exploited to steal sensitive user data and take over accounts, constituting direct harm to users' privacy and security. These harms fall under violations of user rights and harm to individuals. The AI system's malfunction or flawed design is a direct cause of these harms. Therefore, this qualifies as an AI Incident.

Salt Security Uncovers Security Flaws within ChatGPT Extensions that Allowed Access to Third-Party Websites and Sensitive Data - Issues Have Been Remediated

2024-03-13
Global Security Mag Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and its plugins) whose vulnerabilities could have directly led to significant harms including account takeovers and unauthorized access to sensitive data, which are violations of rights and harm to property. Although no actual harm was reported, the vulnerabilities represent a credible risk of harm that was discovered and remediated before exploitation. Therefore, this qualifies as an AI Hazard because it plausibly could have led to an AI Incident but was prevented through remediation. The article focuses on the security flaws and their remediation rather than reporting realized harm, so it is not an AI Incident. It is more than complementary information because it reports new vulnerabilities with potential for harm rather than just updates or responses to past incidents.