Europol Warns of Criminal Exploitation of ChatGPT and LLMs

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Europol has warned that criminals are exploiting, or could exploit, ChatGPT and similar large language models to commit cybercrimes such as fraud, phishing, malware creation, and disinformation. The agency notes that safeguards are easily bypassed, enabling even non-experts to use AI for malicious purposes and increasing risks to public safety.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article discusses the potential misuse of AI systems (large language models like ChatGPT) by criminals to perpetrate fraud, spread disinformation, and conduct cybercrime. While no specific harm or incident is reported as having already occurred, the warning clearly indicates plausible future harms stemming from the use of these AI systems in malicious ways. Therefore, this constitutes an AI Hazard, as the development and use of these AI systems could plausibly lead to incidents involving harm to communities (disinformation), harm to property or individuals (fraud), and cybercrime.[AI generated]
AI principles
Robustness & digital security, Safety, Accountability, Democracy & human autonomy

Industries
Digital security

Affected stakeholders
Consumers, Business, General public

Harm types
Economic/Property, Public interest

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

Europol warns ChatGPT in the wrong hands can worsen crime

2023-03-28
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The article discusses the potential misuse of AI systems (large language models like ChatGPT) by criminals to perpetrate fraud, spread disinformation, and conduct cybercrime. While no specific harm or incident is reported as having already occurred, the warning clearly indicates plausible future harms stemming from the use of these AI systems in malicious ways. Therefore, this constitutes an AI Hazard, as the development and use of these AI systems could plausibly lead to incidents involving harm to communities (disinformation), harm to property or individuals (fraud), and cybercrime.

Cybercrime, fraud using ChatGPT on the rise, says Europol

2023-03-28
SC Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in facilitating cybercrime and fraud, which are harms to individuals and communities. The malicious use of ChatGPT to generate phishing content, disinformation, and malicious code directly leads to harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in criminal activities.

Europol warns of ChatGPT's potential criminal applications

2023-03-28
TechSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how ChatGPT, a Large Language Model AI system, is being used by criminals to commit phishing, fraud, disinformation, and cybercrime, which are harms to communities and violations of law. These harms are occurring, not just potential, making this an AI Incident. The involvement of the AI system is direct, as its capabilities enable these criminal activities. The article also references prior incidents and ongoing misuse, confirming realized harm rather than just potential risk.

Europol sounds alarm about criminal use of ChatGPT, sees grim outlook

2023-03-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies ChatGPT, an AI system, and discusses its potential exploitation by criminals to cause harm. Although no concrete incidents of harm are reported, the credible and plausible risks of phishing, disinformation, and cybercrime stemming from the AI's capabilities constitute a potential for harm. Therefore, this qualifies as an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving harm to individuals and communities.

Europol sounds alarm about criminal use of ChatGPT, sees grim outlook - ET Telecom

2023-03-28
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies ChatGPT, an AI system (a large language model), as being actively exploited by criminals to cause harm through phishing scams, disinformation campaigns, and cybercrime. These harms have already materialized or are ongoing, fulfilling the criteria for an AI Incident. The harms include violations of rights and harm to communities, with the AI system playing a pivotal role in enabling these criminal activities.

Europol Warns 'Grim' Criminal Abuse of ChatGPT is On The Cards

2023-03-27
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses ChatGPT, an AI system, and its potential exploitation by criminals to commit cybercrimes. Although no actual incidents of harm are described, the credible and plausible risks of harm such as fraud, disinformation, and cyberattacks are clearly articulated. Therefore, this event fits the definition of an AI Hazard, as it concerns circumstances where AI use could plausibly lead to an AI Incident in the future. It is not an AI Incident because no realized harm is reported, nor is it Complementary Information or Unrelated since the focus is on potential harm from AI misuse.

'Grim' criminal abuse of ChatGPT is coming: Europol

2023-03-27
The Daily Star
Why's our monitor labelling this an incident or hazard?
The article describes a credible risk scenario where AI systems, specifically ChatGPT, could be exploited by criminals to facilitate harmful activities. Although no direct incidents of harm are reported, the potential for such misuse is clearly articulated and plausible, fitting the definition of an AI Hazard. The involvement of the AI system is in its use (or misuse) by criminals, and the harms described include fraud, cybercrime, and disinformation, which could disrupt communities and violate rights. Since the harms are potential and not yet realized, this event is best classified as an AI Hazard.

Europol Sets Out 'Grim' Prospects For Law Enforcement In The Era Of ChatGPT

2023-03-28
Forbes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (LLMs like ChatGPT) whose use has directly led to harms including fraud, social engineering, cybercrime, and disinformation, all of which impact communities and individuals. The report documents ongoing misuse and harm, not just potential risks, fulfilling the criteria for an AI Incident. The harms are clearly articulated and linked to the AI system's outputs (realistic text generation enabling phishing and malicious code). Therefore, this is an AI Incident rather than a hazard or complementary information.

Europol sounds alarm about criminal use of ChatGPT, sees grim outlook

2023-03-27
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in criminal activities that have already occurred or are actively occurring, such as phishing and disinformation campaigns. These activities cause harm to people and communities, fulfilling the criteria for an AI Incident. The report details actual misuse and resulting harms rather than potential or hypothetical risks, so it is not merely a hazard or complementary information.

Criminal Exploitation Of ChatGPT Is Coming, Europol Warns

2023-03-27
NDTV
Why's our monitor labelling this an incident or hazard?
The article describes how the use of ChatGPT by criminals could plausibly lead to harms including fraud, cybercrime, and disinformation campaigns. Although no actual incidents of harm are detailed, the warning about potential exploitation and the risks associated with AI-generated content and code constitute a credible risk of future harm. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Europol Warns of 'Grim Outlook' Regarding ChatGPT

2023-03-27
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the AI system ChatGPT and its capabilities that could be exploited by criminals for harmful purposes. Europol's warning about the potential for phishing, impersonation, disinformation, and malicious code generation indicates a credible risk of harm that could plausibly lead to AI Incidents. Since the harms are potential and not reported as realized, this event fits the definition of an AI Hazard rather than an AI Incident. The article also includes broader context about AI development and calls for regulation, but the main focus is on the plausible future harms from misuse of ChatGPT.

Europol warning as criminals commandeer AI chatbots | Digital Trends

2023-03-31
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI chatbots are being used by criminals to create convincing phishing emails, disinformation, and malicious code, which are causing or facilitating harm to individuals and communities. This constitutes direct involvement of AI systems in causing harm through their use by criminals. Therefore, this event qualifies as an AI Incident because the AI systems' use has directly led to harms such as fraud, social engineering, and disinformation dissemination.

'Grim' Criminal Abuse of ChatGPT is Coming, Europol Warns

2023-03-28
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how the AI system ChatGPT is being used by criminals to commit various harms including fraud, phishing attacks, disinformation campaigns, and cybercrime. These activities constitute realized harms to individuals and communities, such as data theft and misinformation dissemination. The AI system's use in these criminal activities directly leads to violations of rights and harm to communities, fitting the definition of an AI Incident. The article also notes that safeguards can be circumvented, indicating ongoing misuse rather than just potential future harm.

Europol warns ChatGPT is being used to commit crime

2023-03-29
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs are being exploited by criminals to commit harmful acts, including cybercrime and terrorism, which are harms to persons and communities. The AI system's role is pivotal as it provides specific, actionable information that facilitates these crimes. The harms are realized and ongoing, not merely potential, thus qualifying this as an AI Incident rather than a hazard or complementary information.

'Grim' criminal abuse of ChatGPT is coming, Europol warns

2023-03-27
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The article describes the potential misuse of an AI system (ChatGPT) by criminals to commit various harms such as fraud, phishing, disinformation, and cybercrime. Although no specific incident of harm is reported, the credible risk of such harms occurring due to the AI's capabilities is clearly articulated. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving harm to people and communities.

Europol Warns About Exploitation Of AI Systems Including ChatGPT

2023-03-28
BW Businessworld
Why's our monitor labelling this an incident or hazard?
The article describes a credible risk scenario where AI systems, specifically large language models like ChatGPT, could be exploited by criminals to commit various crimes. This constitutes a plausible future harm stemming from the use or misuse of AI systems, fitting the definition of an AI Hazard. There is no indication that actual harm has already occurred or that a specific incident has taken place, so it does not qualify as an AI Incident. The report serves as a warning and overview of potential threats, not a description of realized harm or a governance response, so it is not Complementary Information either.

Europol sounds alarm about criminal use of ChatGPT, sees grim outlook

2023-03-27
Financial Post
Why's our monitor labelling this an incident or hazard?
The article describes how the AI system ChatGPT is being exploited by criminals to produce phishing texts, impersonate individuals, spread disinformation, and create malicious code. While no specific harm is reported as having occurred yet, the described criminal uses present credible risks of harm to individuals and communities. Therefore, this constitutes an AI Hazard, as the misuse of the AI system could plausibly lead to AI Incidents involving harm.

Europol sends out ChatGPT warning on phishing, misinformation and cybercrime

2023-03-28
MaltaToday.com.mt
Why's our monitor labelling this an incident or hazard?
Europol's report highlights the potential for AI systems like ChatGPT to be abused by criminals, which could plausibly lead to harms such as fraud, misinformation, and cybercrime. However, the article does not describe any realized harm or specific incidents caused by AI misuse. Instead, it serves as a cautionary advisory and a call for dialogue and safeguards. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm stemming from the use or misuse of an AI system.

Europol warns ChatGPT is already helping criminals

2023-03-28
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose use by criminals has directly led to harms including fraud, cybercrime, terrorism facilitation, and the production of malicious code. Europol's report documents concrete examples of misuse and resulting criminal activities, fulfilling the criteria for an AI Incident. The harms are realized, not merely potential, and the AI system's role is pivotal in enabling these crimes. Therefore, this is classified as an AI Incident.

Europol study identifies cyber crime, scams and disinformation as key ChatGPT risks

2023-03-27
TheJournal.ie
Why's our monitor labelling this an incident or hazard?
The article discusses the potential misuse of an AI system (ChatGPT) for criminal activities, which could plausibly lead to harms such as fraud, disinformation, and cybercrime. However, it does not report any actual incident where harm has occurred. The focus is on raising awareness and promoting preventive measures, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Europol warns cops to prep for malicious AI abuse | Computer Weekly

2023-03-28
Computer Weekly
Why's our monitor labelling this an incident or hazard?
The article discusses the potential for LLMs to be abused by criminals to generate sophisticated phishing scams that could cause harm to individuals and organizations. Although no actual incident of harm is reported, the credible risk of such misuse and its implications for law enforcement preparedness clearly indicate a plausible future harm scenario. The involvement of AI systems (LLMs) is explicit, and the focus is on the potential negative impacts and recommendations to mitigate these risks. Therefore, this event is best classified as an AI Hazard rather than an Incident or Complementary Information.

Europol warns about criminal abuse of ChatGPT

2023-03-27
TRT World
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses ChatGPT, an AI system, and how its capabilities could be exploited by criminals to commit cybercrimes. The harms described (fraud, phishing, disinformation, malware) are serious and clearly articulated, but the article focuses on potential misuse and risks rather than a realized incident. Therefore, this constitutes an AI Hazard, as the development and use of ChatGPT could plausibly lead to AI Incidents involving harm to individuals and communities. There is no description of an actual incident or harm having occurred yet, so it is not an AI Incident. It is more than general AI news or complementary information because it focuses on the credible risk of harm from AI misuse.

Europol is worried about the potential misuse of ChatGPT

2023-03-28
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article discusses the potential misuse of ChatGPT by criminals for cybercrime, disinformation, and phishing, which could plausibly lead to harms such as manipulation of public opinion, cyberattacks, and scams. Since these harms have not been explicitly reported as realized incidents but are credible risks based on the AI system's capabilities, this qualifies as an AI Hazard rather than an AI Incident. The involvement of an AI system (ChatGPT) is explicit, and the concerns relate to its use and misuse, fitting the definition of an AI Hazard.

Europol sounds alarm about criminal use of ChatGPT

2023-03-28
iTnews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT, a large language model) whose capabilities could plausibly lead to harms such as phishing scams, disinformation campaigns, and malicious code generation. Europol's report is a warning about potential misuse rather than a report of realized harm. Therefore, this constitutes an AI Hazard because it identifies credible risks of future AI-related harms stemming from the use or misuse of ChatGPT, but does not describe an actual incident where harm has occurred.

Europol Warns Of Potential ChatGPT Criminal Uses

2023-03-27
Eurasia Review
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the potential for criminal abuse of ChatGPT and similar LLMs, which are AI systems, but does not report any actual harm or incident caused by such misuse. The event is about the plausible future risk of AI-enabled crime, making it an AI Hazard. There is no indication of realized harm or ongoing incident, nor is the article primarily about responses or updates to past incidents, so it is not an AI Incident or Complementary Information.

Europol: 'Dark LLMs' may become a key criminal business model

2023-03-28
Computing
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (ChatGPT and similar LLMs) that have directly led to criminal activities such as phishing attacks, disinformation campaigns, and facilitation of serious crimes like terrorism and child exploitation. Europol confirms that criminals are already exploiting these AI systems, which constitutes realized harm to communities and violations of law. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through criminal misuse.

Europol Warns on the Criminal Usage of ChatGPT and Its Implications for Law Enforcement

2023-03-28
CircleID
Why's our monitor labelling this an incident or hazard?
The article discusses the identification of criminal use cases involving ChatGPT and the potential for harm, but it does not describe a specific incident where harm has already occurred due to the AI system. Instead, it emphasizes the plausible future risks and the need for preparedness and safeguards. Therefore, it fits the definition of an AI Hazard, as it concerns events and circumstances where AI use could plausibly lead to harm, rather than an AI Incident or Complementary Information.

The Dark Side of ChatGPT and Other Large Language Models - HS Today

2023-03-28
HSToday
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs like ChatGPT) and their potential misuse, which could plausibly lead to harms such as disinformation campaigns. However, it does not describe any actual harm or incident that has occurred. The focus is on raising awareness and exploring possible future misuse, which fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential risks.

ChatGPT in the wrong hands can contribute to crime; Understood!

2023-03-31
sivtelegram.media
Why's our monitor labelling this an incident or hazard?
The article centers on the potential for AI chatbots like ChatGPT to be misused for criminal purposes, which constitutes a credible risk of harm, but it does not report any actual incident where harm has occurred. The involvement of AI systems is explicit, and the harms described (fraud, cybercrime, disinformation) are serious. Since no specific harm has yet materialized according to the article, and the focus is on the potential for misuse and ongoing mitigation efforts, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

EU-crime-internet-AI-Europol

2023-03-27
nampa.org
Why's our monitor labelling this an incident or hazard?
The article describes a warning from Europol about the potential misuse of AI systems such as ChatGPT by criminals to commit various cybercrimes. While no specific harm has yet occurred, the warning highlights a plausible future risk of AI-enabled criminal activities causing harm to individuals and communities. This fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to incidents involving fraud, disinformation, and malware-related harms.

Criminals ready to abuse ChatGPT, warns Europol - The Bobr Times

2023-03-27
bobrtimes.com
Why's our monitor labelling this an incident or hazard?
The article describes the potential misuse of an AI system (ChatGPT) by criminals to commit fraud, phishing, cybercrime, and disinformation. Although no actual incident of harm is reported, the warning from Europol highlights a credible risk that the AI system's capabilities could be exploited to cause harm to individuals and communities. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harms such as fraud, misinformation, and cybercrime.

Europol warns that a criminal breach of ChatGPT is coming

2023-03-28
Valley Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and discusses its potential malicious use by criminals, which could plausibly lead to harms such as fraud, phishing attacks, disinformation campaigns, and cyber intrusions. Although no actual harm is reported as having occurred yet, the credible risks and warnings about exploitation constitute an AI Hazard under the framework, as the development and use of the AI system could plausibly lead to incidents causing harm.

OpenAI's ChatGPT safeguards 'trivial to bypass' for criminals, Europol says

2023-03-28
Tech Monitor
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT, a large language model) whose safeguards to prevent malicious outputs are circumvented, enabling criminals to generate malware and phishing content. This misuse has directly led to harms such as increased cybercrime risks, social engineering attacks, and dissemination of disinformation, which harm communities and violate legal protections. The report documents ongoing criminal exploitation, confirming realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident.

'Grim' criminal abuse of ChatGPT is coming, Europol warns

2023-03-28
HT Tech
Why's our monitor labelling this an incident or hazard?
The article describes the potential misuse of an AI system (ChatGPT) by criminals to commit fraud, phishing, disinformation, and other cybercrimes. Although no actual incidents of harm are reported, the warning from Europol about the plausible future exploitation of AI for criminal purposes fits the definition of an AI Hazard. The AI system's development and use could plausibly lead to harms such as violations of rights, harm to communities, and cybercrime-related damages. Therefore, this event is best classified as an AI Hazard.

ChatGPT will help more crimes to be committed, warns Europol

2023-03-28
Bullfrag
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs like ChatGPT) and their potential misuse by criminals. The Europol report explicitly warns that these AI systems could plausibly lead to harms including fraud, terrorism, and cybercrime. However, the article does not describe a concrete incident where harm has already occurred due to AI misuse, but rather focuses on the potential risks and calls for regulation and responsible AI practices. Therefore, this fits the definition of an AI Hazard, as it concerns plausible future harms stemming from AI misuse.

"Bell" from Europol: Potential Criminal Uses of ChatGPT

2023-03-30
Valley Post
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses ChatGPT, an AI system, and its potential misuse by criminals, which could plausibly lead to harms such as fraud, disinformation, and cybercrime. Since no actual harm or incident is reported, but credible risks are identified, this qualifies as an AI Hazard. The focus is on raising awareness and anticipating possible abuse rather than reporting a concrete AI Incident or complementary information about responses to past incidents.

3 ways ChatGPT can help criminals take advantage of you

2023-03-28
BGR
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions ChatGPT, a generative AI system, and details how its use by criminals can lead to harms such as phishing attacks, disinformation campaigns, and the creation of malicious code. These harms align with violations of rights and harm to communities. However, the article does not describe a specific event where such harm has already occurred; rather, it warns about potential and ongoing misuse. This fits the definition of an AI Hazard, as the circumstances described could plausibly lead to AI Incidents. The article also notes ongoing efforts to mitigate these risks, but its main focus is the potential for harm rather than a concrete incident or a governance response. Hence, the classification is AI Hazard.