Dark Web AI Clones FraudGPT and WormGPT Fuel Cybercrime

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

On the dark web, cybercriminals pay monthly subscriptions for the malicious LLMs FraudGPT and WormGPT, "evil twins" of ChatGPT, to automate phishing emails, malware, deepfake scams, and vulnerability exploits. Discovered by Netenrich and detailed in IJSR CSEIT reports, these unrestricted AI tools lower the barrier to sophisticated online fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (WormGPT and FraudGPT) that are used maliciously to generate phishing emails, malware, and hacking tools. These uses directly lead to harm by enabling cybercrime and fraud, which affect individuals and communities. The AI systems' lack of ethical safeguards and their subscription-based availability to criminals further confirm their role in causing harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in causing realized harm through cybercrime.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Privacy & data governance; Respect of human rights; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Digital security; IT infrastructure and hosting; Financial and insurance services; Media, social platforms, and marketing

Affected stakeholders
Consumers; Business

Harm types
Economic/Property; Reputational; Psychological; Human or fundamental rights; Public interest

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Content generation; Interaction support/chatbots; Reasoning with knowledge structures/planning


Articles about this incident or hazard

ChatGPT's evil siblings are changing the world of online fraud. This is what a subscription costs cybercriminals

2024-01-24
Genbeta
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (WormGPT and FraudGPT) that are used maliciously to generate phishing emails, malware, and hacking tools. These uses directly lead to harm by enabling cybercrime and fraud, which affect individuals and communities. The AI systems' lack of ethical safeguards and their subscription-based availability to criminals further confirm their role in causing harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in causing realized harm through cybercrime.
Not all AIs are good: FraudGPT and WormGPT are "evil" versions for cybercriminals that can be bought on the dark web

2024-01-23
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (WormGPT and FraudGPT) that are developed and used for malicious purposes, including generating malware and phishing content. These uses directly lead to harms such as fraud, cyberattacks, and potential injury to individuals and organizations. The AI systems' role is pivotal as they enable more sophisticated and effective cybercrime. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to realized harms related to cybersecurity and fraud.
FraudGPT is ChatGPT's evil twin. And dark web cybercriminals are paying for it

2024-01-22
Xataka
Why's our monitor labelling this an incident or hazard?
FraudGPT and WormGPT are AI systems explicitly mentioned as generative AI models used maliciously to produce phishing emails, malware, and other cybercrime tools. Their use directly leads to harm by enabling cybercriminals to deceive users and conduct fraud, which constitutes harm to people and communities. The article reports on the active use and subscription-based availability of these AI tools for malicious purposes, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident due to the direct link between the AI systems' use and realized harm through cybercrime.
This is why cybercriminals no longer make spelling mistakes

2024-01-24
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (FraudGPT) developed and used to produce malware and phishing content, which directly leads to harm by enabling cybercrime and theft of personal information. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident due to the AI system's use causing realized harm.
Cybercriminals are using artificial intelligence to steal: this is how they are operating

2024-01-23
Noticias RCN | Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (FraudGPT and WormGPT) being used maliciously to generate phishing emails and malware, which directly leads to harm through cybercrime (fraud, theft). This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and communities. The harm is realized, not just potential, as these tools are actively used by cybercriminals. Therefore, the event is classified as an AI Incident.
Warning over FraudGPT, the evil ChatGPT clone for cyber scams

2024-01-23
Tiempo
Why's our monitor labelling this an incident or hazard?
FraudGPT and WormGPT are AI systems explicitly mentioned as being used to create phishing emails, malware, and other malicious content. Their use by cybercriminals has directly led to harm through cyber scams and privacy violations. The article details how these AI models enable more sophisticated and convincing attacks, increasing the risk and actual occurrence of harm. Therefore, this event qualifies as an AI Incident because the AI systems' use has directly led to harm to people through cybercrime.
ChatGPT replicas available on the dark web boost cybercriminals' AI-powered scams

2024-01-23
Rosario3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (FraudGPT, WormGPT) used by cybercriminals to conduct phishing, malware creation, and deepfake-based scams. These activities have caused actual harm to victims, including financial loss and reputational damage, fulfilling the criteria for an AI Incident. The AI systems' development and use are central to the harm described, and the harms are realized, not merely potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
FraudGPT: meet the most dangerous AI on the deep web

2024-01-25
El Nuevo Diario
Why's our monitor labelling this an incident or hazard?
The article describes FraudGPT as an AI-powered malicious tool actively used in the Deep Web to perpetrate fraud, create malware, and spread disinformation. These activities have directly led to harms such as financial fraud, identity theft, and societal manipulation, which fall under harms to persons and communities. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Using ChatGPT to create your resume? Experts weigh in on what not to do

2024-02-08
India Today
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system (ChatGPT) in resume creation but does not report any direct or indirect harm resulting from its use. It does not describe any incident where AI caused injury, rights violations, disruption, or other harms. Nor does it suggest a credible risk of future harm from AI use in this context. Instead, it offers expert advice to improve outcomes and avoid common mistakes, which qualifies as complementary information about AI's role in recruitment and job seeking. Therefore, the event is best classified as Complementary Information.
Cybercriminals are creating their own AI chatbots to support hacking and scam users

2024-02-08
The Conversation
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs like ChatGPT and malicious variants like WormGPT and FraudGPT) being used by criminals to perpetrate scams, phishing, hacking, and malware creation. These uses have directly caused harm to people through fraud and privacy breaches, fulfilling the criteria for an AI Incident. The harms include violations of privacy, financial harm to individuals, and broader societal harm from cybercrime. Therefore, this event is classified as an AI Incident.
ChatGPT 'Lobotomized'? Performance Crash Sees Users Leaving in Droves

2024-02-08
Dani of DaniWeb.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose degraded performance is causing users to cancel subscriptions and express frustration. While this reflects a negative user experience and potential economic harm to the provider, it does not meet the criteria for an AI Incident since no direct or indirect harm to people, rights, infrastructure, or communities is described. Nor does it qualify as an AI Hazard because the article does not suggest plausible future harms beyond current dissatisfaction. The main focus is on describing the current state and user feedback, which aligns with Complementary Information as it enhances understanding of the AI ecosystem and user impact without reporting a new harm or risk.
Artificial Intelligence Testifies at Pennsylvania House Hearing

2024-02-07
Government Technology
Why's our monitor labelling this an incident or hazard?
The article details a legislative hearing involving AI experts and ChatGPT providing insights on AI's future and regulation. It does not describe any incident or hazard where AI caused or could cause harm. The AI system's involvement is limited to providing testimony and information, with no indication of malfunction, misuse, or harm. Therefore, this is complementary information about AI governance and societal response, not an incident or hazard.
What my classes learned about the ChatGPT revolution - MinnPost

2024-02-08
MinnPost
Why's our monitor labelling this an incident or hazard?
The article does not describe any AI Incident or AI Hazard. It focuses on educational experiences and opinions about ChatGPT's effectiveness and limitations. There is no indication that ChatGPT's use led to any harm or could plausibly lead to harm. The content is best classified as Complementary Information because it provides contextual insights into AI's role in education and user perceptions, which helps understand the broader AI ecosystem and its impact on teaching and learning.
🔮 The brilliant, complicated simplicity of ChatGPT

2024-02-08
exponentialview.co
Why's our monitor labelling this an incident or hazard?
The content focuses on explaining how ChatGPT operates via system prompts, including legal and ethical guidelines embedded in its instructions. There is no indication of any realized harm (AI Incident) or plausible future harm (AI Hazard) stemming from these instructions or the AI system's behavior. Nor does it report on responses or governance actions related to AI harms. Therefore, it is best classified as Complementary Information, providing context and understanding of the AI system's operation without describing an incident or hazard.