Malicious AI Chatbots Enable Cybercrime Surge on the Dark Web

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI tools like FraudGPT and WormGPT, designed without ethical safeguards, are being sold and used on the dark web to facilitate phishing, malware creation, and business email compromise attacks. These generative AI systems lower barriers for cybercriminals, directly enabling large-scale cyberattacks and financial harm to organizations and individuals.[AI generated]

Why's our monitor labelling this an incident or hazard?

The chatbots mentioned are AI systems based on generative AI technology (GPT-3) that are being used maliciously to create realistic phishing content and malware. This use directly harms people by enabling fraud and cybercrime. Therefore, this constitutes an AI Incident, as the AI system's use has directly led to harm through cybercriminal activities.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Privacy & data governance; Respect of human rights; Transparency & explainability

Industries
Digital security; Financial and insurance services; IT infrastructure and hosting

Affected stakeholders
Consumers; Business

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Beware of FraudGPT, the rogue AI chatbot - ET CISO

2023-08-01
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The chatbots mentioned are AI systems based on generative AI technology (GPT-3) that are being used maliciously to create realistic phishing content and malware. This use directly harms people by enabling fraud and cybercrime. Therefore, this constitutes an AI Incident, as the AI system's use has directly led to harm through cybercriminal activities.

Beware of FraudGPT, the rogue AI chatbot - ET Telecom

2023-08-01
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The chatbots described are AI systems based on generative AI technology (GPT-3) that generate realistic malicious content used in cybercrime. Their use has directly led to harms such as phishing, malware distribution, and scams, which constitute harm to persons and communities. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI systems in causing realized harm through cybercriminal activities.

Beware of FraudGPT, the rogue AI chatbot | Ahmedabad News - Times of India

2023-07-31
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (FraudGPT and similar chatbots) explicitly described as generative AI based on GPT-3 technology. Their use by cybercriminals to create phishing emails, malware, and hacking tools directly leads to harms such as fraud, theft, and compromised financial and informational security, which qualify as harm to communities and violations of rights. Therefore, this constitutes an AI Incident due to the realized harm caused by the malicious use of AI systems.

Beware of FraudGPT, the rogue AI chatbot - ET CIO

2023-08-01
ETCIO.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI chatbots based on GPT-3 technology) being used maliciously to generate realistic phishing emails and malware. This use directly leads to harm by enabling cybercrime, fraud, and deception, which harm individuals and communities. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in criminal activities.

Beware: Cybercriminals using 'limitless' AI tools like FraudGPT or WormGPT for frauds

2023-07-31
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (FraudGPT and WormGPT) based on generative AI technology (GPT-3) being used by criminals to produce harmful outputs like phishing emails and malware. These outputs directly harm individuals and communities by enabling fraud and cybercrime. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through malicious activities.

What's FraudGPT and how criminals are using this AI chatbot to target innocent internet users

2023-07-31
Economic Times
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (FraudGPT) explicitly used by criminals to generate fraudulent and deceptive content that causes direct harm to people and organizations through scams, data breaches, and malware infections. This constitutes an AI Incident because the AI system's use has directly led to realized harms such as financial loss and compromised security. The involvement of the AI system in generating convincing fraudulent content is central to the harm described.

What is WormGPT? How is it Different from ChatGPT?

2023-07-31
My Mobile
Why's our monitor labelling this an incident or hazard?
The article describes WormGPT as an AI system explicitly created and used for malicious activities, including malware creation and cyberattacks, which constitute harm to computer systems and networks (harm to property and communities). The AI system's development and use directly lead to these harms. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's malicious use.

WormGPT And FraudGPT Emerge As Scammers Weaponize AI Chatbots To Steal Data

2023-07-28
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (WormGPT and FraudGPT) that are generative AI chatbots based on large language models, repurposed or designed for malicious use. Their use directly leads to harms including data theft, financial fraud, and cybercrime, which are harms to individuals and communities. The AI systems are central to the incident as they automate and enhance the sophistication of phishing and fraud attacks, increasing their success rates. This meets the definition of an AI Incident because the AI system's use has directly led to significant harm.

There's no reason to panic over WormGPT | TechCrunch

2023-08-01
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (WormGPT and FraudGPT) explicitly described as large language models used for malicious purposes. While these AI systems have been used or could be used to generate phishing emails and malicious code, the article indicates that the harm caused so far is minimal and the models are not very effective. There is no evidence of significant realized harm (such as successful large-scale phishing attacks or breaches) directly caused by these AI systems. The article mainly provides an analysis and assessment of the threat level, concluding that the threat is not as severe as some headlines suggest. Therefore, this is best classified as Complementary Information, as it provides context and evaluation of AI-related threats without reporting a specific AI Incident or imminent AI Hazard.

What Is FraudGPT? How to Protect Yourself From This Dangerous Chatbot

2023-08-01
MakeUseOf
Why's our monitor labelling this an incident or hazard?
The article describes FraudGPT as an AI system designed and used explicitly to enable cybercriminals to commit fraud and cybercrime. The harms include financial fraud, identity theft, and malware attacks, which are direct harms to individuals and communities. Since the AI system's use has already led to these harms, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people and communities.

Phishing as a Service

2023-07-31
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
FraudGPT is an AI system (a large language model) explicitly described as being used to generate malicious code, malware, phishing pages, and scam content, which are directly linked to cybercrime harms such as fraud, unauthorized transactions, and data breaches. The article reports that this tool is actively circulating and used by cybercriminals, indicating realized harm rather than just potential risk. The harms include financial fraud, violation of property rights, and harm to communities through cybercrime. Thus, this qualifies as an AI Incident because the AI system's use has directly led to significant harms as defined in the framework.

Hackers Use FraudGPT to Train on Malware-Focused Data -- Evil AI Chatbot Counterpart?

2023-08-01
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (FraudGPT) developed and used by hackers to create malware and phishing scams, harming individuals and communities through cybercrime. The AI system's use directly leads to harm by enabling and enhancing cybercriminal operations, meeting the criteria for an AI Incident under (a) injury or harm to persons or groups, and (e) other significant harms where the AI's role is pivotal. Therefore, the event is classified as an AI Incident.

AI chatbots in the hands of hackers: The latest threats

2023-07-31
Komando.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots trained on malware-related data) being used maliciously to produce phishing emails and fraud-related content, which directly causes harm to people through scams and financial fraud. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals (harm to people and communities).

'DarkBERT' GPT-Based Malware Trains Up on the Entire Dark Web

2023-08-01
Dark Reading
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models trained on Dark Web data) being used maliciously to facilitate cybercrime, including phishing and exploitation of vulnerabilities. These activities directly lead to harms such as violations of rights, harm to communities, and potential disruption of critical infrastructure. The AI systems are already in use or imminent deployment, indicating realized harm rather than just potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

WormGPT: Business email compromise amplified by ChatGPT hack

2023-07-31
Security Boulevard
Why's our monitor labelling this an incident or hazard?
WormGPT is an AI system (a generative large language model) explicitly described as being used to produce fraudulent emails that have caused or are causing business email compromise fraud, a form of financial harm to organizations. The article details how the AI's outputs are used to deceive employees into transferring money to criminals, fulfilling criterion (a), injury or harm to persons or groups (financial harm to businesses and individuals). The AI system's development and use are central to the incident, and the harm is realized, not merely potential. Thus, this is an AI Incident.

After WormGPT and FraudGPT, DarkBERT and DarkBART are on the Horizon

2023-08-01
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as malicious chatbots leveraging generative AI capabilities to conduct cybercrime. Their use is directly linked to potential harms including attacks on critical infrastructure, fraud, and malware distribution, which fall under harms (a), (b), and (d) in the AI Incident definition. While these harms have not yet fully materialized, the article notes that similar tools (WormGPT, FraudGPT) are already causing harm and expects the new tools to exacerbate these issues. Because the article focuses on the imminent threat posed by the development of DarkBERT and DarkBART rather than on confirmed widespread harm, the event is best classified as an AI Hazard rather than an AI Incident.

AI Risks Becoming More Potent as 'FraudGPT' Goes on Sale on the Dark Web

2023-07-31
TradingView
Why's our monitor labelling this an incident or hazard?
The article describes FraudGPT as an AI tool developed specifically for malicious use, including phishing and identity fraud, which are harmful activities causing direct harm to people and communities. The AI system's involvement is explicit and its use has already led to harm, fulfilling the criteria for an AI Incident. The mention of its sale and active use on the dark web confirms ongoing harm rather than just potential risk. Therefore, this event is classified as an AI Incident.

AI Risks Becoming More Potent as 'FraudGPT' Goes on Sale on the Dark Web

2023-07-31
BeInCrypto
Why's our monitor labelling this an incident or hazard?
FraudGPT is an AI system designed and used for malicious purposes, including phishing and fraud, which directly causes harm to individuals and communities. The article details active use and sale of this AI tool facilitating cybercrime, meeting the criteria for an AI Incident due to realized harm. The involvement of AI in enabling these harms is explicit and central to the event. Therefore, the classification is AI Incident.

KI und Regulierung (AI and Regulation)

2023-08-03
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article mentions AI-related risks and regulatory discussions but does not report a concrete incident or hazard involving AI systems causing or plausibly causing harm. It is a general discussion about AI's potential dangers and regulatory considerations, without detailing a specific AI Incident or AI Hazard. Therefore, it fits best as Complementary Information, providing context and background on AI and regulation rather than reporting a new incident or hazard.

Sechs Meilensteine, wie Künstliche Intelligenz die Politik verändern könnte (Six Milestones in How Artificial Intelligence Could Change Politics)

2023-08-02
heise online
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the potential and speculative future impacts of AI on political processes, including risks like misinformation and AI-driven political messaging, but does not describe any concrete event where AI caused harm or disruption. It focuses on outlining milestones and possible developments rather than reporting an incident or hazard. There is no direct or indirect harm reported, nor a credible imminent risk of harm from a specific AI system malfunction or misuse. Hence, it is not an AI Incident or AI Hazard. Instead, it provides valuable context and analysis about AI's societal implications, which aligns with the definition of Complementary Information.

Aufsätze und Hausaufgaben mit ChatGPT: Wie sollen Schulen und Eltern damit umgehen? (Essays and Homework with ChatGPT: How Should Schools and Parents Handle It?)

2023-08-04
OVB Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and similar tools) used by students. However, it does not describe any realized harm such as injury, rights violations, or other significant harms caused by AI use. The harms discussed (academic dishonesty, learning impact) are indirect and societal but not framed as an AI Incident with direct or indirect harm as defined. There is no specific event of harm or credible near-miss described. The article mainly discusses the challenges, debates, and responses around AI in education, including calls for policy and educational changes. This fits the definition of Complementary Information, as it provides supporting context and societal/governance responses to AI use in schools without reporting a new AI Incident or Hazard.

Kriminelle Intelligenz WormGPT - ChatGPTs böser kleiner Bruder - Mittelstand Cafe (Criminal Intelligence WormGPT: ChatGPT's Evil Little Brother)

2023-08-02
Mittelstand Cafe
Why's our monitor labelling this an incident or hazard?
WormGPT is an AI system based on a large language model that is explicitly used for criminal purposes, including generating phishing emails and exploiting social engineering techniques. The article details how this AI is actively used by hackers to cause financial and data harm to businesses and individuals. The harms described include violations of property and harm to communities through cybercrime. Since the AI system's use directly leads to these harms, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

FraudGPT: Phishing-Mails und Malware auf Knopfdruck (FraudGPT: Phishing Emails and Malware at the Push of a Button)

2023-08-02
connect-living
Why's our monitor labelling this an incident or hazard?
The article describes FraudGPT as an AI system (a large language model-based chatbot) explicitly designed to create phishing emails and malware, and to assist criminals in targeting victims. This directly involves the use of an AI system in causing harm (fraud, malware infections) to people and communities. The harms are realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to significant harms.

ChatGPT & Google Bard clones created by bank-draining cybercriminals on dark web

2023-08-07
The Irish Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (WormGPT and FraudGPT) developed and used by cybercriminals to generate phishing emails, hacking codes, and undetectable malware. These AI systems facilitate scams that have already caused harm by stealing money and personal information from victims. The AI systems' role is pivotal in lowering barriers for novice cybercriminals and producing highly persuasive scam content, directly leading to harm. Hence, this is an AI Incident involving the use of AI systems to cause harm to persons and communities through cybercrime.

Criminals Have Created Their Own ChatGPT Clones

2023-08-07
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models similar to ChatGPT) developed and used by criminals to generate phishing emails and malware. These AI systems are used maliciously, directly leading to harm by enabling scams and cybercrime. The harm includes deception, potential financial loss, and security breaches, which fall under harm to communities and violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through criminal activities.

ChatGPT & Google Bard clones created by bank-draining cybercriminals on dark web

2023-08-07
The US Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (WormGPT and FraudGPT) developed and used by cybercriminals to generate phishing emails, hacking codes, and undetectable malware. These AI systems facilitate scams that steal money and personal data, causing direct harm to victims. The involvement of AI in enabling these harms is clear and direct, meeting the criteria for an AI Incident under the definitions provided. Therefore, this event is classified as an AI Incident.

How PoisonGPT and WormGPT Brought the Generative AI Boogeyman to Life

2023-08-09
Techopedia.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (malicious LLMs WormGPT and PoisonGPT) used to generate phishing emails that have led to a significant increase in social engineering attacks, which cause harm to individuals and organizations by tricking victims into compromising their credentials or systems. This fits the definition of an AI Incident because the AI system's use has directly led to harm (phishing scams and cybercrime). The article also discusses the nature of the AI system's use (weaponized generative AI for phishing) and the resulting realized harms, not just potential risks. Therefore, the event is best classified as an AI Incident.

Criminals Have Created Their Own ChatGPT Clones

2023-08-07
WIRED UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (WormGPT and FraudGPT) developed by criminals to generate phishing emails and malware, which directly facilitate harm to people by increasing the effectiveness and accessibility of cybercrime. The AI systems' use is malicious and leads to violations of rights and harm to communities. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm.

Meet the Brains Behind the Malware-Friendly AI Chat Service 'WormGPT'

2023-08-08
Security Boulevard
Why's our monitor labelling this an incident or hazard?
WormGPT is an AI system (a large language model chatbot) explicitly designed and used to generate malicious software and phishing content. The article documents that it has been used to create sophisticated phishing emails and malware, directly leading to harms such as cybercrime, fraud, and potential data breaches. This constitutes violations of law and harm to communities. The AI system's development and use are central to these harms, meeting the criteria for an AI Incident. The article does not merely discuss potential risks or responses but reports on realized harms caused by the AI system's outputs.

Criminals Have Created Their Own ChatGPT Clones - WIRED

2023-08-07
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbot clones based on large language models) developed and used by criminals to generate scam content and facilitate fraud, which constitutes harm to individuals (harm to persons through fraud and deception). The AI systems' use directly leads to realized harm, fulfilling the criteria for an AI Incident. The article provides evidence of actual use and sales of these AI systems for malicious purposes, not just potential or hypothetical risks. Therefore, this is classified as an AI Incident.

Pakar Ungkap Kecerdasan Buatan AI Permudah Aksi Kejahatan Siber (Experts Reveal AI Makes Cybercrime Easier)

2023-08-16
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (dark web chatbots like DarkBERT, WormGPT, FraudGPT, and misuse of ChatGPT) that are explicitly described as enabling cybercriminals to conduct phishing and fraud, causing harm to individuals and institutions. The AI systems' use is directly linked to realized harms such as identity theft, financial fraud, and reputational damage, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but reports ongoing criminal activities facilitated by AI, thus it is not a hazard or complementary information.

Hati-Hati, ChatGPT Bikin Manusia Makin Lemot (Beware: ChatGPT Is Making People Slower)

2023-08-14
SINDOnews.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses its use and potential negative effects on human cognition. However, it does not describe any direct or indirect harm that has already occurred due to the AI's development, use, or malfunction. The concerns raised are about plausible future harm (cognitive decline from overuse), but no specific incident or event of harm is reported. Therefore, this fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm is documented yet.

Perguruan Tinggi Bakal Kembalikan Ujian Tulis Tangan, Lawan AI ChatGP : Okezone Edukasi (Universities to Bring Back Handwritten Exams to Counter ChatGPT AI)

2023-08-16
https://edukasi.okezone.com/
Why's our monitor labelling this an incident or hazard?
The article centers on educational institutions' strategies to address the potential misuse of AI tools like ChatGPT by students, which is a response to a perceived risk rather than a realized harm. There is no mention of any injury, rights violation, disruption, or other harm caused by AI use. The content is about managing AI's impact on academic integrity and teaching methods, which qualifies as complementary information about societal and governance responses to AI rather than an incident or hazard.