GhostGPT: AI Tool for Cybercrime


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

GhostGPT, an uncensored AI chatbot, is being used by cybercriminals to create malware and phishing scams. Reportedly developed from a jailbroken version of ChatGPT or a similar model, it lacks ethical safeguards and offers features such as a no-log policy and easy access via Telegram. This raises significant concerns about the misuse of AI in cybercrime. [AI generated]

Why's our monitor labelling this an incident or hazard?

GhostGPT is an AI system (a jailbroken LLM) whose misuse by threat actors is enabling business email compromise scams and malware creation. This represents direct, ongoing harm to victims via AI-enabled phishing and cyberattacks, fitting the definition of an AI Incident. [AI generated]
AI principles
Safety
Robustness & digital security
Accountability
Privacy & data governance
Transparency & explainability
Respect of human rights

Industries
Digital security
IT infrastructure and hosting
Financial and insurance services
Government, security, and defence

Affected stakeholders
Consumers
Business

Harm types
Economic/Property
Reputational
Public interest
Human or fundamental rights
Psychological

Severity
AI incident

AI system task
Content generation
Interaction support/chatbots


Articles about this incident or hazard


Introducing GhostGPT - the latest criminal malware creator

2025-01-24
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
GhostGPT is an AI system (a jailbroken LLM) whose misuse by threat actors is enabling business email compromise scams and malware creation. This represents direct, ongoing harm to victims via AI-enabled phishing and cyberattacks, fitting the definition of an AI Incident.

New GhostGPT Chatbot Creates Malware and Phishing Emails

2025-01-24
eWEEK
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (GhostGPT) being actively misused by criminals to create phishing campaigns and malware. This misuse has directly led to harmful outcomes by producing more convincing, higher-scale cyberattacks, qualifying as an AI Incident.

How Hackers Use GhostGPT to Generate Malware & Exploits?

2025-01-27
Analytics Insight
Why's our monitor labelling this an incident or hazard?
Hackers are directly deploying an AI system (GhostGPT) to generate complex malicious code that evades detection and facilitates cyberattacks. This constitutes actual, realized harm driven by the AI's outputs, so it is classified as an AI Incident.

GhostGPT offers AI coding, phishing assistance for cybercriminals

2025-01-24
SC Media
Why's our monitor labelling this an incident or hazard?
GhostGPT is explicitly used by attackers to generate malicious content—phishing templates and malware code—leading to actual harms (successful phishing and ransomware campaigns). This constitutes an AI Incident, as the AI system’s use directly contributes to violations of rights and harms to communities.

Introducing GhostGPT -- The New Cybercrime AI Used By Hackers

2025-01-23
Forbes
Why's our monitor labelling this an incident or hazard?
The article describes a newly discovered AI system (GhostGPT) that is already being deployed by hackers to create malware and conduct phishing scams, directly facilitating harm. This constitutes an AI Incident because the misuse of the AI has realized harms to individuals and organizations through cybercrime.

GhostGPT: Uncensored Chatbot Used by Cyber Criminals for Malware Creation, Scams

2025-01-23
TechRepublic
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies GhostGPT as an AI system (a chatbot likely based on a jailbroken large language model) used by criminals to create phishing emails and malware, directly facilitating cybercrime. This use leads to realized harms including scams and malware distribution, which constitute violations of rights and harm to communities. The AI system's development and use are central to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

GhostGPT: A Malicious AI Chatbot for Hackers

2025-01-24
Security Boulevard
Why's our monitor labelling this an incident or hazard?
GhostGPT is an AI system (a chatbot similar to ChatGPT) used maliciously to create malware and phishing emails, directly leading to harms such as fraud and cyberattacks. The article details how the AI's uncensored nature lets threat actors bypass ethical safeguards and easily produce harmful content, which has already resulted in cybercrime incidents, including violations of rights and harm to communities through scams and cyberattacks. This meets the criteria for an AI Incident because the AI system's use has directly led to realized harm.

New GhostGPT AI Chatbot Facilitates Malware Creation and Phishing

2025-01-23
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
GhostGPT is an AI system (a generative AI chatbot) whose use directly leads to harms including cybercrime, phishing, and malware creation, causing harm to individuals and communities. The article details that the tool is actively sold to and used by criminals, resulting in realized harm. This therefore qualifies as an AI Incident, because the AI system's use has directly led to significant harms.