FraudGPT Enables Cybercriminals to Launch AI-Powered Attacks


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

FraudGPT, an AI tool sold on the dark web, is designed to generate malicious content for cyberattacks, such as phishing and malware creation. Unlike ChatGPT, it lacks safety controls, enabling cybercriminals to exploit its capabilities for fraud and data theft, posing significant risks to individuals and organizations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article centers on the potential risks and pitfalls of using ChatGPT for enterprise purposes, particularly data leakage and intellectual property violations. These risks are plausible and credible but have not yet materialized as actual incidents causing harm. The event therefore qualifies as an AI Hazard: it describes circumstances in which the use of an AI system (ChatGPT) could plausibly lead to harm, but no direct or indirect harm has been reported or confirmed in the article. It is not Complementary Information, because the focus is not on responses or updates to a past incident, nor is it Unrelated, since it clearly involves an AI system and its potential risks.[AI generated]
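
The four-way triage applied throughout this page (AI Incident, AI Hazard, Complementary Information, Unrelated) can be read as a simple decision procedure. The Python sketch below is illustrative only: the function and flag names are hypothetical and simplify the OECD framework; this is not the monitor's actual implementation.

    from enum import Enum

    class Label(Enum):
        AI_INCIDENT = "AI Incident"                   # use of the AI system has caused realized harm
        AI_HAZARD = "AI Hazard"                       # harm is plausible but not yet realized
        COMPLEMENTARY = "Complementary Information"   # context or follow-up, no new incident or hazard
        UNRELATED = "Unrelated"                       # no AI system is meaningfully involved

    def triage(involves_ai: bool, harm_realized: bool, plausible_harm: bool) -> Label:
        # Order matters: anything without an AI system falls out first,
        # and realized harm dominates merely plausible harm.
        if not involves_ai:
            return Label.UNRELATED
        if harm_realized:
            return Label.AI_INCIDENT
        if plausible_harm:
            return Label.AI_HAZARD
        return Label.COMPLEMENTARY

On this reading, the FraudGPT coverage below maps to triage(True, True, True) -> AI Incident, while the enterprise-risk pieces map to triage(True, False, True) -> AI Hazard, matching the rationale given for each article.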
AI principles
Accountability; Safety; Robustness & digital security; Privacy & data governance; Respect of human rights; Transparency & explainability; Human wellbeing

Industries
Digital security; IT infrastructure and hosting; Financial and insurance services; Government, security, and defence

Affected stakeholders
Consumers; Business

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI hazard

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


IBM blockchain and AI expert says ChatGPT poses several 'key risks' for enterprise use

2023-08-18
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and pitfalls of using ChatGPT for enterprise purposes, particularly data leakage and intellectual property violations. These risks are plausible and credible but have not yet materialized as actual incidents causing harm. The event therefore qualifies as an AI Hazard: it describes circumstances in which the use of an AI system (ChatGPT) could plausibly lead to harm, but no direct or indirect harm has been reported or confirmed in the article. It is not Complementary Information, because the focus is not on responses or updates to a past incident, nor is it Unrelated, since it clearly involves an AI system and its potential risks.

FraudGPT: Unveiling the dark web's weapon for cyber attack

2023-08-17
The Sociable
Why's our monitor labelling this an incident or hazard?
The article explicitly describes FraudGPT as an AI system that generates malicious content to facilitate cyberattacks, including phishing and malware creation. These activities cause direct harm to individuals and organizations by enabling fraud, data theft, and potential disruption of critical online infrastructure. The AI system's development and use by cybercriminals are central to these harms. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to realized harms.

expert reaction to study measuring political bias in ChatGPT

2023-08-16
Science Media Centre
Why's our monitor labelling this an incident or hazard?
The content centers on expert opinions about a research study assessing political bias in ChatGPT. There is no indication of realized harm, violation of rights, or disruption caused by ChatGPT's outputs. The discussion is about understanding and measuring bias, highlighting the need for transparency and further testing. This fits the definition of Complementary Information, as it provides context and expert insight into AI bias research without describing a new AI Incident or AI Hazard.

IBM blockchain and AI expert says ChatGPT poses several 'key risks' for enterprise use

2023-08-18
cryptonews.com.au
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and hazards of using ChatGPT, an AI system, in enterprise contexts. It does not describe any realized harm or incident but warns about plausible future harms such as data leakage, intellectual property violations, and legal exposure. It therefore fits the definition of an AI Hazard: the described use could plausibly lead to an AI Incident if sensitive data were leaked or misused. No actual incident or harm has occurred yet, and the article is not primarily about responses or updates to past incidents, so it is not Complementary Information; it clearly concerns an AI system and its risks, so it is not Unrelated.