Defense Distributed Launches 'GatGPT', an Unfiltered Firearms-Focused AI Chatbot



Defense Distributed, led by Cody Wilson, has released 'GatGPT', an AI chatbot trained on firearms data and designed without standard safety filters. Promoted as a 'Digital Second Amendment' tool, the chatbot raises concerns that its uncensored design could enable the unregulated dissemination of dangerous information and cause harm in the future, though no incidents have been reported so far.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (the GatGPT chatbot) designed without safety filters and with a political agenda of avoiding content moderation. Although no direct harm is reported, removing safety constraints from a content-generating AI system, particularly one trained on firearm-related data, could plausibly lead to misinformation, incitement, or other societal harms. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the release of a potentially harmful AI system.[AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Respect of human rights; Transparency & explainability; Human wellbeing; Democracy & human autonomy

Industries
Government, security, and defence; Digital security; Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Physical (injury); Physical (death); Public interest; Human or fundamental rights

Severity
AI hazard

Business function:
Other

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard


'Anti-Woke' AI Chatbot 'GatGPT' Promises to Remove Constraints of Leftist 'Safety Filters'

2023-09-19
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the GatGPT chatbot) designed without safety filters and with a political agenda of avoiding content moderation. Although no direct harm is reported, removing safety constraints from a content-generating AI system, particularly one trained on firearm-related data, could plausibly lead to misinformation, incitement, or other societal harms. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the release of a potentially harmful AI system.

Introducing the Digital Second Amendment: Empowering Users with Cutting-Edge Digital Tools

2023-09-19
Nwo Report
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (GatGPT), explicitly described as an uncensored chatbot, which fits the AI system definition. However, there is no indication that the AI system has caused or led to any harm (physical, legal, societal, or otherwise) at this time. The concerns raised are about potential censorship and political control, but these are framed as motivations for the chatbot's creation rather than as realized harms or credible imminent risks. The article mainly discusses the broader societal and governance context, including debates and calls for regulation, which aligns with the definition of Complementary Information. There is no direct or indirect harm reported, nor a plausible immediate hazard described. Hence, the classification is Complementary Information.

Defense Distributed Unveils 'Gatgpt' - Championing the Digital Second Amendment and AI Freedom

2023-09-22
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (a large language model) developed by Defense Distributed, trained on firearms data, which is intended to empower users in a way that could challenge existing regulations and public safety norms. Although no direct or indirect harm is reported, the nature of the AI system—providing potentially sensitive or dangerous information related to firearms—creates a credible risk of future harm, such as enabling unregulated firearm production or misuse. This fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to incidents involving harm to persons or communities. There is no evidence of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it centers on the AI system and its potential implications.

Defense Distributed Unveils 'Gatgpt' - Championing the Digital Second Amendment and AI Freedom

2023-09-20
cryptorank.io
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Gatgpt) and its development and intended use. However, there is no report or indication of any harm caused or plausible harm that could arise imminently from this AI system's use. The focus is on ideological and political advocacy regarding AI regulation and digital rights, not on any realized or imminent harm. Thus, it does not qualify as an AI Incident or AI Hazard. Instead, it provides complementary information about AI ecosystem developments and societal discourse around AI and rights, fitting the Complementary Information category.