Cardano Founder Charles Hoskinson Warns of AI Censorship by Big Tech


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cardano co-founder Charles Hoskinson warned that AI alignment training by major tech firms (OpenAI, Microsoft, Meta, Google) risks becoming a form of censorship that restricts access to knowledge. In social media posts, he shared examples of GPT-4 and Claude refusing technical prompts, arguing that a few unaccountable entities could decide what information future generations see.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems (generative AI chatbots) and discusses how their content filtering (alignment training) could plausibly restrict access to knowledge, a harm to communities and a potential violation of rights. However, no actual harm or incident has occurred yet; the concerns are about potential future censorship and information control. This therefore qualifies as an AI Hazard: it plausibly could lead to an AI Incident involving censorship and restricted knowledge access, but no incident has been reported.[AI generated]
AI principles
Accountability
Transparency & explainability
Respect of human rights
Democracy & human autonomy
Fairness

Industries
Media, social platforms, and marketing
Education and training
IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Public interest
Human or fundamental rights

Severity
AI hazard

Business function
Research and development
Citizen/customer service
Monitoring and quality control

AI system task
Content generation
Interaction support/chatbots


Articles about this incident or hazard


Cardano Founder Voices Concerns About "AI Censorship"

2024-07-01
u.today

Cardano founder Charles Hoskinson raises AI censorship concerns | AI cardano | CryptoRank.io

2024-07-01
CryptoRank
Why's our monitor labelling this an incident or hazard?
The article centers on concerns about AI censorship and control by dominant companies, illustrated with examples of AI models withholding potentially dangerous information. These concerns point to potential future harms from restricted access to knowledge and possible misinformation or manipulation, but no actual harm or incident is reported. The discussion concerns risks and governance issues, making this a plausible AI Hazard scenario rather than an AI Incident. The article also covers broader societal and governance responses and opinions, but these do not constitute Complementary Information about a specific incident. The event is therefore best classified as an AI Hazard.

Cardano Founder Charles Hoskinson Sounds Alarm on AI Censorship and Bias by Big Tech

2024-07-01
Bitcoinik
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots such as ChatGPT and Claude) and discusses their use and training by big tech companies. The concerns raised relate to AI censorship and bias, which could plausibly lead to harms such as violation of the right to information and biased knowledge dissemination. However, no direct or indirect harm has been reported as having occurred. The event is thus best classified as an AI Hazard, highlighting credible risks of future harm from AI censorship and bias, rather than as an AI Incident or Complementary Information.

Charles Hoskinson AI Censorship Concerns: Cardano Co-Founder Criticizes Selective AI Training - The Bit Journal

2024-07-02
The Bit Journal
Why's our monitor labelling this an incident or hazard?
The article centers on the plausible risks of AI censorship through selective training and the concentration of power among a few tech companies. It describes no actual AI system causing harm or censorship at this time, only concerns and warnings about what could happen. This fits the definition of an AI Hazard: left unchecked, it plausibly could lead to harms such as violations of rights or harm to communities, but no realized harm is reported.