
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Cardano co-founder Charles Hoskinson warned that AI alignment training by major tech firms (OpenAI, Microsoft, Meta, Google) risks censorship by restricting access to knowledge. In social media posts, he shared examples of GPT-4 and Claude refusing technical prompts, arguing that a small number of unaccountable entities could decide what information future generations can access.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article involves AI systems (generative AI chatbots) and discusses content filtering through alignment training that could plausibly lead to harm in the form of restricted access to knowledge, a harm to communities and a potential violation of rights. However, no actual harm has occurred yet; the concerns are about potential future censorship and information control. This therefore qualifies as an AI Hazard: it could plausibly lead to an AI Incident involving censorship and restricted knowledge access, but no such incident has been reported.[AI generated]