
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Meta’s AI systems have misfired: its chatbot continued to name Joe Biden as president days after Donald Trump’s inauguration, its content moderation algorithms blocked the accounts of contraceptive providers, and its follower-transfer algorithm automatically made US users follow Trump’s accounts. The failures prompted internal emergency procedures and public outcry over misinformation, rights violations and censorship.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for content moderation and enforcement of platform policies. The moderation system's operation, including errors and overreach, directly led to the blocking and shadowbanning of accounts and posts, suppressing contraceptive-related content. This harmed users' access to important health information and services, constituting harm to communities and a violation of rights. Because the AI system's use directly led to harm, the event qualifies as an AI Incident under the framework.[AI generated]