Meta’s AI misfires cause misinformation, censorship and forced follows

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta’s AI systems have misfired on several fronts: its chatbot continued to name Joe Biden as president days after Donald Trump’s inauguration, its content moderation algorithms blocked contraceptive providers, and its follower-transfer algorithm automatically forced US users to follow Trump’s accounts. The failures prompted internal emergency procedures and public outcry over misinformation, rights violations and censorship.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used by Meta for content moderation and enforcement of platform policies. The AI system's use has directly led to the suppression of contraceptive-related content, which harms users' access to important health information and services, constituting harm to communities and a violation of rights. The blocking and shadowbanning of accounts and posts is a direct consequence of the AI moderation system's operation, including errors and overreach. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.[AI generated]
AI principles
Accountability; Fairness; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology; Government, security, and defence

Affected stakeholders
Consumers; Business; General public

Harm types
Economic/Property; Reputational; Human or fundamental rights; Public interest

Severity
AI incident

Business function:
Monitoring and quality control; Citizen/customer service

AI system task:
Interaction support/chatbots; Content generation; Organisation/recommenders


Articles about this incident or hazard

Meta accused of "forcing" users to follow Donald Trump on Facebook and Instagram

2025-01-23
Prensa Libre
Why's our monitor labelling this an incident or hazard?
The article involves AI systems on social media platforms that manage follows and content, but no direct or indirect harm from AI malfunction or misuse is confirmed. The forced follows and difficulties unfollowing are user complaints and platform technical issues; Meta denies forcing follows and attributes delays to the account transition. No explicit or inferred harm, such as rights violations or health impacts, is reported. The article mainly provides updates and clarifications from Meta about ongoing platform issues and user concerns, so it fits the definition of Complementary Information rather than an AI Incident or Hazard.
Mark Zuckerberg plays along with Donald Trump: Meta allegedly hiding and blocking contraceptive ads

2025-01-24
El Español
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for content moderation and enforcement of platform policies. The AI system's use has directly led to the suppression of contraceptive-related content, which harms users' access to important health information and services, constituting harm to communities and a violation of rights. The blocking and shadowbanning of accounts and posts is a direct consequence of the AI moderation system's operation, including errors and overreach. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.
Meta forces many users to follow Donald Trump on Facebook and Instagram

2025-01-23
Cubadebate
Why's our monitor labelling this an incident or hazard?
The event describes an AI-driven process (algorithmic account management and follower transfer) that directly caused users to follow political accounts without their explicit consent, harming user autonomy and potentially violating rights related to control over one's social media experience. The AI system's role is pivotal: it automatically transferred followers and re-followed accounts, causing realized harm in the form of user frustration and perceived manipulation. This is therefore classified as an AI Incident, as the AI system's use directly harmed users' control over their accounts.
Meta seeks an urgent fix for its AI chatbot's confusion over the name of the US president

2025-01-24
Forbes México
Why's our monitor labelling this an incident or hazard?
The Meta AI chatbot is an AI system that generates responses to user queries. Its failure to update the name of the current US president, continuing instead to provide outdated information, constitutes a malfunction. This malfunction directly spread misinformation to users, harming communities by causing confusion and potentially influencing political perceptions. Other related platform issues (forced re-following of political figures, blocked hashtag searches) further indicate systemic problems affecting user rights and information access. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident.