Lizzo Criticizes Social Media Algorithms for Harming Music Promotion


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Lizzo publicly criticized social media algorithms, claiming they are biased and are hurting her ability, and that of other artists, to promote new music. She alleges these AI-driven systems disrupt music-industry marketing, reduce album visibility, and perpetuate discrimination, causing economic harm to artists.[AI generated]

Why's our monitor labelling this an incident or hazard?

Social media algorithms are AI systems that curate and recommend content to users. Lizzo's complaint alleges that these algorithms are malfunctioning, or operating in a way that harms the music industry's ability to promote new releases effectively. Such disruption can be considered harm to the economic and property interests of artists and related communities. Because the alleged harm stems from the use of AI systems (the algorithms) and directly affects the promotion and potential sales of music, this qualifies as an AI Incident under the definition of harm to communities and property through disruption of industry operations.[AI generated]
AI principles
Fairness; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard


When is Lizzo's new album releasing? Rapper claims social media algorithm is negatively affecting her music promotion

2026-05-13
Sportskeeda

Lizzo Says Social Media Algorithms Are 'Destroying the Music Industry' & Messing Up Her Album Promo

2026-05-13
Billboard
Why's our monitor labelling this an incident or hazard?
The social media algorithms involved are AI systems that influence content delivery. Lizzo's statements highlight negative effects on music promotion and alleged bias (racism and fatphobia) in the algorithms. However, the article does not report any realized harm, such as injury, rights violations, or other significant harms, directly caused by the AI systems. The harm described concerns marketing challenges and perceived unfairness, which does not meet the threshold for an AI Incident or AI Hazard. The content mainly provides insight into societal and industry concerns about AI algorithms, fitting the definition of Complementary Information.

Lizzo Says "Racist And Fatphobic" Algorithm Is Hurting Her Album Promotion

2026-05-13
Stereogum
Why's our monitor labelling this an incident or hazard?
The social media algorithms mentioned are AI systems that influence content visibility and user experience. Lizzo's critique points to discriminatory biases ('racist and fatphobic') in these algorithms, which can be interpreted as a violation of rights or harm to communities. However, the article does not describe a specific event where harm has concretely occurred or been legally recognized; instead, it discusses ongoing challenges and systemic effects on album promotion. This aligns with the definition of an AI Hazard, where the AI system's use could plausibly lead to harm (e.g., unfair treatment, discrimination) but no specific incident is detailed. Hence, the classification is AI Hazard rather than AI Incident.