AI-Driven Social Media Algorithms Promote Alcohol to French Youth, Raising Health Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

French associations report that AI-powered recommendation algorithms on platforms such as Instagram and TikTok exposed young people to more than 11,000 alcohol-promoting posts between 2021 and 2024. This largely unregulated targeted promotion increases young people's desire to consume alcohol, raising significant public health concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions targeted algorithms used to promote alcohol content on social media, which can be reasonably inferred to involve AI systems for content recommendation and targeting. The exposure of young people to such advertising is linked to increased desire to consume alcohol, which is a health harm and social harm. The AI system's role in enabling this targeted exposure is pivotal in the chain of harm. Thus, this qualifies as an AI Incident due to indirect harm caused by AI-enabled targeted advertising promoting alcohol to minors.[AI generated]
AI principles
Safety, Human wellbeing, Transparency & explainability, Privacy & data governance, Accountability, Democracy & human autonomy, Respect of human rights

Industries
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology

Affected stakeholders
Children

Harm types
Psychological, Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard


Alcohol: 8 in 10 young people are exposed to advertising on social media

2024-09-26
Le Figaro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions targeted algorithms used to promote alcohol content on social media, which can be reasonably inferred to involve AI systems for content recommendation and targeting. The exposure of young people to such advertising is linked to increased desire to consume alcohol, which is a health harm and social harm. The AI system's role in enabling this targeted exposure is pivotal in the chain of harm. Thus, this qualifies as an AI Incident due to indirect harm caused by AI-enabled targeted advertising promoting alcohol to minors.

Content glamorising alcohol to young people: a report denounces the role of social media

2024-09-27
Ouest France
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through social media platforms' content recommendation algorithms and influencer marketing strategies that use AI to target and promote alcohol-related content to young users. This AI involvement contributes to a societal harm—promotion of alcohol consumption among youth—which is a form of harm to communities. However, the article does not document a specific event or incident where this AI use directly caused harm, nor does it describe a plausible future harm event or a near-miss. Instead, it focuses on reporting findings from a study and discussing policy and enforcement issues, which aligns with providing complementary information about AI's societal impacts and governance responses rather than reporting a new AI Incident or Hazard.

Question of the day: alcohol advertising is rampant on social media. Does that shock you?

2024-09-26
Ouest France
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred from the mention of targeted algorithms used to promote alcohol content on social media. The harm involves promotion of alcohol to young people, which can lead to health and social harms. Since the article discusses ongoing exposure and lack of regulation but does not report a specific incident of harm caused directly by AI, it fits the definition of an AI Hazard, where AI use could plausibly lead to harm. There is no indication of a completed AI Incident or a complementary information update, nor is it unrelated to AI.

How social media promotes alcohol to young people

2024-09-28
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of social media algorithms that target and promote alcohol-related content to young users, contributing to increased alcohol consumption risks. The harm is indirect but materialized, as the article cites studies showing significant exposure and influence on youth behavior. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to health (harm to a group of people).

Ricard, Heineken, Aperol Spritz... alcohol is everywhere on social media, to the detriment of young people

2024-09-26
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly through the use of targeted algorithms on social media platforms that promote alcohol content to young users. However, the article does not report any direct or realized harm caused by these AI systems, nor does it describe a specific incident where harm occurred. Instead, it highlights a potential risk and ongoing exposure, which could plausibly lead to harm in the future (e.g., increased alcohol consumption among youth). Since no actual harm is reported, and the focus is on the potential for harm due to AI-driven content promotion, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Alcohol among young people: associations point the finger at social media

2024-09-26
RMC
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of social media algorithms that target and promote alcohol-related content to young users. This targeted promotion has directly contributed to increased exposure and desire to consume alcohol among youth, which constitutes harm to health and well-being. Although the article focuses on the societal and regulatory challenges, the AI system's role in amplifying harmful content is pivotal. Therefore, this qualifies as an AI Incident due to the realized harm linked to AI-driven content promotion.

Alcohol among young people: the role of social media under fire in France

2024-09-28
Doctissimo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of targeted algorithms on social media platforms to promote alcohol content to young people, which involves AI systems. The harm described is indirect and systemic—exposure to alcohol promotion leading to increased consumption desire and potential health risks among youth. There is no specific AI Incident (no direct or indirect harm caused by a particular AI system failure or misuse event) nor a clear AI Hazard (no plausible future harm from a new or emerging AI system). Instead, the article focuses on the broader societal and regulatory context, the insufficiency of current laws, and the challenges in enforcement. This aligns with the definition of Complementary Information, which includes societal and governance responses and contextual information about AI impacts without describing a new incident or hazard.