TikTok's 'Bold Glamour' AI Filter Linked to Mental Health Harms Among Youth

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok's 'Bold Glamour' AI-powered beauty filter, which realistically alters users' facial features, has sparked widespread concern over its psychological impact. Users, especially teenagers and young women, report increased insecurity, body dysmorphia, and pressure to seek cosmetic surgery, highlighting the filter's role in perpetuating harmful beauty standards and mental health issues.[AI generated]

Why's our monitor labelling this an incident or hazard?

The filter is an AI system that generates altered facial images in real time, influencing users' perception of themselves. The article highlights that this use of AI has directly led to psychological harm (a form of harm to health) among users, especially vulnerable individuals. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to people's health and well-being.[AI generated]
AI principles
Fairness; Human wellbeing; Safety; Transparency & explainability; Accountability; Respect of human rights

Industries
Media, social platforms, and marketing; Consumer services; Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers; Children; Women

Harm types
Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation; Recognition/object detection


Articles about this incident or hazard

"Bold Glamour": the TikTok filter that replaces cosmetic surgery (and fuels insecurities)

2023-03-01
BFMTV
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that generates altered facial images in real time, influencing users' perception of themselves. The article highlights that this use of AI has directly led to psychological harm (a form of harm to health) among users, especially vulnerable individuals. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to people's health and well-being.

"Bold Glamour", the new TikTok filter stirring controversy

2023-03-01
Ladepeche.fr
Why's our monitor labelling this an incident or hazard?
The filter is an AI system because it performs advanced image processing that alters user appearance in a way that is hard to detect. The event reports direct harm to mental health and well-being of users, especially young people, caused by the use of this AI filter. The harm is realized and ongoing, as evidenced by user complaints and public criticism. Hence, this is an AI Incident involving harm to health (mental health).

Bold Glamour: the new TikTok filter stirring controversy

2023-03-02
Les Numériques
Why's our monitor labelling this an incident or hazard?
The filter is an AI system as it performs real-time image modification based on user input. Its use has led to indirect harm to mental health and well-being, which falls under harm to persons. The article reports actual harm occurring through the filter's effects on users' self-esteem and mental health, thus qualifying as an AI Incident.

#BoldGlamour: everything you need to know about the TikTok filter accused of creating insecurities - Elle

2023-02-28
Elle
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies facial images in real time with high realism. Its use has led to concerns about harm to users' mental health and self-esteem, which constitutes harm to communities. Although the harm is psychological rather than physical, it fits within the definition of harm to communities. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

The rise of hyperrealistic filters on social media is raising alarm: "It should be illegal"

2023-03-03
7sur7
Why's our monitor labelling this an incident or hazard?
While the article discusses advanced AI-powered filters that modify facial features in a highly realistic manner, it does not report any actual harm or incident resulting from their use. There is no mention of injury, rights violations, or other harms directly or indirectly caused by these filters. The concerns expressed are about potential implications, but no specific incident or plausible immediate harm is described. Therefore, this is best classified as Complementary Information, providing context and societal response to AI developments in social media filters.

How does TikTok's Bold Glamour filter work?

2023-03-03
L'ADN
Why's our monitor labelling this an incident or hazard?
The filter is explicitly described as using AI (GANs) to generate altered facial images that are highly realistic and fixed to the user's face. The article documents realized psychological harms such as increased dysmorphia and distress among users, which are direct harms to health and communities. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant psychological harm and societal impact.

Why the TikTok "Bold Glamour" filter is causing so much concern

2023-03-04
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the "Bold Glamour" filter) that uses AI to alter facial images in real-time. The use of this AI system has directly led to harm in the form of negative impacts on mental health and well-being, including risks of dysmorphia and lowered self-esteem, which constitute harm to groups of people and communities. The article provides evidence of realized harm through user testimonies and expert opinions. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and harm to health and communities.

"Bold Glamour", the filter taking social media by storm but worrying professionals

2023-03-04
So Soir
Why's our monitor labelling this an incident or hazard?
The 'Bold Glamour' filter is an AI system (augmented reality filter) that modifies facial images. The article raises concerns about its psychological effects and potential to influence behavior (e.g., increased cosmetic surgery), but no direct or indirect harm caused by the AI system is reported as having occurred. The concerns are about possible societal and psychological impacts, which are not framed as realized harms or imminent risks but as professional worries and observations. Hence, the article fits best as Complementary Information, providing context and expert opinion on the broader implications of AI filter use rather than reporting an AI Incident or Hazard.

This TikTok filter worries professionals: "even with cosmetic surgery, you will never get this result"

2023-03-06
Toms Guide : actualités high-tech et logiciels
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the TikTok AI-based facial enhancement filter) whose use has directly led to psychological harm risks, including the potential triggering of dysmorphophobia, a recognized mental health disorder. The filter's realistic alterations create an unattainable image of perfection, which can harm users' mental health, especially young and vulnerable individuals. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to groups of people (psychological harm).

Social media: this beauty filter, undetectable on your face, is creating a buzz, and it is not without danger! - Voici

2023-03-07
Voici.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the beauty filter) whose use has directly led to psychological harm and social discrimination, which are forms of harm to health and communities. The article explicitly states that the filter increases users' complexes and promotes unrealistic beauty ideals, causing mental health risks and social harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Here's the impact "beauty" filters have on our brains

2023-03-03
Metro
Why's our monitor labelling this an incident or hazard?
While the filters are AI systems that alter images and influence user perception, the article focuses on the psychological and social consequences rather than a concrete AI Incident or Hazard. There is no specific event of harm caused by the AI filters described, nor a clear plausible immediate risk of harm directly linked to the AI system's malfunction or misuse. Therefore, this is best classified as Complementary Information, providing context and expert insight into the broader societal implications of AI aesthetic filters.

"Bold Glamour": why this TikTok filter that makes people beautiful can prove extremely dangerous - Grazia

2023-03-06
Grazia.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the "Bold Glamour" filter) that uses generative adversarial networks to alter images in a realistic way. The use of this AI system has directly led to psychological harm by promoting unrealistic beauty standards and negatively impacting users' self-esteem, particularly among young people. This fits the definition of an AI Incident as it causes harm to communities and individuals' health (mental health).

The problems with TikTok's controversial 'beauty filters'

2023-03-01
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (beauty filters using AI technology) and discusses its societal and psychological effects. However, it does not report any realized harm (such as injury, rights violations, or disruption) nor does it describe a specific event where harm is occurring or likely to occur imminently. Instead, it provides a broader commentary on the implications of such AI systems, which aligns with the definition of Complementary Information. There is no direct or indirect harm described, nor a plausible future harm event detailed, so it is not an AI Incident or AI Hazard. It is not unrelated because it clearly involves AI technology and its societal impact.

New 'bold glamour' TikTok filter blasted as 'psychological warfare and pure evil'

2023-02-28
Fox News
Why's our monitor labelling this an incident or hazard?
The TikTok 'bold glamour' filter is an AI system that modifies user images in a highly precise and unrealistic manner. Its use has directly led to psychological harm to individuals, particularly young users, by fostering unrealistic beauty standards and negatively impacting mental health. This constitutes harm to health and communities, fulfilling the criteria for an AI Incident. The article reports realized harm and societal concern, not just potential harm or general commentary, thus it qualifies as an AI Incident rather than a hazard or complementary information.

Why users are calling TikTok's 'Bold Glamour' filter problematic

2023-03-03
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'Bold Glamour' filter) that uses sophisticated AI to alter facial features realistically. The use of this AI system has led to psychological harm by reinforcing harmful beauty standards and negatively impacting users' self-image, particularly among young people. This harm to mental health and societal perceptions fits within the definition of harm to communities and individuals. Therefore, this qualifies as an AI Incident due to the realized psychological and social harms caused by the AI system's use.

Woman claims TikTok's new 'Bold Glamour' filter goes 'too far'

2023-02-28
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The TikTok filter is an AI system that modifies images in a realistic manner. The article highlights concerns about mental health impacts, specifically body dysmorphia, which is a recognized harm. However, the article does not report any direct or indirect harm caused by the filter's use, only potential or perceived risks and a public conversation. There is no evidence of an AI Incident (harm realized) or AI Hazard (plausible future harm) as defined. The main focus is on societal reaction and discussion, fitting the definition of Complementary Information.

TikTok face filters rack up millions of views while stirring up controversy

2023-02-28
ABC News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (face filters) that modify user appearances and have directly led to mental health harms such as psychological distress and poor body image perception. These harms fall under injury or harm to health and harm to communities. The involvement of AI is clear as the filters perform complex image retouching and transformation tasks. The harms are realized and ongoing, not merely potential. Therefore, this event meets the criteria for an AI Incident.

New 'Bold Glamour' AI TikTok filter 'should be illegal': 'I don't...

2023-02-28
New York Post
Why's our monitor labelling this an incident or hazard?
The TikTok 'Bold Glamour' filter is an AI system that generates hyperrealistic makeup effects on users' faces. The article provides evidence that the use of this AI filter has led to mental health issues, including body dysmorphia and negative self-esteem, which are forms of injury or harm to health. The harm is realized and ongoing, as users report distress and psychological impact. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm to persons' health and communities.

The TikTok 'bold glamour' filter is going viral for its wildly unrealistic beauty standard

2023-03-01
Mashable ME
Why's our monitor labelling this an incident or hazard?
The TikTok 'bold glamour' filter is an AI-powered real-time facial modification system that alters users' appearances to an unrealistic standard. This can be reasonably inferred as involving AI due to its real-time, adaptive facial reshaping capabilities. The harm described is indirect, relating to psychological and social impacts on users and communities, particularly young people, through the propagation of unrealistic beauty standards. This constitutes harm to communities and individuals' well-being, fitting the definition of an AI Incident. The event does not merely warn of potential harm but describes ongoing use and social consequences, thus qualifying as an AI Incident rather than a hazard or complementary information.

The TikTok 'Bold Glamour' Filter Is Going Viral for Its Wildly Unrealistic Beauty Standard

2023-03-01
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The filter is an AI system as it performs real-time facial modification using AI techniques. The event highlights concerns about the potential harm of setting unrealistic beauty standards, which could plausibly lead to psychological harm or social harm to communities. However, the article does not report any actual injury, rights violation, or other direct harm caused by the filter at this time. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has yet occurred.

TikTok's 'Bold Glamour' and 'Teenage Look' Filters Are Terrifying Its Audience

2023-03-03
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (GAN-based filters) used in TikTok's face-altering filters. While the filters cause discomfort or unease among users, there is no indication of actual harm such as health injury, rights violations, or other significant harms as defined in the framework. The article does not describe any incident of harm or plausible future harm but rather provides information about the AI technology and user reactions. Therefore, this is best classified as Complementary Information, as it provides context and understanding about AI use in social media filters without reporting an AI Incident or AI Hazard.

Why TikTok's Bold Glamour Filter Is So Controversial

2023-03-04
MakeUseOf
Why's our monitor labelling this an incident or hazard?
The TikTok Bold Glamour filter is an AI system that modifies facial features in a realistic way, which can plausibly lead to psychological harm (negative self-image) and social harm (deceptive use). The article focuses on these potential harms and controversies but does not describe any realized harm or incident. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but no direct or indirect harm has yet been reported.

New TikTok filter that retouches face can harm mental health, experts say

2023-03-02
TODAY.com
Why's our monitor labelling this an incident or hazard?
The TikTok filter is an AI system that modifies facial images, which can influence users' perceptions and mental health. While the article highlights concerns about potential harm to mental health and self-image, it does not describe any actual harm occurring or any malfunction or misuse of the AI system. The discussion is about possible future impacts and societal reactions, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard. There is no direct or indirect evidence of harm caused by the AI system at this time, only expert warnings and user opinions.

'This Is a Problem': A New Hyper-Realistic TikTok Beauty Filter Is Freaking People Out

2023-02-28
VICE
Why's our monitor labelling this an incident or hazard?
The TikTok beauty filter is an AI system that generates altered facial images. While it raises concerns about promoting unrealistic beauty standards and potentially affecting users' perceptions, the article does not report any realized harm such as psychological injury, discrimination, or rights violations. The concerns are more about societal impact and user discomfort, which do not meet the threshold for an AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, providing context on societal reactions to AI-generated content.

Warnings over filter being dubbed the 'most realistic' yet

2023-03-02
Metro
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that generates altered facial images in real-time. The article highlights concerns about the filter's potential to cause harm by negatively affecting users' self-esteem and mental health, especially those with body dysmorphia. However, the harm described is psychological and indirect, and while it is occurring, it is not framed as a direct violation of rights or physical harm. The article does not report a specific incident of harm caused by the AI system but discusses the broader societal impact and concerns. Therefore, this is best classified as Complementary Information, as it provides context and societal response to the use of AI-powered filters and their potential harms rather than reporting a discrete AI Incident or AI Hazard.

What is the 'Bold Glamour' beauty filter going viral on TikTok?

2023-03-01
PhillyVoice
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'Bold Glamour' filter) that uses AI to alter facial features in real time. While there are worries about its impact on mental health and body image, the article does not document any actual injury, violation of rights, or other harms that have occurred due to the filter's use. The concerns are about plausible future harm to mental health and societal well-being, but no specific incident of harm is reported. Therefore, this qualifies as an AI Hazard, as the filter's use could plausibly lead to psychological harm, but no direct harm has been established yet.

Explainer | TikTok's Bold Glamour filter is so good it might be bad for you. Here's why experts are concerned

2023-03-04
The Star
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter using generative AI) that modifies user images in a way that can cause psychological harm, a form of injury to health. Although no specific incident of harm is documented, the article presents credible concerns about the filter's impact on mental health and societal beauty standards, indicating ongoing harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of negative psychological effects and societal impacts on beauty norms. The article also references studies linking such filters to increased cosmetic procedures and body image disorders, supporting the classification as an AI Incident rather than a mere hazard or complementary information.

'Never felt uglier': Why experts fear new TikTok filter could spark mental health crisis

2023-03-04
7NEWS.com.au
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the TikTok filter) that modifies facial images using AI-based techniques. The use of this AI system has directly led to psychological harm to users, including lowered self-esteem and negative body image, which are recognized mental health harms. The article provides evidence of realized harm through user testimonies and expert concerns, fulfilling the criteria for an AI Incident. The harm is indirect but clearly linked to the AI system's use and its realistic facial modifications that set unattainable beauty standards.

TikTok's Ultra-Realistic "Bold Glamour" Filter Is Coming For Your Teen's Self-Esteem

2023-03-01
Scary Mommy
Why's our monitor labelling this an incident or hazard?
The TikTok filter is an AI system that modifies facial images in real-time to produce ultra-realistic beauty enhancements. The article provides evidence that the filter's use has led to mental health harms among young users, including lowered self-esteem and increased risk of depression and anxiety. These harms are directly linked to the AI system's outputs influencing users' perceptions and mental health. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm to a group of people (mental health harm to young women).

The 'reality' of TikTok's Bold Glamour filter proves we have a major problem on our hands -- here's why

2023-03-03
Marie Claire
Why's our monitor labelling this an incident or hazard?
The Bold Glamour filter is an AI system (augmented reality filter using AI to realistically modify facial features). Its use has directly led to significant psychological harms to individuals and communities, such as low self-esteem, anxiety, and body dysmorphia, which qualify as harm to health and harm to communities. The article provides evidence and expert opinions linking the AI system's outputs to these harms. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.

What is TikTok's 'Bold Glamour' make-up filter and why is it controversial?

2023-02-28
indy100.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the TikTok 'Bold Glamour' filter) that generates realistic makeup effects on users' faces. The use of this AI system has indirectly led to psychological harm to users, such as lowered self-confidence and mental distress, which falls under harm to health (a). The controversy and user testimonies indicate realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident due to the AI system's use causing direct psychological harm to individuals.

Is TikTok's 'bold glamour' filter setting an impossible standard for beauty? - Softonic

2023-03-02
Softonic
Why's our monitor labelling this an incident or hazard?
The filter uses AI-based image processing to modify facial features in real time, which qualifies it as an AI system. The event describes concerns about the filter setting impossible and dangerous beauty standards, which can be considered harm to communities through psychological and social effects. Although no direct physical injury is reported, the indirect harm to mental health and societal well-being is plausible and recognized. Therefore, this event constitutes an AI Incident due to the indirect harm caused by the AI system's use.

TikTok's new filter sparks controversy over unrealistic beauty standards - Softonic

2023-03-02
Softonic
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that processes facial images to generate ultra-realistic altered appearances. Its use has directly led to psychological harm to users, including emotional distress and negative impacts on self-esteem, which qualifies as harm to persons. Therefore, this constitutes an AI Incident due to the realized harm caused by the AI system's outputs affecting users' mental health and well-being.

'Bold Glamour' filter ignites TikTok: 'It should be illegal'

2023-03-02
EL PAÍS English Edition
Why's our monitor labelling this an incident or hazard?
The filter is explicitly described as using artificial intelligence to modify facial features in real time with high realism. The article discusses the psychological harm caused by the filter's use among teenagers, including insecurity and mental health issues, which are direct harms to health. Therefore, the event involves an AI system whose use has directly or indirectly led to harm to a group of people (teenagers), fitting the criteria for an AI Incident.

How TikTok is triggering the demand for cosmetic surgeries among young women

2023-03-04
Bullfrag
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using facial recognition and machine learning to alter facial features in a realistic way. The use of this AI system has directly led to psychological harm (injury to health) among young women, including increased insecurity, body dysmorphia, and demand for cosmetic surgery, which are harms to health and communities. The filter also perpetuates biased and sexist beauty standards, which can be considered harm to communities and violation of rights related to dignity and non-discrimination. These harms are realized and documented by multiple studies cited in the article. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

TikTok users say 'bold glamour' filter is 'terrifyingly realistic' - Internewscast

2023-02-28
Internewscast
Why's our monitor labelling this an incident or hazard?
The TikTok 'bold glamour' filter is an AI system that modifies facial features realistically. The article reports that users feel unsettled and negatively affected mentally by the filter's effects, indicating realized psychological harm. The filter's role is pivotal in creating unrealistic beauty standards and contributing to mental health issues such as lowered self-esteem and body dissatisfaction. Therefore, this event meets the criteria for an AI Incident due to indirect harm to health and communities caused by the AI system's use.

TikTok's 'beautifying' filter is driving up the number of cosmetic surgeries

2023-03-06
PULZO
Why's our monitor labelling this an incident or hazard?
The AI system involved is the facial recognition and modification filter on TikTok, which uses AI to alter facial features. The harm arises indirectly as the AI's outputs influence users' self-perception, leading to increased cosmetic surgeries and treatments, which can have health risks and social implications. This fits the definition of an AI Incident because the AI system's use has indirectly led to harm to people (psychological and physical health risks). There is no indication that this is merely a potential risk (hazard) or a complementary information update; the article reports realized effects on behavior and health-related decisions.

TikTok: the problems beauty filters can cause

2023-03-04
El Universal
Why's our monitor labelling this an incident or hazard?
While the article highlights valid concerns about AI beauty filters potentially affecting mental health, reinforcing harmful beauty standards, and enabling misuse, it does not describe a concrete event where harm has occurred due to the AI system's development, use, or malfunction. The discussion is more about societal implications and risks rather than a specific AI Incident or Hazard. Therefore, it fits best as Complementary Information, providing context and raising awareness about AI's societal effects without reporting a direct or plausible immediate harm event.

"Put this filter on me": how TikTok is driving up demand for cosmetic surgery among young women

2023-03-04
magnet.xataka.com
Why's our monitor labelling this an incident or hazard?
The TikTok filter uses AI-based facial recognition and modification technology to alter users' appearances in a realistic way. The article provides evidence from studies and expert observations that the use of such filters has directly led to psychological harms, including increased insecurity, body dysmorphia, and a rise in cosmetic surgery demand among young people. These harms fall under injury or harm to health (mental health) and harm to communities (psychological well-being). Therefore, the event meets the criteria for an AI Incident due to the AI system's use directly leading to significant harm.

TikTok: the problems the app's controversial "beauty filters" can cause

2023-03-05
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article centers on the societal and psychological concerns related to AI beauty filters but does not describe a specific AI Incident or AI Hazard event. There is no direct or indirect harm reported as having occurred due to the AI system's malfunction or misuse, nor is there a clear imminent risk of harm from a particular event. The discussion is more about potential and ongoing societal issues and concerns, making this a case of Complementary Information that provides context and understanding about AI's impact on society and mental health rather than reporting a discrete incident or hazard.

Bold Glamour, the extreme TikTok beauty filter that sparked controversy: "It shouldn't be legal"

2023-03-06
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies facial features to create a stylized appearance. The reported harm is psychological, affecting users' self-perception and potentially causing insecurity and harm to identity and diversity, which can be considered harm to communities or individuals' well-being. Since the harm is occurring as users experience these effects, this qualifies as an AI Incident under the definition of harm to communities and individuals' health (mental health).

TikTok: the problems with using beauty filters on social media

2023-03-04
DEBATE
Why's our monitor labelling this an incident or hazard?
The TikTok beauty filter is an AI system that modifies user images based on learned beauty standards. The article describes realized psychological harms (emotional distress, lowered self-esteem) and social harms (reinforcement of racial biases and unrealistic beauty ideals). Since these harms are directly linked to the use of the AI filter, this qualifies as an AI Incident under the framework, specifically harm to health (psychological/emotional) and harm to communities (due to racial bias).

How TikTok's new extreme beauty filter is affecting Gen Z's self-esteem

2023-03-04
HOLA Mexico
Why's our monitor labelling this an incident or hazard?
The TikTok 'Bold Glamour' filter is an AI system that alters user images, which can influence users' perceptions and potentially affect mental health. However, the article does not describe any actual harm or incident caused by the AI system, nor does it describe a plausible future harm event. It mainly provides expert opinions and general observations about the psychological impact of such filters. This fits the definition of Complementary Information, as it provides context and understanding about AI's societal effects without reporting a specific AI Incident or AI Hazard.

A new TikTok filter revives the controversy over unattainable beauty ideals for teenagers and young people

2023-03-07
Diario EL PAIS Uruguay
Why's our monitor labelling this an incident or hazard?
The filter uses AI to modify images, which qualifies it as an AI system. The concerns raised relate to potential psychological harm and societal impact, which could plausibly lead to harm (AI Hazard). However, since no actual harm or incident is reported as having occurred, and the article mainly discusses the broader implications and debates around the filter, it does not meet the threshold for an AI Incident. It is also not merely complementary information because the main focus is on the potential harms and societal debate rather than updates or responses to a past incident. Therefore, the event is best classified as an AI Hazard due to the plausible risk of harm to young users' mental health and well-being from the use of this AI filter.

TikTok's controversial beauty filter: how to use it and why it is drawing heavy criticism

2023-03-06
Caracol Radio
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: the filter uses AI to analyze and modify facial images in videos. Its use has harmed individuals' mental health and self-perception, affecting a broad group of users. Although no physical injury is reported, the psychological harm and the impact on self-acceptance constitute harm to the health of persons under the definition of an AI Incident. The event therefore qualifies as an AI Incident on the basis of realized psychological harms linked to the filter's use.

Controversy on social media: is it OK to use "beauty filters"? The dangers of overuse

2023-03-05
iProUP
Why's our monitor labelling this an incident or hazard?
The article centers on the societal and psychological effects of AI-based beauty filters, which are AI systems that modify images to enhance appearance. While it outlines plausible harms such as impacts on self-esteem, reinforcement of biased beauty standards, and potential misuse for harmful purposes, it does not describe a concrete event where harm has occurred or a specific malfunction leading to harm. Therefore, it does not meet the criteria for an AI Incident. It also does not focus on a specific credible risk event or near miss that would qualify as an AI Hazard. The article is primarily an analysis and discussion of the broader implications and concerns related to these AI systems, making it Complementary Information that enhances understanding of AI's societal impact.

TikTok: the problems with the app's controversial "beauty filters"

2023-03-03
futbolred.com
Why's our monitor labelling this an incident or hazard?
While the article clearly involves AI systems (beauty filters using AI to alter images), it does not describe a specific AI Incident where harm has directly or indirectly occurred, nor does it report a particular AI Hazard event with plausible imminent harm. Instead, it discusses ongoing societal concerns and potential risks associated with these AI systems, reflecting broader issues and debates. Therefore, it fits best as Complementary Information, providing context and insight into the implications of AI beauty filters rather than documenting a discrete incident or hazard.

TikTok: the problems the app's controversial "beauty filters" can cause

2023-03-06
El Mostrador
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, as beauty filters on TikTok use AI to modify images and videos. The concerns raised relate to potential harms such as mental health issues, reinforcement of biased beauty standards, and misuse for sexualization of minors. However, these harms are discussed in a general, speculative, or societal context without a specific incident of realized harm or a concrete event causing harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it serves as complementary information providing context and societal implications of AI technology in beauty filters.

The dangers TikTok's "beauty filters" can unleash

2023-03-04
El Diario Nueva York
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (beauty filters using AI technology) that have directly led to psychological harm and societal issues, including lowered self-esteem and increased risk of harmful behaviors. The harms are realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons and communities as defined in the framework.

La Nación / Viral TikTok filter causes controversy on social media

2023-03-04
La Nación
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the facial modification filter using AI algorithms) and discusses its use and potential psychological harm to users, especially youth. However, the article does not report a specific event where harm has directly or indirectly occurred due to the AI system's use. Instead, it presents concerns and expert opinions about plausible psychological harm and societal impact. Therefore, this qualifies as Complementary Information, as it provides context and discussion about AI's societal effects without describing a concrete AI Incident or an imminent AI Hazard.

TikTok's 'Bold Glamour' and 'Teenage Look' filters are terrifying their audience - Notiulti

2023-03-03
Notiulti
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI involvement (GANs) in TikTok filters, confirming the presence of AI systems. However, it does not describe any realized harm or incident resulting from these filters, nor does it highlight a credible risk of future harm. The main focus is on explaining the technology and user reactions, with no mention of violations, injuries, or disruptions. This fits the definition of Complementary Information, as it enhances understanding of AI applications and their societal impact without reporting an incident or hazard.

Bold Glamour, the "fake" beauty filter going viral on TikTok alarms experts

2023-03-12
Tgcom24
Why's our monitor labelling this an incident or hazard?
The AI system (the generative beauty filter) is explicitly described as applying real-time modifications to users' faces, which fits the definition of an AI system. The use of this system has directly led to psychological harms such as insecurity, lowered self-esteem, and potential mental health issues among young users, which qualifies as injury or harm to health (a). The article provides evidence of realized harm through expert opinions and user experiences. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and harm to users' mental health.

Multiple bodies syndrome: the teenage trend of altering one's image on social media - Lifestyle

2023-03-09
ANSA.it
Why's our monitor labelling this an incident or hazard?
While the article highlights the societal and psychological impact of AI-based image filters, it does not describe a concrete AI Incident or AI Hazard. There is no mention of injury, rights violations, or other harms directly caused by the AI systems, nor a plausible future harm event. The content is primarily informative and contextual, focusing on the cultural phenomenon and research insights rather than a specific harmful event or credible risk scenario. Therefore, it fits best as Complementary Information, providing context and understanding about AI's role in social media image modification and its broader social implications.

Multiple bodies syndrome: 50% of under-14s use social media filters to change their own image

2023-03-09
HuffPost Italia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enhanced filters used by adolescents on social media, which qualifies as AI systems. The research highlights potential psychological and social harms related to identity and self-image, but these are presented as observations and concerns rather than documented incidents of harm caused by AI. There is no report of injury, rights violations, or other harms directly linked to AI system malfunction or misuse. Nor does it describe a plausible future harm event or risk scenario. The main focus is on research findings and societal implications, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

You but better? Not so fast. Unpacking TikTok's controversial 'Bold Glamour' filter

2023-03-12
Yahoo News
Why's our monitor labelling this an incident or hazard?
The 'Bold Glamour' filter is an AI system that modifies facial images using machine learning technology. Its use has led to psychological harms such as unhealthy body image fixation, worsened depression, and anxiety, as described by mental health professionals in the article. These harms fall under injury or harm to health (mental health) and harm to communities (through propagation of unrealistic beauty standards). Since the harm is occurring as a direct consequence of the AI system's use, this qualifies as an AI Incident. The article provides evidence of realized harm rather than just potential harm, and the AI system's role is pivotal in causing these harms.

Does the 'Bold Glamour' filter push unrealistic beauty standards? TikTokkers think so

2023-03-10
NPR
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI-powered filter modifies faces to create unrealistic, idealized images that can distort users' perceptions of their own appearance, potentially exacerbating mental health issues and feelings of alienation. This is a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident involving harm to communities and individuals' health. The filter's advanced AI capabilities are central to the harm described, and the harm is realized, not merely potential.

TikTokers slam viral AI filter Bold Glamour and issue serious 'life warning'

2023-03-10
The Sun
Why's our monitor labelling this an incident or hazard?
The viral AI filter Bold Glamour uses AI to generate hyper-realistic facial modifications that influence users' self-image negatively, as reported by psychologists and cosmetic surgeons. This constitutes harm to health (mental health and well-being) and harm to communities (societal impact on body image norms). Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.

Gabrielle Union turns her back at Vanity Fair party in stand against Bold Glamour TikTok filter with beauty brand Dove

2023-03-13
The Sun
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the Bold Glamour TikTok filter) that uses AI-generated imagery to alter users' appearances. While the filter's use could plausibly lead to psychological harm such as low self-esteem and appearance pressures, the article focuses on raising awareness and social response rather than reporting a concrete incident of harm. Therefore, this is best classified as Complementary Information, as it provides context and societal response to potential AI-related harms without describing a realized AI Incident or a direct AI Hazard event.

Warnings over AI and toxic beauty on TikTok

2023-03-11
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the generative AI-powered Bold Glamour filter) whose use is widespread and has raised concerns about psychological harm and potential misuse. However, the article does not document any direct or indirect harm that has already occurred due to the AI system. Instead, it highlights plausible future harms such as misuse for deepfakes and scams. Therefore, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms, but no concrete incident has been reported yet.

Picture Perfect: Why TikTok's latest 'Bold Glamour' beauty filter has been branded toxic

2023-03-11
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter) that uses machine learning to alter facial images realistically. The use of this AI system has directly led to harm, specifically mental health and self-esteem issues among users, particularly young people, as documented by experts and research cited in the article. This harm falls under the category of injury or harm to the health of persons (mental health). The article provides evidence of realized harm, not just potential harm, making it an AI Incident rather than a hazard or complementary information. The AI system's role is pivotal in causing this harm because the filter's realistic alterations set unrealistic beauty standards that negatively affect users' psychological well-being.

Tiktok's 'Bold Glamour' filter sparks debate over toxic beauty standards

2023-03-10
The News International
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter using generative AI) and discusses its use and societal impact. While it highlights concerns about psychological harm and the potential for misuse (e.g., deepfakes, scams), no actual harm or incident is reported as having occurred. The article focuses on the plausible risks and debates surrounding the technology rather than a concrete harmful event. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet materialized according to the article.

Toxic beauty myths dog TikTok's Bold Glamour filter - Taipei Times

2023-03-10
Taipei Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter) that uses generative AI to modify facial images in real time. The article describes societal concerns about psychological harms related to beauty standards, which are indirect harms linked to the AI system's use. However, these harms are not documented as having directly occurred because of the filter, and the article mainly discusses potential misuse scenarios and broader societal implications. The event therefore fits best as Complementary Information, providing context on AI's societal impact and potential risks without reporting a specific AI Incident or AI Hazard.

TikTok's new Bold Glamour filter has gone viral for simulating 'mainstream beauty' but it's a reminder of how far AI has to go to integrate 'faces of color'

2023-03-11
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter) that uses generative AI technology to modify facial images. While there are concerns about potential psychological harm and social implications related to beauty standards and self-esteem, the article does not document any realized harm or incident resulting from the AI's use. The discussion centers on the broader societal impact and the need for improvement, which aligns with complementary information about AI's societal effects rather than an incident or hazard. Therefore, the classification is Complementary Information.

Critics Complain About New Hyperrealistic 'Bold Glamour' TikTok Filter: 'Blurs The Lines Of Reality' - Conservative Angle

2023-03-10
Brigitte Gabriel
Why's our monitor labelling this an incident or hazard?
The filter uses a generative adversarial network (GAN), an AI system, to create realistic facial modifications. The widespread use of this filter has led to criticism that it harms users' mental health by distorting beauty perceptions and self-worth, which constitutes harm to communities and individuals' well-being. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.

'Awaken the longing': Are beauty filters making you mentally ill? Influencer Diaz will ban them

2023-03-11
Aspetuck News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour beauty filter) that digitally alters users' facial features to conform to unrealistic beauty standards. This use of AI has directly led to psychological harm, such as heightened longing, dissatisfaction with self-image, and mental health issues among young people, as described by a psychiatrist and affected individuals. The harm is social and psychological, affecting individuals' and communities' well-being, and fits the definition of an AI Incident under harm to health and harm to communities. This therefore qualifies as an AI Incident rather than a hazard or complementary information.

TikTok Has A New 'Bold Glamour' AI-Powered Filter, Here Are The Risks

2023-03-12
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter) that uses AI to modify facial images in a highly realistic manner. The article focuses on potential harms related to mental and emotional health and societal impacts, which could plausibly arise from widespread use of such a filter. Since no actual harm has been reported yet, but the risks are credible and foreseeable, this qualifies as an AI Hazard. The article does not describe a realized incident or direct harm, nor is it merely a product announcement without risk discussion. Therefore, the classification is AI Hazard.

Gabrielle Union turns her back at Vanity Fair party in stand against Bold Glamour TikTok filter with beauty brand Dove

2023-03-13
The Scottish Sun
Why's our monitor labelling this an incident or hazard?
The AI system (Bold Glamour filter) is explicitly mentioned and uses AI to generate altered facial images. The concerns raised relate to plausible psychological harm (low self-esteem, appearance pressures) that could arise from widespread use of such filters. Since no actual harm or incident is reported, but there is a credible risk of harm, this qualifies as an AI Hazard. The article primarily discusses the potential negative impact and social response rather than a concrete incident or legal action, so it is not an AI Incident or Complementary Information. Therefore, the classification is AI Hazard.

This TikTok Filter Has Sparked Warnings Over AI and Toxic Beauty Standards

2023-03-10
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter using generative AI) and discusses its use and potential harms. However, the harms described are currently speculative or societal concerns rather than documented incidents of harm. The article raises plausible future risks such as psychological harm and misuse for deception, which fits the definition of an AI Hazard. There is no direct or indirect evidence of actual harm having occurred yet, so it cannot be classified as an AI Incident. The article is not merely complementary information because it focuses on the potential harms and societal implications of the AI system rather than updates or responses to past incidents. Therefore, the appropriate classification is AI Hazard.

Does TikTok's Bold Glamour filter harm users' mental health?

2023-03-15
CBS News
Why's our monitor labelling this an incident or hazard?
The Bold Glamour filter is an AI system that modifies user images using artificial intelligence. The article does not report any realized harm but highlights credible concerns from experts about potential emotional and psychological harm, particularly to adolescents. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm to health (mental health) of individuals or groups. There is no indication of an actual incident or realized harm yet, nor is the article primarily about governance or responses, so it is not Complementary Information. Therefore, the event is best classified as an AI Hazard.

'Ridiculously narrow version of beauty': Shelly Horton hits back against filter

2023-03-16
honey.nine.com.au
Why's our monitor labelling this an incident or hazard?
The TikTok Bold Glamour filter is an AI system that modifies facial features to a specific beauty ideal. The article expresses concern about the filter's impact on users' self-esteem and mental health, particularly among children and teenagers, which could plausibly lead to harm. However, no concrete harm or incident is described as having occurred. The discussion is about potential psychological harm and societal impact, making this a plausible future risk rather than a realized incident. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Are Beauty Filters Actually That Bad for My Mental Health?

2023-03-15
The Cut
Why's our monitor labelling this an incident or hazard?
The article centers on the psychological and social effects of AI-based beauty filters used on social media platforms like TikTok. While it acknowledges concerns about mental health impacts, especially for vulnerable groups, it does not describe any concrete AI Incident (harm realized) or AI Hazard (plausible future harm event). Instead, it provides expert insights and recommendations for mental health resilience, which fits the definition of Complementary Information as it enhances understanding of AI's societal impacts without reporting a new harm or hazard.

Why TikTok's Controversial Bold Glamour Filter Has Gabrielle Union, Dove and Therapists Concerned About Its Artificial Beauty Standards

2023-03-16
WWD
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Bold Glamour filter) that modifies user images in a way that could plausibly lead to psychological harm, such as lowered self-esteem and emotional distress. However, the article primarily reports concerns, opinions, and advocacy regarding potential negative impacts rather than documenting actual harm or incidents caused by the AI system. There is no description of a specific event where the AI system's use directly or indirectly caused harm. Therefore, this is best classified as Complementary Information, as it provides context and societal response to the AI system's impact rather than reporting an AI Incident or AI Hazard.

We forked out thousands on surgery to look like social media filters

2023-03-15
The Irish Sun
Why's our monitor labelling this an incident or hazard?
The AI-powered filters are explicitly mentioned and are central to the event. Their use has led to real psychological harm (body dysmorphia, depression) and physical harm (cosmetic surgeries with associated risks). The article documents actual cases where individuals have undergone multiple surgeries inspired by AI filter appearances, showing realized harm. Therefore, this qualifies as an AI Incident due to indirect harm to health caused by the AI system's use.