TikTok AI 'Chubby Filter' Sparks Outrage Over Body Shaming

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok's AI-driven 'chubby filter', which makes users appear heavier, has sparked controversy. While some users share the altered images for fun, critics, including influencer Sadie, condemn the filter for perpetuating body shaming and a toxic diet culture and potentially contributing to eating disorders. TikTok has yet to comment on the issue. [AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system involved is the 'chubby filter' on TikTok, which uses AI to modify images. Its use has caused harm by promoting fatphobia, reinforcing harmful stereotypes, and negatively impacting users' mental health, as reported by multiple users and experts. This harm to individuals' psychological well-being and the broader community's social environment fits the definition of harm to communities and health under the AI Incident criteria. The harm is realized and ongoing, not merely potential, so it is not a hazard or complementary information. [AI generated]
AI principles
Human wellbeing; Safety; Accountability; Transparency & explainability; Fairness

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

Business function
Other

AI system task
Content generation; Recognition/object detection


Articles about this incident or hazard

The TikTok backlash against the filter that makes people look fat

2025-03-21
agazeta.com.br
Why's our monitor labelling this an incident or hazard?
The AI system involved is the 'chubby filter' on TikTok, which uses AI to modify images. Its use has caused harm by promoting fatphobia, reinforcing harmful stereotypes, and negatively impacting users' mental health, as reported by multiple users and experts. This harm to individuals' psychological well-being and the broader community's social environment fits the definition of harm to communities and health under AI Incident criteria. The harm is realized and ongoing, not just potential, so it is not a hazard or complementary information.

The TikTok backlash against the filter that makes people look fat - BBC News Brasil

2025-03-21
BBC
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as the 'chubby filter' that uses AI to modify images. Its use has directly led to harm in the form of psychological distress, reinforcement of harmful stereotypes, and potential contribution to eating disorders, which are harms to health and communities. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities. The article does not merely discuss potential harm or societal responses but reports realized harm from the AI system's outputs.

The TikTok backlash against the filter that makes people look fat

2025-03-21
Terra
Why's our monitor labelling this an incident or hazard?
The article describes a viral AI filter on TikTok that alters images to make people look heavier. Users and experts report that this filter causes psychological harm by reinforcing negative stereotypes and body shaming, which can lead to mental health issues and social harm. The AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves violation of rights to dignity and mental health, thus meeting the definition of an AI Incident.

The TikTok backlash against the filter that makes people look fat

2025-03-21
jornalfloripa.com.br
Why's our monitor labelling this an incident or hazard?
The AI system involved is the 'chubby filter' that uses AI to modify images. Its use has directly led to harm by perpetuating harmful stereotypes, causing emotional distress, and potentially contributing to mental health issues like eating disorders. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and individuals' health. The article details realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the harm is clearly articulated and linked to the AI system's use.

The TikTok backlash against the filter that makes people look fat

2025-03-21
O POVO
Why's our monitor labelling this an incident or hazard?
The AI system in question is the TikTok filters that use AI to modify appearance. The use of the 'chubby filter' has directly led to mental health harms and social harms as reported by users, fulfilling the criteria for harm to health and harm to communities. The filter's recommendations and algorithmic promotion of such content have caused psychological distress and reinforced harmful societal norms, which qualifies as an AI Incident under the framework. The harm is realized and not merely potential, and the AI system's role is pivotal in generating and promoting the harmful content.

Filter that makes people look fatter goes viral and sparks outrage on TikTok

2025-03-21
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby filter') that uses AI to alter images. The use of this AI system has led to social harm, including the reinforcement of harmful stereotypes, potential psychological harm related to body image, and the promotion of toxic diet culture and eating disorders. These harms fall under harm to communities and health (a and d). Since the harm is occurring as a result of the AI system's use, this qualifies as an AI Incident. There is no indication that the article is merely about potential harm or a response to past incidents, so it is not a hazard or complementary information.

The TikTok backlash against the fattening filter - 21/03/2025 - Equilíbrio - Folha

2025-03-21
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The AI system involved is the 'chubby filter' that uses AI to modify images. Its use has directly led to psychological harm and social harm by promoting fatphobia and toxic diet culture, which are forms of harm to communities and individuals' mental health. The article provides multiple user testimonies and expert opinions confirming these harms have occurred. The AI system's development and use are central to the event, and the harms are clearly articulated and realized, meeting the criteria for an AI Incident.

After controversy, TikTok removes filter that made users look fatter

2025-03-24
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies user images to change physical appearance. Its use has directly led to harm in the form of negative impacts on users' mental health and reinforcement of harmful societal beauty norms, which can be considered harm to communities and individuals' well-being. Therefore, this qualifies as an AI Incident. The company's removal of the filter and content moderation are responses to this harm but do not change the classification of the event as an incident.

The TikTok backlash against the filter that makes people look fat

2025-03-22
TNH1
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies user images. Its use has caused social harm by promoting fatphobia and contributing to a toxic diet culture, which experts warn can lead to eating disorders and body dissatisfaction. The harm is indirect but clearly linked to the AI system's use on a large social platform, affecting communities and individuals' well-being. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to communities and health.

The TikTok backlash against the filter that makes people look fat

2025-03-21
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as an AI-powered filter that modifies images to make people appear heavier. The use of this filter has directly led to psychological harm and social harm, including body shaming, fatphobia, and potential triggers for eating disorders, as reported by multiple users and experts. These harms fall under injury or harm to health (mental health) and harm to communities. The event involves the use of the AI system and its outputs causing real, realized harm, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

After controversy, TikTok holds back filter that makes people look fatter in photos

2025-03-24
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The AI system involved is the image filter that uses AI to alter photos. The harm is indirect and social/psychological, relating to potential harm to communities and individuals' mental health due to body image issues and possible promotion of toxic diet culture. Since the filter's use has already caused discomfort and social harm, and the platform is taking mitigation steps, this qualifies as an AI Incident involving harm to communities and health. The event is not merely a product launch or general news but involves realized harm and platform response.

Filter that makes people look fat on TikTok sparks outrage online

2025-03-21
O Liberal
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies images to alter appearance. Its use has directly led to psychological harm and social harm by reinforcing negative stereotypes and contributing to mental health issues, which fits the definition of an AI Incident involving harm to health and communities. The article reports realized harm rather than potential harm, so it is not a hazard. It is not merely complementary information or unrelated news because the harm is clearly articulated and linked to the AI system's use.

TikTok under fire over the AI 'Chubby' filter that distorts bodies

2025-03-22
CNET France
Why's our monitor labelling this an incident or hazard?
The AI system (the 'Chubby' filter) is explicitly mentioned and used to alter body images in videos, which has caused harm to individuals' body image and mental health, particularly among vulnerable groups like adolescents. This harm falls under injury or harm to health and harm to communities. The filter's deployment and use have directly led to these harms, fulfilling the criteria for an AI Incident. The company's mitigation measures are responses to the incident, not the main focus of the article, so this is not Complementary Information.

TikTok: Users call for a ban on the 'fat filter' - BBC News Afrique

2025-03-22
BBC
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') modifies images to simulate weight gain, which has caused users to experience body shaming and negative mental health effects. Experts warn about its contribution to toxic diet culture and eating disorders, indicating harm to health and communities. The harm is directly linked to the AI system's use on the platform, fulfilling the criteria for an AI Incident. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in causing this harm.

'Chubby filter': this fatphobic filter is sparking outrage on TikTok

2025-03-25
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The 'chubby filter' is an AI system that alters images to add body weight. Its use has directly led to social harm by promoting fatphobia, stigmatization, and body image issues, which are harms to communities and individuals' well-being. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused significant, clearly articulated harm.

From blackface to fat-shaming: what's the problem with TikTok filters?

2025-03-25
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image-altering filters using AI) whose use has directly led to social harms including discrimination (blackface filter) and mental health issues (body image filters). The harms are realized and ongoing, with the filters spreading on the platform and causing indignation and harm to communities. TikTok's banning of certain filters is a response but does not negate the incident. Therefore, this is an AI Incident due to realized harm caused by AI system use.

What's the problem with TikTok filters?

2025-03-26
Le Journal de Québec
Why's our monitor labelling this an incident or hazard?
The filters described are AI systems that generate altered images of users, such as the "Si j'étais noir" ("If I were Black") filter that darkens skin tone, which is considered a form of digital blackface and is discriminatory. The use of these filters has caused social harm by promoting racist stereotypes and negatively impacting users' mental health, especially among adolescents. The harms are realized and ongoing, fulfilling the criteria for an AI Incident. The article discusses direct harms caused by the AI filters' use, not just potential risks or responses, so the classification is AI Incident.

TikTok removes controversial 'chubby' filter after backlash

2025-03-25
INQUIRER.net USA
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies images to simulate weight gain. Its use has directly led to emotional harm to users and communities by reinforcing negative stereotypes and body shaming, which constitutes harm to communities and individuals' well-being. Therefore, this qualifies as an AI Incident. The platform's response to remove the filter and restrict its visibility is a mitigation measure but does not change the fact that harm occurred.

Videos of AI 'chubby filter' removed from TikTok after critics call out body shaming

2025-03-25
NBC News
Why's our monitor labelling this an incident or hazard?
An AI system (the AI-driven 'chubby' filter) was used in a way that led to harm related to body image and stigmatization, which can be considered harm to communities and individuals' mental health. The filter's use perpetuated fatphobia and negative body standards, contributing to psychological harm. TikTok's removal of the filter and content moderation are responses to this harm. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident.

Viral 'chubby filter' videos removed from TikTok after outcry over body shaming

2025-03-27
The Independent
Why's our monitor labelling this an incident or hazard?
The 'chubby filter' is an AI system that modifies images to change body size. Its use caused harm to communities by promoting body shaming and negative social attitudes towards larger bodies, which is a form of harm to communities under the AI Incident definition. The platform's removal of the filter and content moderation are responses to this realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in social media content.

TikTok and CapCut's Viral 'Chubby Filter' Receives Massive Backlash

2025-03-29
WRIF Rocks Detroit
Why's our monitor labelling this an incident or hazard?
The 'Chubby Filter' is an AI system that modifies images/videos to change body appearance. Its use has indirectly led to harm by promoting negative body image and potentially worsening mental health issues, which fits the definition of harm to groups of people (a). The removal of the filter by CapCut after backlash indicates recognition of this harm. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

TikTok and CapCut's Viral 'Chubby Filter' Receives Massive Backlash

2025-03-28
Sunny 94.3
Why's our monitor labelling this an incident or hazard?
The filter is an AI system (image processing with AI-based modification). The backlash is about potential social harm (body image issues), but no direct or indirect harm event is described. The article focuses on public disapproval and advocacy concerns, which fits the definition of Complementary Information rather than an Incident or Hazard. There is no indication of realized harm or a credible risk of harm leading to an AI Incident or AI Hazard classification.

'Sick' filter that makes users appear fat is pulled after backlash on TikTok: 'Fuels toxic diet culture'

2025-03-23
Yahoo
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies images to produce a specific visual effect. Its deployment and use led to direct harm by reinforcing negative stereotypes and contributing to toxic diet culture, which harms individuals' mental health and body image. The removal of the filter and content moderation actions by TikTok confirm that harm was recognized and addressed. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use.

TikTokers call for 'chubby filter' to be banned

2025-03-21
BBC
Why's our monitor labelling this an incident or hazard?
The 'chubby filter' is an AI system that modifies images to simulate weight gain. Its use on TikTok has led to social harms, including body shaming and potential exacerbation of eating disorders, which are harms to health and communities. These harms are directly linked to the AI system's outputs and use, fulfilling the criteria for an AI Incident. The article reports actual use and social consequences, not just potential risks, so it is not merely a hazard or complementary information.

Netizens call for a ban on the 'chubby filter' on TikTok, here's why - The Times of India

2025-03-21
The Times of India
Why's our monitor labelling this an incident or hazard?
The 'chubby filter' is an AI system that modifies user images to simulate weight gain. Its use has directly led to harm in the form of mental health issues and body image dissatisfaction among users, which qualifies as harm to health and harm to communities. The article documents realized harm through user reactions and expert opinions linking the filter to eating disorders and negative societal impacts. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

TikTokers call for 'chubby filter' to be banned

2025-03-21
Yahoo
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') is explicitly mentioned and is used to manipulate images of people, which fits the definition of an AI system. The use of this AI system has directly led to psychological harm to users, including body shaming and mental health impacts, which are harms to groups of people. The article provides multiple testimonies and expert opinions confirming these harms. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use.

From a now-deleted 'chubby filter' to What I Eat in a Day videos, TikTok has a problem with body image content

2025-03-21
Yahoo
Why's our monitor labelling this an incident or hazard?
The AI system (CapCut's AI-generated filter) was used to create content that directly led to harm by promoting negative body image and fatphobia, which are forms of harm to communities and individuals' health. The article details realized harm, including psychological impacts and social backlash. Therefore, this qualifies as an AI Incident. The article also discusses TikTok's responses, but the primary focus is on the harm caused by the AI system's use and the content it generated, not just on the responses, so it is not merely Complementary Information.

TikTok Called Out for AI 'Chubby Filter' Critics Say Could Lead to Body Negativity

2025-03-21
CNET
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI filter) whose use is raising concerns about potential psychological harm related to body image. However, the article does not report any direct or realized harm resulting from the filter's use, only criticism and warnings about possible negative effects. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but no incident has been confirmed yet.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Magic Valley
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered 'chubby' filter) whose use has directly led to harm in the form of negative impacts on users' mental health and self-esteem, which qualifies as injury or harm to a group of people. The filter's promotion of body shaming and unhealthy beauty standards constitutes a clear harm linked to the AI system's use. Therefore, this qualifies as an AI Incident.

TikTok Removes 'Chubby Filter' That Made Users Obese After Criticism

2025-03-22
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system involved is the 'chubby filter' that uses AI to alter images. Its use led to social harm by perpetuating body shaming and potentially harming users' mental health, which fits the definition of harm to communities or groups of people. The harm is realized as users expressed criticism and concern about the filter's impact. TikTok's response indicates acknowledgment of the harm caused. Therefore, this event qualifies as an AI Incident due to the direct use of an AI system causing social harm.

TikTok bans controversial AI 'chubby' filter after users slam it for promoting bodyshaming

2025-03-22
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies user images to simulate weight gain, influencing perceptions and potentially causing psychological harm. The controversy and expert commentary highlight the harm caused by the filter's use, fulfilling the criteria for an AI Incident due to harm to communities and health. The removal of the filter is a response but does not negate the fact that harm occurred while it was in use. Therefore, this event qualifies as an AI Incident.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Roanoke Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered 'chubby' filter) whose use has led to harm in the form of promoting body shaming and reinforcing unhealthy beauty standards, which can be considered harm to communities and individuals' mental health. The filter's presence and use have caused social harm, and TikTok's removal of the filter is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities and individuals' well-being.

'A Huge Step Backwards': TikTok Slammed For 'Chubby' AI Filter, Now Banned - News18

2025-03-22
News18
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby' filter) is explicitly mentioned and is used to generate altered images influencing users' perceptions of body image. The harms described include negative impacts on mental health, promotion of toxic diet culture, and potential contribution to eating disorders, which are harms to health and communities. TikTok's banning of the filter and content moderation indicate recognition of these harms. Although the harm is indirect and societal rather than immediate physical injury, it fits within the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to health and communities. The event is not merely a product launch or general news, nor is it a future risk without current impact, so it is not an AI Hazard or Complementary Information.

'Chubby filter' pulled from TikTok after outraged users slammed it

2025-03-21
The Sun
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies images to simulate weight gain. Its use led to indirect harm by reinforcing harmful stereotypes and contributing to toxic diet culture, which can affect mental health and well-being. The platform's removal of the filter acknowledges the harm caused. This fits the definition of an AI Incident because the AI system's use directly or indirectly led to harm to communities and individuals' health.

TikTok removes AI based 'Chubby filter' after users flag it as 'damaging' and 'toxic' | Today News

2025-03-22
mint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-based 'Chubby filter') whose use led to social harm, specifically harm to communities through body shaming and toxic cultural effects. The harm is realized as users experienced negative impacts and backlash, prompting TikTok to remove the filter and restrict related content. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and individuals' well-being.

TikTok removes AI 'chubby' filter after body-shaming criticism

2025-03-21
Mashable
Why's our monitor labelling this an incident or hazard?
The AI system involved is the 'chubby' filter, an AI-driven image alteration tool that modifies user appearances. Its use has directly led to harm by reinforcing harmful beauty ideals and body shaming, which negatively affect individuals' mental health and social well-being, constituting harm to communities. The platform's response to remove the filter and restrict its spread confirms the recognition of this harm. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly caused harm.

'Fat shaming' filter removed from TikTok over mental health concerns

2025-03-22
The Telegraph
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that manipulates images to alter body appearance, which can influence users' mental health negatively. Although the harm is psychological and societal, it is significant and linked to the AI system's use. However, the article focuses on the potential and ongoing harm rather than a specific incident of injury or violation that has already occurred. The removal of the filter and disclaimers are responses to these concerns, making this event primarily complementary information about societal and governance responses to AI-related harms rather than a new AI Incident or Hazard.

Viral 'chubby filter' pulled from TikTok amid fears of eating disorder risks

2025-03-24
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that digitally alters images to make users appear overweight. Its use led to widespread backlash because it encouraged harmful stereotypes and toxic diet culture, which can cause harm to individuals' mental health and well-being, especially among vulnerable groups like young users. The harm is realized and direct, as evidenced by users reporting negative effects and calls for removal. Hence, this event meets the criteria for an AI Incident due to harm to health and communities caused by the AI system's use.

TikTok's 'dangerous' chubby filter is gone -- but there's more to be done

2025-03-21
Metro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (the filters) that manipulate images to alter users' appearances. The use of these AI filters has directly led to psychological harm and mental health consequences for individuals, including stress, anxiety, depression, and body dysmorphic disorder, which qualifies as injury or harm to health (a). The article details realized harm caused by the AI system's outputs and the platform's response to mitigate these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons' health and well-being.

What is the controversial 'chubby filter' that TikTok has taken down?

2025-03-22
Firstpost
Why's our monitor labelling this an incident or hazard?
The 'chubby filter' is an AI system that modifies images to change body size. Its use has directly led to harm by promoting unrealistic and harmful beauty standards, contributing to body dissatisfaction, eating disorders, and fatphobia, which are forms of harm to communities and individuals' mental health. The filter's removal and content moderation are responses to this harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly caused significant social and psychological harm.

From a 'chubby filter' to 'what I eat in a day' videos: TikTok's body image problem

2025-03-21
Evening Standard
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as the 'chubby filter' is AI-generated and modifies user images. The filter's use led to public outcry and its deletion, indicating recognition of potential harm to body image and mental health. However, the article does not provide evidence of direct or indirect harm occurring, nor does it describe a plausible future harm scenario beyond the public reaction. Therefore, this event is best classified as Complementary Information, as it provides context on societal response to an AI system's impact on body image but does not document an AI Incident or AI Hazard.

TikTok Removes Chubby Filter After Backlash Over Body Shaming Concerns

2025-03-22
Deccan Chronicle
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') was used in a way that directly led to harm by promoting body shaming and potentially triggering mental health issues such as eating disorders. This constitutes harm to health and communities as defined in the framework. Therefore, this event qualifies as an AI Incident because the AI system's use caused realized harm, even if the harm is social and psychological rather than physical.

TikTok removes controversial filter

2025-03-24
KTLA 5
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby' AI filter) that was used in a way that caused harm to communities by promoting body-shaming and potentially affecting users' mental health. The filter's use directly led to social harm, fulfilling the criteria for an AI Incident under harm to communities. TikTok's removal of the filter and addition of disclaimers and support resources are responses to this harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

All the backlash around the viral AI 'chubby filter' on TikTok, explained

2025-03-21
The Tab
Why's our monitor labelling this an incident or hazard?
An AI system (the TikTok AI chubby filter) is explicitly mentioned as being used to alter images. The use of this AI system has directly led to social and psychological harm, including body shaming, negative mental health impacts, and community harm through perpetuating harmful stereotypes and diet culture. These harms fall under harm to communities and individuals' health (mental health). Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to realized harm.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Richmond Times-Dispatch
Why's our monitor labelling this an incident or hazard?
The 'chubby' filter is an AI system that modifies user images to alter body appearance. Its use has directly led to harm by promoting body shaming and reinforcing unhealthy beauty standards, which affects users' mental health and community well-being. The harm is realized and documented through user backlash and expert criticism. TikTok's response to remove the filter and restrict its exposure further confirms the recognition of harm. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

TikTok withdraws controversial 'chubby' filter

2025-03-24
The Buffalo News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby' filter) that was used to alter images in a way that caused harm by promoting body-shaming and reinforcing unhealthy beauty standards, which are harms to communities and individuals' mental health. The harm is realized as users expressed distress and backlash, and the filter's presence on the platform contributed to this harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Tucson
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') was used to alter images in a way that has caused harm to users' mental well-being by reinforcing negative body image and potentially contributing to body shaming. This constitutes harm to health (mental health) of a group of people, fulfilling the criteria for an AI Incident. The withdrawal of the filter and content moderation are responses to this harm. Therefore, this event is classified as an AI Incident due to the realized harm caused by the AI system's use.

Popular 'chubby' TikTok trend criticised for fuelling 'toxic diet culture'

2025-03-21
Bristol Post
Why's our monitor labelling this an incident or hazard?
The 'chubby AI filter' is an AI system that edits images to simulate weight gain. Its use has directly led to mental health harms and the promotion of toxic diet culture, as evidenced by users reporting negative impacts and calls for banning the filter. The harms are social and psychological, affecting communities and individuals' well-being, fitting the definition of an AI Incident. The article does not merely discuss potential harm or responses but reports realized harm linked to the AI system's use.

TikTok withdraws controversial 'chubby' filter

2025-03-24
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby' AI filter) that was used to alter images of users. The use of this AI filter has directly led to harm in the form of negative impacts on users' mental health and self-esteem, which falls under harm to health (a). The filter's promotion of body shaming and unhealthy beauty standards constitutes a form of harm to individuals and communities. Since the harm has occurred and TikTok is responding by removing the filter and restricting its spread, this qualifies as an AI Incident. The article focuses on the harm caused and the company's response, not just general AI news or future risks.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Press of Atlantic City
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby' AI filter) that was used to alter images of users. The use of this AI filter led to social harm, specifically harm to communities and individuals' mental health through body shaming and reinforcement of unhealthy beauty standards. This harm has materialized as users expressed distress and backlash, indicating realized harm. TikTok's removal of the filter and content moderation are responses to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to communities and individuals' well-being.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Greensboro News and Record
Why's our monitor labelling this an incident or hazard?
The AI system involved is the 'chubby' filter, an AI-powered image alteration tool on TikTok. Its use has directly led to harm in the form of body shaming and negative impacts on users' mental health and self-esteem, especially among vulnerable groups like teenagers. This constitutes harm to communities and individuals' health, fitting the definition of an AI Incident. The article details the harm occurring and the company's response, confirming the realized impact rather than a potential future risk.

Why Has TikTok Pulled Down Its 'Chubby Filter'?

2025-03-22
NewsX
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby filter') that was used and led to harm in the form of psychological impact and promotion of toxic diet culture, which can be considered harm to communities and individuals' well-being. The filter's use directly contributed to this harm, fulfilling the criteria for an AI Incident. TikTok's response to remove and restrict the filter is a mitigation measure but does not negate the fact that harm occurred.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Statesville.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered 'chubby' filter) whose use has indirectly led to harm in the form of negative impacts on users' mental health and self-esteem, which falls under harm to health of groups of people. The filter's presence and use caused social harm, prompting its removal. Since harm has occurred and the AI system's use is central to the issue, this qualifies as an AI Incident.

Calls to ban 'damaging and disheartening' chubby filter on TikTok

2025-03-21
Her.ie
Why's our monitor labelling this an incident or hazard?
The 'chubby filter' is an AI system that modifies images to simulate weight gain. Its use on TikTok has caused emotional and psychological harm to users by promoting body shaming and negative self-perception, which constitutes harm to groups of people. This harm is directly linked to the AI system's outputs and their reception by users, fulfilling the criteria for an AI Incident under harm to health and communities. Therefore, this event is classified as an AI Incident.

TikTok withdraws controversial 'chubby' filter

2025-03-24
The Quad-City Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby' AI filter) that was used to alter images of users. The use of this AI filter has directly led to harm in the form of negative impacts on users' mental health and self-esteem, which falls under harm to health of persons. The filter's promotion of body shaming and unhealthy beauty standards constitutes a violation of well-being and could be considered harm to communities. Since the harm has occurred and TikTok is responding by removing the filter and restricting its exposure, this qualifies as an AI Incident.

TikTok withdraws controversial 'chubby' filter

2025-03-24
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') was actively used and led to harm in the form of body shaming and negative impacts on users' mental health and self-esteem, especially among teens. The harm is social and psychological, affecting communities and individual wellbeing, which fits within the definition of harm to communities or violation of rights. TikTok's withdrawal of the filter and content moderation are responses to this harm but do not negate the fact that harm occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI-based filter that modifies images to alter body appearance, which is an AI system by definition. The filter's use has directly led to harm by promoting body shaming and unhealthy beauty standards, which affect users' mental health and community wellbeing. The removal of the filter and content moderation are responses to this harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

TikTok withdraws controversial 'chubby' filter

2025-03-24
La Crosse Tribune
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby' filter) that was used to alter images of users' bodies. The use of this AI filter has directly led to harm in the form of promoting body shaming and reinforcing unhealthy beauty standards, which can negatively impact users' mental health and wellbeing. This constitutes harm to groups of people (harm to health and communities). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm. The article also describes the company's response to the harm, but the primary focus is on the harm caused by the AI filter's use.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'chubby' AI filter) that was used to alter images of users. The use of this AI filter has directly led to harm in the form of negative impacts on users' mental health and self-esteem, which falls under harm to health of persons. The filter's promotion of body shaming and reinforcement of unhealthy beauty standards constitutes a violation of wellbeing and can be considered harm to individuals. TikTok's removal of the filter is a mitigation response but does not negate the fact that harm occurred while the filter was active. Therefore, this qualifies as an AI Incident.

TikTok withdraws controversial 'chubby' filter

2025-03-24
KION546
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies user images to change body shape. Its use has led to harm in the form of negative impacts on mental health and reinforcement of harmful social norms (body shaming and unhealthy beauty standards), which can be considered harm to groups of people. The event reports that the filter was actively used and caused backlash due to these harms, thus constituting an AI Incident. The company's removal of the filter and content moderation are responses to this incident but do not change the classification of the original harm caused.

How body positivity activists feel about TikTok's 'chubby filter' trend

2025-03-20
Fashion Journal
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as the TikTok 'chubby' and 'skinny' filters that generate altered images of users' bodies. The article details how these filters cause psychological and social harm by reinforcing fatphobia and body shaming, which are violations of dignity and can be considered harm to communities and individuals' well-being. The harms are direct and ongoing, as users experience negative emotional responses and societal stigma fueled by the AI-generated content. Thus, the event meets the criteria for an AI Incident, as the AI system's use has directly led to significant, clearly articulated harms.

TikTok withdraws controversial 'chubby' filter

2025-03-24
WAAY TV 31
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies images to change users' appearances. Its use has directly contributed to harm by promoting body shaming and unhealthy beauty standards, which affect users' mental health and self-esteem, particularly among vulnerable groups like teens. This constitutes harm to health and communities as defined in the framework. The event reports realized harm and the company's response, making it an AI Incident rather than a hazard or complementary information.

TikTok withdraws controversial 'chubby' filter

2025-03-24
SCNow
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered 'chubby' filter) whose use has directly led to harm in the form of body shaming and negative impacts on users' mental health and self-esteem, which qualifies as harm to communities and individuals. The filter's deployment and its social consequences meet the criteria for an AI Incident because the AI system's use has directly caused harm. The company's response to remove the filter and restrict its exposure to teens is a mitigation step but does not negate the fact that harm occurred.

TikTok withdraws controversial 'chubby' filter

2025-03-24
North Platte Nebraska's Newspaper
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby' filter) was used to modify user images, leading to widespread criticism and concerns about body shaming and negative impacts on users' mental health, particularly among teens. This constitutes indirect harm to health and communities. The withdrawal of the filter and content moderation are responses to this harm. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

TikTok withdraws controversial 'chubby' filter

2025-03-24
McDowellNews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered 'chubby' filter) whose use directly led to harm to communities and individuals by promoting body-shaming and unhealthy beauty standards, which are forms of psychological and social harm. The filter's deployment and the resulting negative social impact meet the criteria for an AI Incident, as the AI system's use has directly led to harm. The company's response to remove the filter and restrict its exposure to teens is a mitigating action but does not negate the occurrence of harm.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Culpeper Star-Exponent
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby' filter) was used to modify user images, leading to social and psychological harm through body shaming and reinforcing harmful beauty standards. This constitutes indirect harm to the health of individuals, especially mental health, fitting the definition of an AI Incident. The company's withdrawal of the filter and content moderation are responses to this harm. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

TikTokers call for 'chubby filter' to be banned

2025-03-21
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') is explicitly mentioned and is used to manipulate images of people to appear overweight. The use of this filter has directly led to harm, including body shaming and negative mental health impacts, as evidenced by users reporting discomfort, deletion of the app, and expert warnings about its contribution to eating disorders and toxic diet culture. These harms fall under injury or harm to health and harm to communities. Hence, this is an AI Incident rather than a hazard or complementary information.

TikTokers Demand Chubby Filter Ban - News Directory 3

2025-03-21
News Directory 3
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') is explicitly mentioned and is used to alter images. The concerns raised relate to potential psychological and social harms (body shaming, toxic culture, eating disorders). While these harms are serious, the article does not document a specific event where harm has already occurred due to the filter's use, but rather ongoing societal concerns and advocacy for its ban or warning labels. Therefore, this situation represents a plausible risk of harm from the AI system's use rather than a documented incident. It is not merely general AI news or product launch, but a discussion of potential harms and societal response, fitting the definition of an AI Hazard.

TikTok withdraws controversial 'chubby' filter

2025-03-24
Omaha.com
Why's our monitor labelling this an incident or hazard?
The filter is an AI system that modifies images to alter users' appearances. Its use has caused harm by promoting body shaming and unhealthy beauty standards, which affects users' mental health and well-being, especially among teens. This constitutes harm to communities and individuals' health, fitting the definition of an AI Incident. The event describes realized harm rather than potential harm, and the AI system's role is pivotal in causing this harm. Hence, the classification is AI Incident.

Influencers blast new TikTok 'chubby' filter that 'makes you look fat'

2025-03-18
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The TikTok 'chubby' filter is an AI system that generates altered images of users to simulate a heavier body size. The use of this AI system has directly led to social harms, including body shaming, fatphobia, and the promotion of harmful stereotypes and attitudes that can contribute to mental health issues and eating disorders. These harms affect communities and individuals' rights to dignity and non-discrimination. The article documents these harms as occurring and causing distress, not merely potential or hypothetical. Hence, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use.

'Chubby' filter gives influencers an appalling way to fat-shame...

2025-03-19
New York Post
Why's our monitor labelling this an incident or hazard?
The generative AI filter is explicitly mentioned and used to alter images, which is an AI system. The use of this AI system has directly led to harm in the form of body shaming, social stigma, and psychological harm to individuals and communities, fulfilling the criteria for harm to communities and violations of rights. The article documents realized harm, not just potential harm, making this an AI Incident rather than a hazard or complementary information.

If you laugh at this video, you're the problem

2025-03-18
News.com.au
Why's our monitor labelling this an incident or hazard?
The AI system here is the digital filter that uses AI to modify images/videos to add weight to people's faces and bodies. Its use has directly led to social harm by promoting fatphobia and body shaming, which are violations of rights related to dignity and respect, and cause harm to communities through negative impacts on mental health and societal attitudes. The article documents realized harm through expert warnings and public criticism, indicating this is an AI Incident rather than a mere hazard or complementary information.

WEIGHING IN: Experts slam TikTok's 'chubby filter' trend

2025-03-19
Perth Now
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby filter') is directly used to create altered images that have led to social and psychological harms, including fatphobia, fat shaming, and potential encouragement of eating disorders. These harms fall under harm to communities and harm to health. Since the harm is occurring and linked to the AI system's use, this qualifies as an AI Incident. The article details the realized harm and expert condemnation, not just potential or future harm, nor is it merely complementary information or unrelated news.

AI "chubby" filter causes outrage on TikTok

2025-03-17
Newsweek
Why's our monitor labelling this an incident or hazard?
The AI chubby filter is explicitly described as an AI system that alters images. The harm described is social and psychological, including reinforcing harmful beauty ideals and fatphobia, which affect communities and individuals' rights to dignity and non-discrimination. The article documents actual use and social backlash, indicating realized harm rather than potential harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and violations of rights. The event is not merely complementary information or unrelated, as the filter's AI nature and its social impact are central to the report.

TikTok users hit out at 'offensive and fatphobic' trend

2025-03-16
Extra.ie
Why's our monitor labelling this an incident or hazard?
The filter uses AI-based image manipulation to alter users' photos, which qualifies as an AI system. The use of this filter has led to social harm, specifically emotional and psychological harm related to body image and fatphobia, affecting communities and individuals' well-being. This harm is realized and ongoing as users express discomfort and offense, indicating a violation of social norms and potential harm to communities. Therefore, this event constitutes an AI Incident due to the direct harm caused by the AI system's use.

The Latest TikTok Trend Is ... Fat-Shaming

2025-03-18
The Cut
Why's our monitor labelling this an incident or hazard?
The AI system (the AI filter) is explicitly mentioned and is used to generate altered images. The harm discussed relates to social and psychological impacts (harm to health and communities) linked to the trend and TikTok's algorithmic promotion of related content. However, the article does not describe a specific event where the AI filter's use directly or indirectly caused a concrete harm incident, nor does it describe a plausible future harm scenario distinct from ongoing social dynamics. Instead, it provides analysis and context about the trend and its implications, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Are we back to mocking bigger bodies? TikTok's 'Chubby' filter has people worried

2025-03-18
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The 'Chubby' filter is an AI system that modifies images to create a specific visual effect. Its use has directly led to social harms, including reinforcing fatphobia, body shaming, and undermining size inclusivity movements. These harms affect communities and individuals' rights to dignity and non-discrimination. The article describes realized harm through widespread use and social backlash, not just potential harm. Hence, this event meets the criteria for an AI Incident due to the AI system's use causing direct social harm.

Does TikTok's 'chubby' filter prove fatphobia is impossible to outrun?

2025-03-19
Herald Sun
Why's our monitor labelling this an incident or hazard?
The TikTok 'chubby' filter is an AI system that generates altered images of users to simulate a larger body size. The article describes how this filter is widely used in a manner that ridicules larger bodies, reinforcing fatphobia and contributing to mental health harms and social stigma. These harms fall under harm to communities and health. The AI system's use directly contributes to these harms by enabling and amplifying the harmful content. Although the article is more reflective and does not describe a single discrete event, the widespread use and social impact of the AI filter causing harm meets the criteria for an AI Incident due to the significant, clearly articulated harms where the AI system's role is pivotal. It is not merely a potential risk (hazard) or complementary information, but an ongoing harm linked to the AI system's use.

TikTok says this AI filter is 'hilarious'. Not everyone is laughing

2025-03-20
Brisbane Times
Why's our monitor labelling this an incident or hazard?
The AI system (the image filter) is used to generate altered images of users' bodies, which can influence users' perceptions and mental health. The criticism highlights potential harm to mental health due to reinforcing negative body image stereotypes. This constitutes harm to health (mental health) indirectly caused by the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people (users) through mental health impacts.

Why are people mad about the 'Chubby' filter that's trending on TikTok?

2025-03-17
indy100.com
Why's our monitor labelling this an incident or hazard?
The 'chubby' filter is an AI system that modifies images to simulate weight gain. Its use in a way that mocks or disrespects certain body types has led to emotional harm and social backlash, which fits the definition of an AI Incident due to harm to communities and violation of rights. The harm is realized and ongoing as evidenced by the comments and reactions from affected individuals. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

WEIGHING IN: Experts slam TikTok's 'chubby filter' trend

2025-03-19
The West Australian
Why's our monitor labelling this an incident or hazard?
The AI system in question is the 'chubby filter' on the CapCut app, which uses AI to modify images. The use of this AI filter has directly led to harm in the form of fat-shaming, reinforcement of fatphobia, and potential mental health issues such as eating disorders and body dysmorphia. These harms fall under violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework.

Goodbye to cyberbullying: the end of TikTok's 'fat filter'

2025-03-24
Okaz newspaper
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-based 'Chubby' filter) whose use directly led to social harm, specifically cyberbullying and negative impacts on mental health related to body image. This constitutes harm to communities and individuals' health, fitting the definition of an AI Incident. The platform's removal of the filter is a response to this realized harm, not merely a precautionary measure, confirming the incident classification.

TikTok users demand a ban on the controversial filter... - Al Wakeel News

2025-03-24
Al Wakeel News
Why's our monitor labelling this an incident or hazard?
The filter is explicitly described as relying on AI to alter images, thus involving an AI system. The concerns raised relate to potential harm to individuals' mental health, such as contributing to eating disorders and promoting toxic diet culture. Although no direct physical harm is reported, the psychological harm to users is a form of injury to health (mental health). Therefore, the use of this AI system has indirectly led to harm, qualifying this as an AI Incident.

TikTok users call for a ban on the 'fat filter'

2025-03-21
https://www.alanba.com.kw/newspaper/
Why's our monitor labelling this an incident or hazard?
The filter is explicitly described as AI-based and is used widely on TikTok. Its use has directly caused psychological harm to users, including feelings of distress, body shaming, and potential exacerbation of eating disorders, which constitute harm to health and communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm. The article documents realized harm rather than just potential harm, and the AI system's role is pivotal in causing this harm through its image manipulation and algorithmic promotion.

Saraya Agency: TikTok removes the Chubby filter after widespread criticism

2025-03-22
Saraya News Agency
Why's our monitor labelling this an incident or hazard?
The AI system (the Chubby filter) is explicitly mentioned and its use is linked to potential psychological harms. However, the article focuses on the public reaction, the platform's removal of the filter, and the broader implications for AI's social impact. There is no indication that harm has directly occurred or that a specific incident involving the AI system caused injury or rights violations. The event is about the response to concerns and the responsible use of AI, fitting the definition of Complementary Information rather than an Incident or Hazard.

TikTok faces sharp criticism as the Chubby filter spreads: what's the story?

2025-03-24
Sada El Balad
Why's our monitor labelling this an incident or hazard?
The AI system (the Chubby filter) was used to modify images in a way that led to psychological and social harm, as evidenced by expert warnings and public criticism about reinforcing negative body stereotypes and potentially encouraging unhealthy habits. The harm is indirect but clearly linked to the AI system's use and outputs. TikTok's removal of the filter and disclaimers confirm recognition of this harm. Hence, this event meets the criteria for an AI Incident involving harm to health and communities.

Akhbarak Net | TikTok: users call for a ban on the 'fat filter' - BBC News Arabic

2025-03-21
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The AI system (the AI-based filter) is explicitly mentioned and is used to alter images of people. The harm described is indirect psychological and social harm to individuals and communities, such as body shaming and fostering harmful cultural attitudes, which falls under harm to communities. Since the harm is occurring as users report negative impacts and call for banning the filter, this qualifies as an AI Incident. There is no indication that the harm is only potential; it is already realized through the social effects and user reactions.

TikTok users call for a ban on the 'fat filter'

2025-03-22
@Elaph
Why's our monitor labelling this an incident or hazard?
The AI system (the 'fat filter') is explicitly mentioned and is used to alter images of people, which has caused real psychological harm to users, as reported by multiple individuals and experts. The harms include body shaming, reinforcement of negative stereotypes, and potential contribution to eating disorders, which are recognized forms of injury to health and harm to communities. The article describes actual harm occurring due to the AI system's use, not just potential harm. Hence, this is an AI Incident rather than a hazard or complementary information.

TikTok users call for a ban on the 'fat filter' #breaking

2025-03-21
Cedar News Newspaper
Why's our monitor labelling this an incident or hazard?
The filter is explicitly described as AI-based and manipulates images to change appearance. The widespread use of this AI system has caused direct psychological harm to users, including feelings of body shaming, negative self-image, and potential eating disorders, which constitute harm to health and communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework.

TikTok removes a filter that makes people look fat after widespread criticism

2025-03-22
Al Furat News
Why's our monitor labelling this an incident or hazard?
The AI system (the 'chubby' filter) was used to modify images in a way that led to social and psychological harm, as evidenced by widespread criticism and concerns about body shaming. The platform's removal of the filter and addition of disclaimers indicate recognition of the harm caused. The AI's role in generating altered images that contributed to this harm is direct and material. Therefore, this event meets the criteria for an AI Incident due to realized harm to communities and health.

TikTok removes the 'fat filter' after widespread objections

2025-03-23
Asharq News
Why's our monitor labelling this an incident or hazard?
The AI system (the fat filter) was used and caused indirect harm to individuals and communities by reinforcing harmful stereotypes and potentially exacerbating mental health issues such as eating disorders. This constitutes harm to communities and individuals' psychological health, fitting the definition of an AI Incident. The event reports realized harm and the platform's response to mitigate it, so it is not merely a hazard or complementary information.