Whistleblowers Expose Meta and TikTok's AI Algorithms Amplifying Harmful Content

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Whistleblowers revealed that Meta and TikTok deliberately weakened content moderation and prioritized engagement, allowing their AI recommendation algorithms to amplify harmful content such as violence, exploitation, and extremism. Internal research showed these decisions directly exposed users to harm, as companies competed for user attention and business growth.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI systems (algorithms for content recommendation and feed ranking) developed and used by Meta and TikTok. These AI systems have directly led to harms including psychological injury to users (especially adolescents), exposure to harmful and extremist content, and violations of user safety and rights. The harms are realized and documented through internal research and whistleblower accounts. The AI systems' prioritization of engagement over safety is a direct cause of these harms. Therefore, this event qualifies as an AI Incident because the development and use of AI systems have directly led to significant harm to individuals and communities.[AI generated]
AI principles
Safety, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Psychological, Public interest

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Organisation/recommenders


Articles about this incident or hazard

How TikTok and Meta ignored safety to win the battle for engagement, according to former employees - BBC News Brasil

2026-03-17
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (algorithms for content recommendation and feed ranking) developed and used by Meta and TikTok. These AI systems have directly led to harms including psychological injury to users (especially adolescents), exposure to harmful and extremist content, and violations of user safety and rights. The harms are realized and documented through internal research and whistleblower accounts. The AI systems' prioritization of engagement over safety is a direct cause of these harms. Therefore, this event qualifies as an AI Incident because the development and use of AI systems have directly led to significant harm to individuals and communities.

Whistleblowers accuse TikTok and Meta of allowing harmful content in order to compete for users' attention

2026-03-16
SAPO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of recommendation algorithms used by TikTok and Meta to curate and promote content. The whistleblowers reveal that these AI systems were deliberately tuned or allowed to promote harmful content to increase user engagement, leading to real harm such as exposure to violent and hateful content and user radicalization. This meets the criteria for an AI Incident because the AI systems' use has directly or indirectly led to harm to individuals and communities, including mental health harm and violations of user safety rights. The presence of internal documents and user testimonies further supports the direct link between AI system use and harm.

How TikTok and Meta ignored safety to win the battle for engagement, according to former employees

2026-03-17
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems in the form of recommendation algorithms that influence content feeds on social media platforms. These AI systems were used in ways that knowingly or negligently amplified harmful content, causing direct harm to users, including psychological harm and exposure to dangerous material. The whistleblower evidence and internal documents confirm that the AI systems' outputs were a pivotal factor in these harms. Therefore, this event meets the definition of an AI Incident due to direct harm caused by the use of AI systems in content recommendation and moderation failures.

TikTok and Meta allowed inappropriate content in order to compete for users' attention

2026-03-16
SIC Notícias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of recommendation algorithms that select and promote content to users. The companies' deliberate decisions to allow harmful content to be amplified by these AI systems directly caused harm to users, including psychological harm and exposure to violent and hateful content. The harm is realized and documented through whistleblower testimonies and internal investigations. This meets the criteria for an AI Incident because the AI system's use led directly to violations of user safety and harm to communities.

How TikTok and Meta ignored safety to win the battle for engagement, according to former employees

2026-03-17
agazeta.com.br
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of recommendation algorithms that influence content feeds on social media platforms. The harms described include psychological harm, exposure to harmful and radicalizing content, and violations of user safety, especially for minors, which fall under harm to communities and individuals. The companies' internal knowledge and decisions to allow harmful content for engagement gains indicate direct and indirect causation of harm by the AI systems. This meets the criteria for an AI Incident rather than a hazard or complementary information, as the harms are ongoing and documented through whistleblower evidence and internal research.

Meta and TikTok ignored user safety to fight stock market declines | TugaTech

2026-03-18
TugaTech
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of recommendation algorithms and AI-based moderation tools used by Meta and TikTok. These AI systems have been used in ways that have directly or indirectly led to harm to communities and individuals, including exposure to harmful content, radicalization, and failure to protect minors. The harms are realized and documented through internal studies and whistleblower testimonies. The companies' decisions to prioritize engagement and stock prices over safety demonstrate the AI systems' role in causing these harms. Hence, the event meets the criteria for an AI Incident.

Former employees accuse Meta and TikTok of turning a blind eye to toxic content in order to grow

2026-03-16
Executive Digest
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—recommendation algorithms used by Meta and TikTok—that have been used in ways that directly or indirectly led to harm to communities and individuals by promoting harmful content. The whistleblower testimonies and internal documents indicate that these harms are ongoing and were knowingly tolerated or insufficiently mitigated by the companies. This fits the definition of an AI Incident because the AI systems' use has directly or indirectly caused harm (harmful content exposure, radicalization, hate speech proliferation). The event is not merely a potential risk or a governance response but reports realized harms linked to AI system use.

Social networks on alert: algorithms may be putting young people at risk as Meta and TikTok prioritize controversial content

2026-03-17
Marketeer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (content recommendation algorithms) whose use directly led to harm to minors and communities through exposure to harmful content and radicalization. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI systems' use has directly led to harm to groups of people (minors) and communities through the spread of harmful content and radicalization.

TikTok and Meta compromise user security

2026-03-16
Today.Az
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions algorithmic strategies, from which the involvement of AI systems in content recommendation and moderation can reasonably be inferred. The harmful content spread includes violence, sexual harassment, terrorism, and divisive political content, which constitute harm to communities and potentially violations of rights. The AI systems' use to prioritize engagement over safety directly leads to these harms. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI systems' deployment and management decisions.

TikTok and Meta risked safety to win algorithm arms race, whistleblowers say

2026-03-16
BBC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of content recommendation algorithms that influence what users see. The companies' decisions to allow more harmful content to increase engagement directly led to harm to users and communities, fulfilling the criteria for an AI Incident. The harms include exposure to harmful content such as violence, sexual blackmail, misogyny, conspiracy theories, bullying, and hate speech, which affect the health and well-being of individuals and communities. The involvement of AI in these harms is direct, as the algorithms determine content visibility and engagement. Therefore, this event qualifies as an AI Incident.

TikTok and Meta risked safety to win algorithm arms race, whistleblowers say

2026-03-16
BBC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of recommendation algorithms that influence content feeds on social media platforms. The harms described include injury to mental health, radicalization, exposure to harmful and violent content, and normalization of hate speech, which affect individuals and communities. The AI systems' development and use decisions, such as allowing more harmful content to increase engagement, directly contributed to these harms. The whistleblower evidence and internal research confirm the causal link between the AI algorithms and the harms. Hence, this is an AI Incident as the AI systems' use has directly and indirectly led to significant harms.

Tiktok and Meta 'pushed harmful content' on people's feeds for views

2026-03-16
Metro
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered recommendation algorithms at TikTok and Meta that have been used to maximize user engagement by promoting 'borderline harmful content' and harmful posts. The whistleblowers provide evidence that these AI systems have directly contributed to the spread of harmful content, including sexual blackmail, cyberbullying, and hate speech, causing real harm to individuals and communities. The AI systems' development and use have led to violations of rights and harm to users, meeting the criteria for an AI Incident. The involvement of AI is clear from the description of recommendation engines and algorithmic content promotion. The harms are realized and ongoing, not merely potential.

Meta and TikTok let harmful content rise after evidence outrage drove engagement, say whistleblowers

2026-03-17
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The platforms' AI recommendation algorithms are AI systems that infer from user data to generate content feeds. The whistleblowers reveal that management decisions led to the AI systems being tuned to promote more harmful content, which directly caused harm to users and communities by exposing them to violence, sexual blackmail, and extremist content. This constitutes an AI Incident because the AI systems' use directly led to violations of safety and harm to communities. The event involves realized harm, not just potential risk, and thus is not merely a hazard or complementary information.

Inside TikTok-Meta algorithm war: How the race for engagement is putting users at risk

2026-03-16
The News International
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of social media recommendation algorithms that select and promote content to users. The harm includes psychological injury, radicalization, normalization of violence, and exposure to harmful content, which affect individuals and communities, fulfilling the harm criteria. The AI systems' use and management decisions (e.g., instructions to allow harmful content) have directly led to these harms. Although companies deny or claim mitigation efforts, the reported harms are ongoing and documented. Hence, this is an AI Incident rather than a hazard or complementary information.

Meta and TikTok let harmful content rise after evidence outrage drove engagement, say whistleblowers - MyJoyOnline

2026-03-17
MyJoyOnline.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of recommendation algorithms that select and promote content on social media platforms. The whistleblowers provide evidence that these AI systems were deliberately allowed or designed to promote harmful content to maximize engagement and profits, despite known risks. The harms include psychological harm to individuals (e.g., radicalization, exposure to sexual blackmail), harm to communities (normalization of hate speech, violence), and violations of user safety rights. The AI systems' outputs directly contributed to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a governance response but documents realized harms caused by AI system use and management decisions.

Whistleblowers claim TikTok, Meta sacrificed safety for engagement

2026-03-16
NewsBytes
Why's our monitor labelling this an incident or hazard?
Instagram's Reels uses AI algorithms to recommend and display content. The internal research showing increased harmful content in Reels indicates that the AI system's outputs contributed to harm to communities and individuals (bullying, harassment, hate speech). The company's decision to prioritize engagement growth despite these findings and insufficient safety staffing implies a failure to mitigate known harms. Therefore, the AI system's use directly led to realized harm, fitting the definition of an AI Incident.

TikTok and Meta Accused of Prioritizing Engagement Over Safety in Algorithm Race

2026-03-16
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The social media platforms use AI systems (algorithms) to curate and prioritize content in users' feeds. The whistleblower reports indicate that these AI systems were deliberately tuned or allowed to promote harmful content to increase engagement, which has led to real harm such as exposure to misogynistic content, conspiracy theories, and neglect of reports of sexual exploitation. This meets the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to communities and user safety. The involvement is in the use of AI systems for content recommendation and moderation decisions that resulted in harm.

Meta loosened safety standards because 'the stock price is down': whistleblowers detail Big Tech's engagement-over-safety playbook - Silicon Canals

2026-03-16
Silicon Canals
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered recommendation algorithms at Meta and TikTok that optimize for engagement, leading to increased exposure to harmful content such as bullying, hate speech, and radicalizing material. The whistleblower accounts and internal research confirm that these harms are occurring and are a direct consequence of the AI systems' operation and the companies' decisions to loosen safety standards. The harms to individuals' mental health and to communities through the spread of extremist content fall under harm to health and harm to communities. Therefore, this event meets the criteria for an AI Incident due to the direct and indirect harms caused by the AI systems' use and malfunction (inability to control outputs).

TikTok, Meta compromised safety in algorithm race -- report

2026-03-16
Mobile World Live
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered recommendation systems and algorithms used by TikTok and Meta that promote harmful content to maximize engagement and profits, despite internal knowledge of the risks. The whistleblower accounts indicate that these AI systems' outputs have directly led to harm through the spread of harmful and dangerous content, including hate speech and harassment, which harms communities and individuals. This meets the criteria for an AI Incident as the AI systems' use has directly led to harm. The event is not merely a potential hazard or complementary information but a report of realized harm linked to AI system use.

TikTok and Meta risked safety to win algorithm arms race, whistleblowers say

2026-03-16
azertag.az
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered recommendation algorithms used by Meta and TikTok that have been shown internally to promote harmful content to maximize engagement and profits, despite known risks to user safety. The whistleblowers provide evidence that these AI systems' outputs have directly led to harm by exposing users to harmful and dangerous content. The involvement of AI systems in causing harm to users and communities through content amplification fits the definition of an AI Incident, as the harm is realized and directly linked to the AI systems' use and management decisions.

Meta, TikTok Put Engagement Ahead Of Safety, Exposing Users To Harmful Content, Say Whistleblowers

2026-03-16
arise.tv
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered recommendation algorithms at Meta and TikTok that amplified harmful content, causing direct harm to users, including exposure to violence, harassment, and radicalization. The whistleblowers describe internal decisions to prioritize engagement metrics over safety, leading to real and ongoing harm to individuals and communities. The involvement of AI systems in content recommendation and moderation is clear, and the harms described fall under injury or harm to persons and harm to communities. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Meta And TikTok Let Harmful Content Rise After Learning Outrage Drives Engagement

2026-03-17
2oceansvibe News | South African and international news
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (machine-learning recommendation algorithms) used by Meta and TikTok that have been manipulated to prioritize engagement over safety, resulting in the amplification of harmful content. This has directly led to harms such as radicalization, normalization of hate speech, and exposure of users, including children and teenagers, to harmful content. The AI systems' role is pivotal in causing these harms by shaping what content users see. The harms include violations of community safety and potentially human rights related to exposure to harmful content. Hence, this is an AI Incident rather than a hazard or complementary information.

Social Media Algorithms 'Prioritised Engagement Over Safety'

2026-03-17
Silicon UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of social media content recommendation algorithms that influence what content users see. The use and tuning of these AI systems directly led to the dissemination of harmful content, causing harm to users' wellbeing and communities. The whistleblower accounts and internal studies confirm that the AI systems' development and use prioritized profit over safety, resulting in real harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI systems' outputs and their role in amplifying harmful content.

Meta and TikTok boosted harmful content to compete for users, former insiders allege

2026-03-17
Computing
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered recommendation systems and algorithms that promoted harmful content to maximize engagement, leading to real harms such as mental health issues among teenagers, radicalization, and exposure to extremist content. The involvement of AI in content curation and amplification is central to the harm described. The harms include injury to health (mental health harms), harm to communities (spread of extremist and harmful content), and violations of user safety. The presence of lawsuits and whistleblower evidence confirms that harm has occurred, not just potential harm. Hence, this event meets the criteria for an AI Incident.

Whistleblowers Reveal How Meta, TikTok Cut Safety to Chase Engagement

2026-03-18
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The platforms' AI recommendation algorithms, which infer user preferences and optimize content delivery, were intentionally tuned to maximize engagement at the expense of safety, directly causing harm to users, especially vulnerable groups like minors. The documented increase in bullying, harassment, hate speech, and exposure to extremist content constitutes harm to communities and individuals' health. The whistleblower evidence and internal research confirm that these harms are a direct consequence of AI system use and corporate policy decisions. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to significant harms as defined in the framework.

Digest: Whistleblowers Say Meta & TikTok Amplify Harmful Content for Engagement; Amazon Rolls Out 1-Hour Delivery; UK Gov Launches Local Media Support Initiative - ExchangeWire.com

2026-03-18
exchangewire.com
Why's our monitor labelling this an incident or hazard?
The whistleblower revelations indicate that AI systems used by Meta and TikTok for content recommendation and moderation are directly linked to the amplification of harmful content, causing harm to communities through increased exposure to violence, exploitation, extremism, and hate speech. This meets the criteria for an AI Incident as the AI systems' use has directly led to violations of rights and harm to communities. The other topics in the article do not involve AI systems or related harms and are therefore unrelated.

Meta and TikTok let harmful content rise after evidence outrage drove engagement, say whistleblowers

2026-03-18
OSNews: Exploring the Future of Computing
Why's our monitor labelling this an incident or hazard?
The article explicitly implicates AI-driven content recommendation systems used by Meta and TikTok in promoting harmful content that causes real harm to communities, including the spread of extremism and dangerous misinformation. The involvement of AI systems in content moderation and feed curation is reasonably inferred, and the resulting harm to communities is direct and ongoing. Therefore, this event qualifies as an AI Incident due to the direct link between AI system use and realized harm.

"Radicalized by the algorithm at 14". How Meta and TikTok's algorithms push users toward hate, violence and conspiracy theories

2026-03-16
Jurnalul Naţional
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as recommendation algorithms that influence content exposure on social media platforms. The use of these AI systems has directly led to harms including radicalization, exposure to hate speech, and psychological harm to users, fulfilling the criteria for harm to communities and individuals. The article provides evidence from whistleblowers and internal research showing that the AI systems prioritize engagement over user well-being, causing significant societal harm. Hence, it meets the definition of an AI Incident rather than a hazard or complementary information.

Whistleblowers: TikTok and Meta allowed more harmful content to win the "algorithm war" - GAZETA de SUD

2026-03-16
GAZETA de SUD
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of recommendation algorithms that influence content exposure. The harm includes exposure to harmful content, radicalization, and insufficient safety measures, which are direct consequences of the AI systems' operation and prioritization. These harms affect user health and safety and community well-being, fitting the definition of an AI Incident. The whistleblower reports and internal documents confirm the AI systems' role in causing these harms, not just potential future risks.

Whistleblowers: TikTok and Meta sacrificed user safety to win the algorithm war

2026-03-16
ziarulnational.md
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-powered recommendation algorithms that influence the content users see, which is a clear AI system. The harm described includes exposure to harmful content, radicalization, cyberbullying, and normalization of hate and violence, which constitute harm to health, communities, and violations of rights. The platforms' decisions to allow or amplify such content for engagement and profit show the AI systems' use directly leading to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Over a dozen whistleblowers accuse social media platforms of allowing harmful content to boost engagement

2026-03-16
rador.ro
Why's our monitor labelling this an incident or hazard?
Social media platforms employ AI systems for content recommendation and moderation. The whistleblowers' claims indicate that these AI systems were deliberately configured or allowed to promote harmful content, causing harm to users and communities. This constitutes an AI Incident because the AI systems' use directly or indirectly led to harm through the spread of harmful content, fulfilling the criteria for harm to communities under the AI Incident definition.

Major social platforms face global regulation amid the impact on advertising revenue

2026-03-16
Business24
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI-powered recommendation algorithms on social media platforms that have increased the spread of harmful content, leading to real and significant harms such as exposure to violence, hate speech, and harassment. The AI systems' development and use have directly contributed to these harms by prioritizing engagement over safety. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to communities and user safety. The article does not merely warn of potential harm but documents ongoing harm and internal acknowledgment of these risks, confirming the incident classification.

Meta and TikTok allowed harmful content, whistleblowers say

2026-03-16
Economedia.ro
Why's our monitor labelling this an incident or hazard?
The social media platforms use AI systems (content recommendation algorithms) that influence what content users see. The article details how these AI systems were used or allowed to promote harmful content, leading to actual harm to users (bullying, harassment, exposure to violent and hateful content). This constitutes harm to communities and potentially violations of user rights. The involvement of AI in content curation and the resulting harm meets the criteria for an AI Incident, as the AI systems' use directly led to harm. Therefore, this event is classified as an AI Incident.