Twitter's AI Algorithm Promotes Child Sexual Abuse Content Despite Moderation Claims

Multiple reports reveal that Twitter's AI-driven recommendation algorithm has promoted child sexual abuse material (CSAM), leading to widespread viewing and sharing of illegal content. Despite Elon Musk's assurances and some content removals, significant amounts of CSAM persist and are amplified by the platform's AI systems, causing ongoing harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

Twitter employs AI systems, including recommendation algorithms, to curate and suggest content. The article details how these AI-driven features have indirectly facilitated the spread of child sexual abuse material, causing significant harm to victims and communities. The failure to adequately detect and remove this content, partly due to reduced use of detection software and staff cuts, has allowed the harm to persist. This constitutes an AI Incident because the AI system's use and malfunction have directly and indirectly led to violations of human rights and harm to communities through the dissemination of illegal and harmful content.[AI generated]
AI principles
Accountability, Safety, Respect of human rights, Human wellbeing, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological, Human or fundamental rights

Severity
AI incident

AI system task
Recommenders


Articles about this incident or hazard

Musk Pledged to Cleanse Twitter of Child Abuse Content. It's Been Rough Going.

2023-02-06
San Francisco Gate
Why's our monitor labelling this an incident or hazard?
Twitter employs AI systems, including recommendation algorithms, to curate and suggest content. The article details how these AI-driven features have indirectly facilitated the spread of child sexual abuse material, causing significant harm to victims and communities. The failure to adequately detect and remove this content, partly due to reduced use of detection software and staff cuts, has allowed the harm to persist. This constitutes an AI Incident because the AI system's use and malfunction have directly and indirectly led to violations of human rights and harm to communities through the dissemination of illegal and harmful content.

Musk Pledged to Cleanse Twitter of Child Abuse Content. It's Been Rough Going.

2023-02-06
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Twitter's recommendation algorithm promoting child sexual abuse content and the failure of AI-based detection systems to effectively remove such content. The AI system's outputs have directly contributed to the dissemination and visibility of harmful material, causing significant harm to victims and communities. The involvement of AI in content recommendation and detection, combined with the realized harm of child exploitation material being widely viewed and shared, meets the criteria for an AI Incident under the OECD framework.

Twitter Struggles to Remove Child Porn Despite Elon Musk's Promise to Clean It Up

2023-02-06
Breitbart
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically Twitter's recommendation algorithm, which is designed to suggest accounts and content to users. The algorithm's role in promoting child sexual abuse material directly contributes to harm to communities and violates fundamental rights. The failure to remove such content and the reduction in resources for detection software further exacerbate the issue. Therefore, the AI system's use and malfunction have directly and indirectly led to significant harm, qualifying this as an AI Incident.

The NYT and Canadian experts say Twitter is not doing enough to curb child exploitation

2023-02-06
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves Twitter's AI-powered recommendation algorithm and content moderation systems, which are AI systems by definition as they infer from input data to generate outputs influencing user experience and content visibility. The persistence and promotion of CSAM on Twitter, despite stated efforts to curb it, constitute direct harm to children and communities, fulfilling the criteria for an AI Incident. The AI system's malfunction or inadequate performance in detecting and limiting harmful content, combined with policy changes reducing moderation effectiveness, directly contributes to this harm. Hence, this is classified as an AI Incident.

Musk pledged to cleanse Twitter of child abuse content. It's been rough going

2023-02-06
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event involves Twitter's recommendation algorithm, an AI system that suggests accounts to users based on activity. This AI system indirectly promoted accounts containing child sexual abuse material, which is illegal and harmful, thus causing harm to victims and communities. The failure of Twitter's AI-driven content moderation and recommendation systems to effectively detect and remove such content, along with delayed responses to abuse reports, has resulted in ongoing harm. The AI system's role is pivotal in the dissemination and persistence of this harmful content, meeting the criteria for an AI Incident under the OECD framework.

Elon Musk pledged to cleanse Twitter of child abuse content. It's been rough going

2023-02-07
NZ Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Twitter's recommendation engine, an AI system, suggested content related to child exploitation, thereby promoting harmful material. The AI system's outputs have directly led to the widespread dissemination of child sexual abuse imagery, causing significant harm to victims and communities. The failure to promptly remove such content and the AI's role in recommending it constitute a direct link between the AI system's use and realized harm. This meets the criteria for an AI Incident as defined, involving violations of human rights and harm to communities caused directly by the AI system's operation and malfunction in content moderation.

Twitter struggling to contain child abuse content despite Musk's promises

2023-02-06
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Twitter's algorithms recommending CSAM content to users, which is a direct involvement of an AI system in causing harm. The harm includes violations of human rights and harm to communities through the spread of child abuse material. The AI system's malfunction or misuse (algorithm recommending illegal content) has directly led to this harm. Therefore, this qualifies as an AI Incident under the framework.

Twitter Under Elon Musk Not Doing Enough To Curb Child Abuse Content, Child Sex Abuse Imagery Still Persists: Report

2023-02-07
LatestLY
Why's our monitor labelling this an incident or hazard?
The report explicitly states that Twitter's recommendation algorithm, an AI system, is promoting child sexual abuse imagery, which is a serious violation of human rights and legal protections. The harm is ongoing and significant, with documented instances of CSAM being viewed and shared widely. The AI system's role in amplifying this harmful content directly contributes to the incident. Although Twitter has taken some content down after notifications, the persistence and promotion of such content indicate a failure in the AI system's moderation and content filtering functions, leading to realized harm. Hence, this event meets the criteria for an AI Incident.

Twitter still isn't doing enough to combat CSAM: report

2023-02-06
MobileSyrup
Why's our monitor labelling this an incident or hazard?
The Twitter recommendation algorithm is an AI system that influences what content users see. The report shows that this AI system is promoting CSAM, which is illegal and harmful content causing significant harm to children and communities. The failure to adequately moderate and remove such content, especially after layoffs of the trust and safety team, means the AI system's outputs have directly or indirectly led to violations of human rights and legal obligations. This meets the criteria for an AI Incident as the AI system's use and malfunction have caused realized harm.

New report suggests Twitter is not doing enough to tackle child abuse content

2023-02-07
Android Headlines
Why's our monitor labelling this an incident or hazard?
Twitter's recommendation algorithm, an AI system, has been found to promote CSAM content, which is illegal and harmful. The promotion of such content by the AI system directly contributes to the dissemination and visibility of child abuse imagery, causing harm to communities and violating legal protections. The report indicates that despite efforts to remove such content, the AI system's role in promoting it remains significant. This meets the criteria for an AI Incident as the AI system's use has directly led to harm and legal violations.

Twitter under Musk not doing enough to curb child abuse content: Report

2023-02-07
Techlusive
Why's our monitor labelling this an incident or hazard?
Twitter's recommendation algorithm, which can be reasonably inferred to involve AI systems for content curation and promotion, is reported to promote child sexual abuse imagery. This results in the spread and visibility of harmful content, causing violations of human rights and harm to communities. The AI system's role in promoting such content, despite notifications and moderation efforts, indicates a failure or misuse leading to realized harm. Therefore, this event qualifies as an AI Incident under the framework.