Pornhub Deploys AI Chatbot to Deter Searches for Child Sexual Abuse Material


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pornhub has deployed an AI-powered chatbot to detect and intercept searches for child sexual abuse material (CSAM) on its platform. Triggered by any of 28,000 flagged keywords, the chatbot interrupts the user, explains that the material is illegal, and directs them to support services, with the aim of preventing harm and reducing access to illegal content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event centers on an AI system (the chatbot) designed to detect searches related to CSAM and intervene by offering help. The system is deployed to prevent harm, namely the consumption of illegal child sexual abuse material, by disrupting harmful behaviour. Although no direct harm is reported as occurring, the system is intended to reduce significant harm to individuals and communities. This is a proactive use of AI to prevent harm, not a hazard or unrelated news. Nor is it merely complementary information, because the main focus is the AI system's deployment and its role in preventing a serious social harm. The event therefore qualifies as an AI Incident due to the AI system's direct involvement in addressing and mitigating harm related to child sexual abuse material.[AI generated]
Industries:
Media, social platforms, and marketing

Severity:
AI incident

Business function:
Monitoring and quality control

AI system task:
Interaction support/chatbots; Event/anomaly detection


Articles about this incident or hazard


This Chatbot Aims to Steer People Away From Child Abuse Material

2022-09-28
Wired

This Chatbot Aims to Steer People Away From Child Abuse Material

2022-09-28
WIRED UK
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the chatbot) used to detect and intervene in searches related to CSAM, a serious societal harm. The chatbot's role, however, is preventive and supportive: it aims to reduce harm rather than cause it. There is no report of injury, rights violations, or other harms caused by the chatbot, nor a credible risk that the chatbot itself could cause harm. Instead, the article focuses on the deployment and early usage statistics of the chatbot as a tool to combat CSAM. This fits the definition of Complementary Information, as it details a governance and societal response involving AI and enhances understanding of AI's role in harm prevention.

Pornhub is now using AI to persuade people not to search for illegal content - SiliconANGLE

2022-09-29
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The AI system (the chatbot) is explicitly described as detecting searches for illegal content (CSAM) and intervening by persuading users not to continue. This use of AI directly addresses a serious societal harm: it aims to protect children and communities by disrupting access to illegal and harmful content. The harm is realised in the sense that the chatbot responds to actual searches for illegal content, and the intervention aims to reduce ongoing harm. The event therefore qualifies as an AI Incident rather than a hazard or complementary information.

PornHub deploys 'groundbreaking' bot to fight child abuse - Digital TV Europe

2022-09-29
Digital TV Europe
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system used in the platform's operation to detect and deter activity related to child sexual abuse material. Its deployment directly aims to reduce harm to children by preventing access to abusive content and supporting prevention efforts. Because the system is actively deployed and its use is directly linked to preventing harm, the event qualifies as an AI Incident involving the use of AI to mitigate harm related to child abuse material.

Pornhub partners with child abuse charities to intercept illegal activity - Pehal News

2022-09-28
Pehal News
Why's our monitor labelling this an incident or hazard?
An AI system (the chatbot) is explicitly described as intercepting illegal activity related to child sexual abuse on a major adult website. Its use is directly linked to preventing harm to children: it deters access to abusive content and encourages users to seek help, addressing both potential and ongoing harm. Because the chatbot is actively deployed and influences user behaviour to prevent illegal activity, the event constitutes an AI Incident involving the prevention of harm related to child abuse, a serious violation of human rights and of the protection of vulnerable groups. It is an active intervention addressing ongoing harm, not merely a potential risk, so it qualifies as an AI Incident rather than a hazard or complementary information.