Search Engines Promote Non-Consensual AI Deepfake Porn

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An NBC News investigation revealed that Google's and Bing's AI-powered search algorithms frequently promote non-consensual deepfake pornography featuring female celebrities at the top of image and web results. The deepfakes are created with generative AI that swaps real people's faces onto the bodies of porn performers; by ranking them prominently, the search engines amplify privacy violations and ongoing harm while failing to filter out the illicit content.[AI generated]
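As a rough illustration of the kind of safeguard the investigation found lacking, an image search pipeline can screen candidate results against a blocklist of known non-consensual imagery before ranking them. The Python sketch below is hypothetical: the function names and blocklist are invented for illustration, and it does not describe Google's or Bing's actual systems, which would rely on perceptual hashing robust to re-encoding rather than the exact hashing used here for simplicity.

    # Hypothetical sketch: screening image results against a blocklist
    # of known abusive images before ranking. Exact SHA-256 hashing is a
    # simplification; production systems use perceptual hashes that
    # survive resizing and re-encoding.
    import hashlib

    # Populated from verified takedown reports (illustrative; empty here).
    KNOWN_ABUSE_HASHES: set = set()

    def is_blocked(image_bytes: bytes) -> bool:
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in KNOWN_ABUSE_HASHES

    def filter_results(results: list) -> list:
        # Drop any candidate whose image matches the blocklist before
        # it can be ranked or displayed.
        return [r for r in results if not is_blocked(r["image_bytes"])]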

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems in the generation and dissemination of nonconsensual deepfake pornography, which is a clear violation of rights and causes harm to individuals and communities. The search engines' AI-powered ranking algorithms are directly surfacing this harmful content, and generative AI tools are enabling its creation. The harm is realized, not just potential, as the content is actively accessible and ranked highly. Microsoft's AI assistant refuses to generate such content but still links to it, showing partial but insufficient mitigation. The event meets the criteria for an AI Incident because the AI systems' use has directly and indirectly led to significant harm, including violations of rights and harm to communities.[AI generated]
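The monitor's recurring rationale reduces to a simple decision rule: if an AI system is materially involved and the harm is realized rather than merely potential, the event is labelled an AI incident; AI involvement with only potential harm would make it a hazard. A minimal Python sketch of that rule, with field names invented for illustration rather than taken from the monitor's actual schema:

    # Minimal sketch of the incident-vs-hazard rule the rationales apply.
    # Field names are illustrative, not the monitor's real schema.
    from dataclasses import dataclass

    @dataclass
    class Event:
        ai_system_involved: bool  # AI generated or disseminated the content
        harm_realized: bool       # harm has occurred, not merely possible

    def classify(event: Event) -> str:
        if not event.ai_system_involved:
            return "not AI-related"
        if event.harm_realized:
            return "AI incident"  # e.g., deepfakes already ranked and accessible
        return "AI hazard"        # harm plausible but not yet realized

    # This case: AI created the content, AI ranking surfaced it, and the
    # harm is ongoing, so it classifies as an incident.
    print(classify(Event(ai_system_involved=True, harm_realized=True)))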
AI principles
Privacy & data governance; Respect of human rights; Accountability; Safety; Robustness & digital security; Transparency & explainability; Fairness; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function
Monitoring and quality control; Compliance and justice

AI system task
Organisation/recommenders; Content generation


Articles about this incident or hazard

Google and Bing under fire for promoting nonconsensual deepfake porn, as AI continues to brew more trouble

2024-01-12
Windows Central
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the generation and dissemination of nonconsensual deepfake pornography, which is a clear violation of rights and causes harm to individuals and communities. The search engines' AI-powered ranking algorithms are directly surfacing this harmful content, and generative AI tools are enabling its creation. The harm is realized, not just potential, as the content is actively accessible and ranked highly. Microsoft's AI assistant refuses to generate such content but still links to it, showing partial but insufficient mitigation. The event meets the criteria for an AI Incident because the AI systems' use has directly and indirectly led to significant harm, including violations of rights and harm to communities.

Report: Deepfake porn consistently found atop Google, Bing search results

2024-01-11
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—generative AI used to create deepfake pornographic content—and the use of search engine AI algorithms that rank and display this content. The harm is direct and realized: nonconsensual deepfake pornography causes psychological and reputational harm to victims, violating their rights and dignity. The presence of this content at the top of search results facilitates its dissemination and harm. The article also notes ongoing efforts by Google to mitigate this harm, but the problem persists. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities.

Google, Bing Under Fire For Serving AI Deepfake Porn In Some Search Results

2024-01-12
MediaPost
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake pornography being served in search results by major search engines, causing harm by violating the rights of the individuals depicted without their consent. The AI system's use in generating and distributing this harmful content directly leads to violations of human rights. The harm is realized and ongoing, not merely potential, which fits the definition of an AI Incident rather than a hazard or complementary information.

Google and Bing Put AI Deepfake Porn at Top of Some Search Results

2024-01-12
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic content without consent, which is a clear violation of human rights and privacy, constituting harm to individuals. The AI-generated content is actively disseminated and surfaced by search engines, directly leading to harm. The presence of AI tools enabling the creation of such content and the search engines' role in promoting it make the AI system's involvement pivotal. The harm is realized and ongoing, not merely potential, thus classifying this as an AI Incident rather than a hazard or complementary information.

Bing and Google criticized for showing AI deepfake porn prominently in search

2024-01-12
Neowin
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake pornography by automatically swapping faces. The search engines' AI-driven ranking algorithms prominently display this harmful content, leading to violations of rights and harm to individuals depicted without their consent. The harm is realized and ongoing, as the content remains accessible and prominently shown. This therefore qualifies as an AI Incident: harm is caused directly by AI-generated content and by the AI-driven search ranking that promotes it.

Deepfake Porn Rise to Top of Search Results from Google, Bing -- Here's How to Submit a Report

2024-01-12
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves generative AI systems used to create nonconsensual deepfake pornographic content that harms the individuals depicted through sexual exploitation and violations of privacy and rights. The dissemination of this content has directly harmed persons and communities, so the event qualifies as an AI Incident under the framework. The article does not merely discuss potential future harm or responses but documents ongoing harm caused by AI-generated content.

Google and Bing put nonconsensual deepfake porn at the top of some search results

2024-01-12
NBC Southern California
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate deepfake pornographic content without consent, which constitutes a violation of rights and harm to individuals and communities. The search engines' AI ranking algorithms further exacerbate the harm by prominently displaying this content, increasing its visibility and impact. Since the harm is occurring and directly linked to the AI systems' use and outputs, this qualifies as an AI Incident under the framework definitions.

Non-consensual deepfake porn infects search engines like Google, Bing: new investigation

2024-01-12
MSPoweruser
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic images without consent, which is a clear violation of human rights and causes harm to individuals and communities. The AI-generated content is actively accessible on major search engines, indicating realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to rights and communities. The mention of legislative efforts is complementary but does not change the primary classification.

Google and Bing show 'deepfakes' porn in their searches

2024-01-12
Bullfrag
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake content that is non-consensual and pornographic, causing harm to the privacy and dignity of individuals, primarily women celebrities. The AI systems used to create these deepfakes are central to the harm. Additionally, the search engines' AI-driven ranking and recommendation systems prioritize this harmful content, further amplifying the damage. The harm is ongoing and realized, including violations of human rights and harm to communities. The presence of AI in both content creation and content dissemination meets the criteria for an AI Incident rather than a hazard or complementary information.

Google, Bing reportedly shows non-consensual deepfake porn at top of search results

2024-01-19
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated deepfake pornography, which is a clear violation of individuals' rights and privacy, causing harm to the affected persons. The AI systems involved include deepfake generation tools and the AI-driven search engine ranking algorithms that prominently display such harmful content. The harm is realized as the content is actively accessible and promoted, leading to direct violations of human rights. The search engines' failure to adequately monitor and prevent this misuse further implicates the AI systems in causing harm. Hence, this is an AI Incident as per the definitions provided.

Google, Bing Search Shows Non-Consensual 'Deepfake Porn' At Top Of Search Results, Says Report

2024-01-18
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a direct violation of individuals' rights and privacy, constituting harm under the framework. The search engines' AI ranking systems contribute indirectly by prominently displaying this harmful content, increasing its visibility and impact. The harm is realized and ongoing, as the content is currently accessible and causing distress. Therefore, this qualifies as an AI Incident due to the direct and indirect role of AI systems in causing harm to individuals through non-consensual deepfake content.

Non-consensual deepfake porn surfaces at the top of Google and Bing search results | Report

2024-01-18
India TV News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a clear use of AI systems for creating manipulated media. The search engines' AI ranking algorithms are directly responsible for surfacing this harmful content prominently, thereby facilitating its dissemination and causing harm to the individuals depicted without consent. The harm is realized and significant, involving violations of rights and harm to communities (victims and society). The report also highlights insufficient mitigation efforts by the platforms, reinforcing the incident nature. Hence, the event meets the criteria for an AI Incident as the AI systems' development, use, and malfunction (inadequate content moderation) have directly and indirectly led to harm.

Deepfakes: Google and Bing News Show 'Non-Consensual' Deepfake Porn at Top of Search Results, Says Report

2024-01-19
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake pornographic content that uses individuals' likenesses without consent, which is a clear violation of human rights and causes harm to the persons depicted. The AI systems involved include deepfake generation tools and search engine ranking algorithms that promote such content. The harm is realized and ongoing, as the content is actively shown at the top of search results, facilitating further dissemination and harm. This meets the criteria for an AI Incident due to direct harm to individuals' rights and communities caused by AI system use and dissemination.

Google, Bing shows non-consensual deepfake porn at top of search results: Report

2024-01-18
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake pornography, which is non-consensual and harmful. The search engines' algorithms are facilitating the dissemination of this harmful AI-generated content by ranking it highly in search results. This leads to violations of human rights and harm to individuals and communities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's development and use are central to the incident.