AI-generated Taylor Swift deepfake porn prompts platform bans and US legislation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A surge of AI-generated non-consensual pornographic deepfakes of Taylor Swift spread on X, Reddit, Meta and Telegram. X blocked searches and removed posts, while the White House urged Congress to pass a bill. Senators led by Dick Durbin introduced the DEFIANCE Act, which would allow victims to sue creators.[AI generated]

Why's our monitor labelling this an incident or hazard?

AI-generated pornographic images of Taylor Swift have been widely spread on X, which constitutes harm to the individual and community through misinformation and non-consensual explicit content. The AI system's use in generating this content directly led to this harm. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content dissemination. The platform's monitoring and removal efforts are responses to this incident but do not change the classification.[AI generated]
AI principles
Respect of human rights, Privacy & data governance, Safety, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Human or fundamental rights, Reputational, Psychological

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

X (Twitter) has restored search for Taylor Swift; all AI-generated porn photos will be deleted

2024-01-30
LIGA
AI-generated nude photos of Taylor Swift went viral; X blocked searches for the star

2024-01-29
espreso.tv
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated nude images of Taylor Swift being spread on a social media platform, leading to significant harm including violation of privacy and rights. The AI system's use in creating these images directly led to the harm. The platform's response to block searches and remove content confirms the recognition of harm. This fits the definition of an AI Incident as the AI system's use has directly led to violations of rights and harm to the individual and community. The harm is realized, not just potential, so it is not an AI Hazard or Complementary Information.
Social network X banned searches for "Taylor Swift" after the scandal over her fake photos

2024-01-30
InternetUA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate fake pornographic images, which constitutes a violation of personal rights and causes harm to the individual and community. The AI-generated content's active dissemination led to the platform restricting search functionality to mitigate harm. This is a direct harm caused by the use of AI, fitting the definition of an AI Incident due to violation of rights and harm to communities.
Social network X blocked search queries about Taylor Swift over the spread of deepfakes of her

2024-01-29
ms.detector.media
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate deepfake pornographic content, which has been widely spread, causing harm to the individual depicted and violating rights. The harm is realized, not just potential, as the content was viewed millions of times and led to platform interventions. The AI system's use directly led to violations of rights and harm to the community. The social media platform's blocking of search queries is a response to this harm. Hence, this is an AI Incident rather than a hazard or complementary information.
US senators introduced a bill against deepfake porn in response to the Taylor Swift scandal

2024-02-01
www.BIN.com.ua Business Information Network
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as it concerns AI-generated deepfake images. The use of AI to create non-consensual explicit content has directly led to harm to the individual depicted (Taylor Swift) and potentially to communities by enabling harassment and reputational damage. This constitutes a violation of rights and harm to communities. The article describes realized harm from the AI system's use, not just potential harm. Therefore, this qualifies as an AI Incident. The legislative response is complementary information but the primary focus is on the incident of AI-generated harmful content and its consequences.
Deepfake porn of Taylor Swift has flooded the internet. X and Meta are trying to fight it, but in vain

2024-01-29
techno.nv.ua
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating non-consensual deepfake pornographic content, which is a clear violation of individual rights and causes harm to the person depicted and potentially to communities. The harm is realized and ongoing, as the content is actively spreading on major platforms. The involvement of AI in generating the content is explicit, and the harm includes violation of rights and reputational damage. Therefore, this meets the criteria for an AI Incident rather than a hazard or complementary information.
What a gap: a loophole in Microsoft's AI allowed the creation of fake porn content featuring Taylor Swift; it has been closed

2024-01-30
techno.nv.ua
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Microsoft's Designer) used to generate harmful deepfake pornographic images of a celebrity, which constitutes a violation of personal rights and causes harm to the individual and community. The misuse of the AI system has directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. Microsoft's ongoing efforts to close loopholes and investigate are complementary but do not negate the realized harm already caused.
Has the madness stopped? Twitter lifted the restrictions imposed over the boom in Taylor Swift deepfake porn

2024-01-31
techno.nv.ua
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornographic images causing harm by spreading false and harmful content about a public figure, which is a violation of rights and harms communities. The AI system (Microsoft Designer) was used to create the harmful content, and the platform's actions to restrict search and remove content confirm the harm occurred. This fits the definition of an AI Incident as the AI system's use directly led to harm (violation of rights and reputational damage).
The specific reason why the X platform banned searches for Taylor Swift revealed

2024-01-30
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating fake pornographic images of a public figure has directly caused harm by spreading non-consensual, harmful content. The platform's action to block searches is a response to this realized harm. The incident involves AI-generated content causing reputational and emotional harm, which fits the definition of harm to communities and individuals under AI Incident criteria.
Taylor Swift deepfake photos circulate online; US lawmakers urge legislative regulation

2024-01-27
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create harmful content (explicit images) that have been widely disseminated, causing harm to the individual and potentially to communities (harm to rights and dignity). This meets the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm. The legislative response is complementary information but the main event is the harm caused by the AI-generated content.
Taylor Swift "indecent photos" alarm the White House! A bill to review and restrain AI is planned

2024-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article describes an AI Incident because AI-generated deepfake images of Taylor Swift have been widely disseminated, causing reputational harm and harassment, which constitutes harm to individuals and communities. Additionally, AI-generated deepfake voice calls aimed at influencing voters represent misuse of AI with direct harm to democratic processes. The involvement of AI in generating harmful content and the resulting real-world consequences meet the criteria for an AI Incident. The legislative push to regulate AI use is a response to this incident, not the primary event itself.
Even superstars cannot escape deepfake technology! Women become targets of AI pornography; Taylor Swift falls victim and the White House takes notice

2024-01-30
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create and spread fake explicit images of Taylor Swift and other women, causing harm through privacy violations and online harassment. The harm is direct and realized, as the images have been widely circulated and have drawn official attention, including from the White House. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article also discusses societal responses and legal challenges, but the primary focus is on the incident itself.
Fake indecent photos of Taylor Swift go viral online; White House flags deepfake chaos and urges legislation

2024-01-27
明報新聞網 - 即時新聞 instant news
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated images. The creation and dissemination of non-consensual explicit deepfake images constitute a violation of personal rights and can be considered harm to the individual (a form of harm to persons and violation of rights). The event reports that these images have been widely viewed and circulated, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Taylor Swift deepfake photos go viral; White House urges a crackdown

2024-01-27
明報新聞網 - 即時新聞 instant news
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI deepfake technology generating non-consensual explicit images that have been widely spread online, causing harm to Taylor Swift and raising societal concerns. The harm includes violation of privacy, emotional distress, and reputational damage, which fall under violations of human rights and harm to communities. The AI system's use directly led to these harms. The White House's response and legislative discussions confirm the seriousness and realized nature of the harm. Hence, this qualifies as an AI Incident.
Fake photo scandal | Taylor Swift deepfake photos go viral; White House urges legislative regulation

2024-01-30
EJ Tech
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used maliciously to create and distribute harmful content. The harm includes violation of personal rights and reputational damage, which falls under violations of human rights or breach of applicable laws protecting fundamental rights. The widespread dissemination of these deepfake images constitutes realized harm. Therefore, this qualifies as an AI Incident. The legislative and platform responses are complementary information but do not change the primary classification.
X takes action against fake nude images of Taylor Swift

2024-01-29
Abendzeitung München
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the pornographic images of Taylor Swift were AI-generated deepfakes, which were widely shared on social media, causing harm to the individual concerned. The platform's intervention to remove the images and restrict search indicates recognition of the harm caused. The use of AI to create and spread non-consensual explicit content is a violation of rights and harms the individual, fitting the definition of an AI Incident. The involvement of AI in the creation and dissemination of harmful content is direct and leads to realized harm.
AI-generated nude photos of Taylor Swift spark outrage; online service X responds

2024-01-30
Donaukurier
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems that manipulate or generate realistic images. The harm is realized as the images were publicly shared and viewed millions of times, constituting a violation of rights and harm to the individual and community. The platform's delayed removal and the political outcry further confirm the incident's significance. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Deepfakes on the online service "X": searching for Taylor Swift nude images no longer possible

2024-01-29
Zweites Deutsches Fernsehen
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are created using AI systems capable of generating realistic but fake content. The harm here is reputational and psychological harm to Taylor Swift and potential misinformation to the public, which falls under harm to communities or violation of rights. Since the AI-generated content was actively disseminated and viewed, the harm is realized, not just potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in creating harmful content that has been distributed and caused harm.
X takes action against fake nude images of Taylor Swift

2024-01-29
inFranken.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the images are AI-generated deepfakes, which are a product of AI systems capable of manipulating digital media. The spread of these fake pornographic images constitutes a violation of personal rights and causes harm to the individual depicted, fulfilling the criteria for harm to persons and communities. The platform's actions to remove the content and restrict search indicate recognition of the harm caused. Hence, the event is an AI Incident as the AI system's use directly led to harm.
Deepfakes: X takes action against fake nude images of Taylor Swift

2024-01-29
stuttgarter-nachrichten.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the images are AI-generated deepfakes, which have been widely viewed and spread on social media, causing reputational and emotional harm to Taylor Swift. This fits the definition of an AI Incident as the AI system's use (generation of deepfake content) has directly led to harm to a person and communities. The platform's partial mitigation and the public and governmental concern further support the classification as an AI Incident rather than a hazard or complementary information.
Taylor Swift searches temporarily restricted on X

2024-01-29
Radio Hamburg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images of Taylor Swift, which are manipulated digital media causing reputational and privacy harm. The spread of these images on a social media platform has led to direct harm to the individual and potentially to the community by disseminating false and harmful content. The platform's temporary restriction of search and removal of images is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community).