AI-Generated Taylor Swift Deepfake Porn Sparks Social Media Outcry

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A wave of nonconsensual, AI-generated pornographic deepfake images of Taylor Swift circulated on X, Facebook, and Instagram, forcing platform moderators to suspend accounts even as trolls reuploaded the content. Fans rallied under #ProtectTaylorSwift to bury the images, while lawmakers demanded tougher AI regulation to curb such abuse. [AI generated]

Why's our monitor labelling this an incident or hazard?

The images were explicitly created using AI, and their dissemination harmed the individual's reputation and dignity, which the framework treats as harm to communities and a violation of rights. The AI system's use in generating these fake images directly led to this harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated content. [AI generated]
AI principles
Respect of human rights, Privacy & data governance, Accountability, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Human or fundamental rights, Reputational, Psychological

Severity
AI incident

AI system task
Content generation

In other databases


Articles about this incident or hazard

Fake images of Taylor Swift spark outrage in the US

2024-01-27
Diario El Heraldo
Why's our monitor labelling this an incident or hazard?
The images were explicitly created using AI, and their dissemination caused harm to the individual's reputation and dignity, which can be considered harm to communities and a violation of rights. The AI system's use in generating these fake images directly led to this harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated content.
Taylor Swift: Swifties call for AI regulation after fake images of the singer spread

2024-01-25
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that have been disseminated online, causing harm to the individual's reputation and personal integrity, which constitutes a violation of rights. The harm is realized and ongoing, as the images are circulating and causing distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.
Actors' union condemns AI-generated sexual images of Taylor Swift | Minuto30

2024-01-27
Minuto30.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake sexual images, which directly harm the individual's privacy and dignity, constituting a violation of rights under the framework. The dissemination of these images caused real harm, as evidenced by public concern and platform actions (removal and account suspensions). Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's misuse.
Taylor Swift seeks to sue adult website over fake AI-generated images

2024-01-26
AS
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating deepfake images, which are false and explicit, causing harm to Taylor Swift by violating her rights and exposing her to abusive content without consent. This constitutes a violation of human rights and personal rights, fitting the definition of an AI Incident. The article describes the harm as already occurring and the victim seeking legal recourse, confirming the realized harm rather than a potential one. Therefore, this is classified as an AI Incident.
Fake explicit images of Taylor Swift generated with AI circulate online

2024-01-26
Quién
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated explicit deepfake images of a public figure without consent. This constitutes a violation of personal rights and can be considered a form of harm to the individual and the community. The AI system's use in generating these images directly led to the harm through misinformation and non-consensual exploitation. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
Explicit AI-generated images of Taylor Swift spread on social media

2024-01-26
CNN Español
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate pornographic deepfake images of Taylor Swift, which were widely shared and viewed millions of times, causing harm by violating her rights and potentially damaging her reputation and privacy. The AI system's use directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The failure of social media platforms to effectively moderate and remove this content further contributed to the harm. Although the article also discusses broader governance and societal responses, the primary focus is on the realized harm caused by the AI-generated images, not just potential or future harm or complementary information.
Taylor Swift is furious: why might the singer take legal action, according to reports?

2024-01-26
MARCA
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating explicit fake images of a real person without consent directly leads to harm, specifically violations of personal rights and potentially intellectual property rights. The content is abusive and offensive, impacting the individual's dignity and privacy, which fits the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The event involves the use of AI systems to create harmful content that has been published and spread, causing realized harm. Therefore, this is classified as an AI Incident.
Several social networks block Taylor Swift searches after the latest AI controversy

2024-01-29
MARCA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated images (deepfakes) of a public figure, which have been widely disseminated, causing reputational and personal harm. The AI system's use directly led to this harm. The social media platforms' blocking and removal efforts confirm the harm is ongoing and significant. This fits the definition of an AI Incident as the AI system's use has directly led to harm to individuals and communities through misinformation and violation of rights.
The internet fills with Taylor Swift deepfake porn, showing the danger AI poses to women

2024-01-26
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate deepfake pornographic images, which have been widely disseminated online, causing harm to the individuals depicted (violation of rights) and to communities (harm through non-consensual sexual content). The AI system's use directly led to this harm. The event involves the use and misuse of AI-generated content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.
X blocked Taylor Swift searches after the leak of explicit AI-created photos of the singer

2024-01-28
infobae
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been widely shared, causing harm to the individual depicted (Taylor Swift) through non-consensual explicit content, which is a violation of rights and harmful to communities. The AI system's use in generating these images is central to the incident. The harm is realized, not just potential, as the images were viewed millions of times before removal. The platform's moderation challenges and the spread of such content confirm the direct link between AI use and harm. Hence, this is classified as an AI Incident.
Taylor Swift's AI-made nudes: Microsoft CEO says "we must act now" on deepfakes

2024-01-27
infobae
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated deepfake images that are sexual and non-consensual, which constitutes a violation of rights and harm to the individual and community. The AI system's use (Microsoft's Designer tool) is directly linked to the harm caused by these images. The harm is realized, not just potential, as the images have been widely viewed and reported. Therefore, this qualifies as an AI Incident under the framework, as it involves harm to rights and communities caused by AI-generated content.
Pornographic images make Taylor Swift the latest victim of artificial intelligence

2024-01-25
infobae
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating manipulated explicit images (deepfakes) of a real person without consent, which is a direct violation of privacy and personal rights. The harm is realized as the images have been widely viewed and shared, causing reputational and emotional damage. The AI system's use in creating these images is central to the incident. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content violating fundamental rights and privacy.
Actors' union condemns AI-generated sexual images of Taylor Swift

2024-01-27
Listin diario
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake sexual images of a public figure without consent, which were then distributed widely, causing harm to her privacy and dignity. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The involvement of AI in creating harmful content and the resulting real harm to the person affected justifies classification as an AI Incident rather than a hazard or complementary information.
Social networks flooded with intimate photos of Taylor Swift created with artificial intelligence

2024-01-26
Listin diario
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated intimate images of Taylor Swift being spread on social media, which is a clear case of AI misuse causing harm to the individual's rights and potentially to communities by spreading false and harmful content. The involvement of AI in creating these images is explicit, and the harm (violation of rights and reputational damage) is realized. Therefore, this qualifies as an AI Incident under the framework.
X (formerly Twitter) blocks Taylor Swift searches over the spread of AI-generated porn images

2024-01-29
EL UNIVERSO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI tool) used to create deepfake images that have been widely disseminated, causing harm to the individual depicted (Taylor Swift) and raising concerns about non-consensual explicit content. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images were viewed millions of times and caused public outrage. The platform's actions to remove content and block searches are responses to this incident but do not negate the classification as an AI Incident.
Taylor Swift blocked on social media over the spread of deepfake porn photos using her face

2024-01-29
La Nacion
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems (e.g., Stable Diffusion, Midjourney, DALL-E) used to create deepfake images that sexually exploit and harm Taylor Swift without consent. The widespread sharing of these images on social media platforms constitutes a violation of rights and causes harm to the individual and communities. The harm is realized and ongoing, meeting the criteria for an AI Incident. The article also discusses platform responses and legislative efforts, but the primary focus is on the harm caused by the AI-generated content, not just the responses, so it is not merely Complementary Information.
Taylor Swift cannot be found on X amid the trouble over AI-generated photos

2024-01-27
TMZ
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating explicit images without consent, which is a direct violation of personal rights and can cause harm to the individual and community. The AI-generated content has already been disseminated widely, causing harm, not just a potential risk. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the use of AI to create and spread non-consensual pornographic images, impacting the subject's rights and well-being.
Outrage in the US over fake pornographic images of Taylor Swift

2024-01-26
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create false pornographic images (deepfakes) of Taylor Swift, which were widely disseminated and viewed millions of times. This constitutes a violation of rights and harm to the individual and community, fulfilling the criteria for an AI Incident. The involvement of AI is clear, the harm is realized (not just potential), and the event involves the use of AI systems leading to harm. Hence, it is classified as an AI Incident.
Taylor Swift AI porn forces X to finally moderate its content

2024-01-29
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating explicit images (AI-generated pornographic content) without consent, which constitutes a violation of rights (privacy and possibly intellectual property). The widespread sharing of this content on X caused harm to the individual and the community, triggering platform moderation and public outcry. The AI system's use directly led to harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. Therefore, this is classified as an AI Incident.
Taylor Swift's AI porn is driving fans ballistic

2024-01-25
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are AI-generated synthetic media. The harm is realized as these images are non-consensual and pornographic, violating the individual's rights and causing harm to her reputation and privacy. The widespread circulation and the platform's slow response exacerbate the harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the individual and community.
The White House: Taylor Swift AI porn is "alarming"

2024-01-27
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated pornographic images of a real person without consent, which constitutes a violation of rights and harm to individuals and communities. The AI system's use in creating and disseminating these images has directly led to harm. The White House's involvement and statements confirm the seriousness of the issue. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's use.
X (Twitter) suspends Taylor Swift searches after deepfake, and this is how her fans reacted

2024-01-27
Mag.
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated explicit deepfake images constitute a violation of privacy and can be considered harm to the individual and their community, fitting the definition of an AI Incident. The AI system's use directly led to the harm through malicious content generation and distribution. The platform's actions to suspend search terms and remove content are responses to this incident. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content.
Taylor Swift's AI porn is driving fans ballistic

2024-01-25
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI text-to-image generation systems. The images are non-consensual and pornographic, constituting a violation of rights and causing harm to the individual and community. The widespread dissemination and the platform's delayed removal of the content demonstrate that harm has occurred. The AI system's use in generating and spreading abusive content directly led to harm, fulfilling the criteria for an AI Incident under the OECD framework.
Pop queen Taylor Swift, a victim of AI

2024-01-29
RFI
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images, which are explicitly mentioned. The harm is realized as the images caused indignation and harm to Taylor Swift, a person, fulfilling the criterion of injury or harm to a person. The event also discusses the platform's response and potential legislative changes, but the primary focus is on the harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated deepfake content.
Fake sexual photos of Taylor Swift flood X, and Elon Musk is unable to control it

2024-01-26
El Español
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely generative AI used to create deepfake images. The use of these AI-generated images caused harm by violating the privacy and rights of Taylor Swift, constituting a breach of fundamental rights. The harm is realized as the images were widely viewed and shared before removal. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content and its dissemination on a social media platform.
Taylor Swift and the explicit images stirring controversy on social media

2024-01-25
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate pornographic images of Taylor Swift without her consent, which is abusive and exploitative. This directly violates her rights and causes harm to her reputation and privacy. The involvement of AI in creating and disseminating these images meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use. The legal considerations and public outrage further support the classification as an incident rather than a hazard or complementary information.
Taylor Swift vs AI: Twitter answers for the fake nudes it failed to stop in time on X

2024-01-26
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake nude images, which are non-consensual and harmful to the individual depicted, constituting a violation of rights (privacy and possibly intellectual property). The AI system's use directly led to the harm by creating and enabling the spread of these images. The platform's delayed response further contributed to the harm. The mention of potential legal action underscores the seriousness of the incident. Hence, this is an AI Incident as the harm has materialized and is directly linked to the AI system's use.
Why can't you search for Taylor Swift on X? AI-generated images appear to be the reason

2024-01-29
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate false sexual videos of Taylor Swift, which have been disseminated on social media, causing harm to her reputation and emotional distress. The platform's response to block searches indicates recognition of the harm caused. The AI system's use directly led to violations of rights and harm to the individual, fitting the definition of an AI Incident. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in creating the false content.
X suspends Taylor Swift searches over pornographic images

2024-01-29
Euronews Español
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images that are sexually explicit and abusive, directly causing harm to Taylor Swift by violating her rights and dignity. The dissemination of such AI-generated content constitutes a clear harm to the individual and community, fulfilling the criteria for an AI Incident. The platform's blocking of searches is a response to this harm but does not negate the incident itself. Therefore, this event is classified as an AI Incident.
"Explicit" AI-generated photos of Taylor Swift circulate; Swifties call for this to protect the singer

2024-01-26
El Universal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated explicit images of Taylor Swift, which are being shared and causing reputational and personal harm. The AI system's use directly leads to a violation of rights and harm to the individual and community. The article confirms the harm is occurring (not just potential), with social media actions taken to suspend accounts but the problem persists. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to community).
Fans and even politicians outraged by fake AI-generated images of Taylor Swift

2024-01-27
El Universal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems creating deepfake images that are pornographic and non-consensual, which constitutes a violation of rights and harm to the individual and communities. The harm is realized as the images were widely viewed and caused public outrage and concern. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Microsoft and X take action after Taylor Swift deepfakes

2024-01-29
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI (Microsoft's Designer) was used to create sexually explicit deepfake images of Taylor Swift, which were then widely shared, causing harm to the individual and potentially to the community by spreading misleading and harmful content. This meets the criteria for an AI Incident as the AI system's use directly led to violations of rights and harm. The measures taken by Microsoft and X are responses to an ongoing incident rather than preventive or speculative, confirming that harm has occurred.
Actors' union rejects the creation of sexual images of Taylor Swift using artificial intelligence

2024-01-27
El Universal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated sexual images without consent, which directly harms the individual's privacy and dignity, constituting a violation of rights under the framework. The dissemination of these images and the resulting distress and legal considerations confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating and spreading harmful content.
Outrage in the US over the release of fake pornographic images of Taylor Swift

2024-01-28
El Tiempo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative technology was used to create false pornographic images of Taylor Swift, which were widely circulated and viewed millions of times. This constitutes a violation of rights (privacy and consent) and causes harm to the community by spreading harmful and toxic content. The AI system's use directly led to these harms, qualifying this as an AI Incident under the framework.
Swifties' fury over the explicit AI-created images of Taylor Swift

2024-01-26
Mag.
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated explicit deepfake images of a real person without consent. This constitutes a violation of personal rights and can be considered harm to the individual and the community. The AI system's use in generating these images is central to the harm caused. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
X, Instagram, and Threads block Taylor Swift searches

2024-01-29
El Tiempo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to create deepfake images of Taylor Swift without her consent, which were widely disseminated on social media platforms. This dissemination caused harm to the individual (privacy violation and reputational harm) and to the community by spreading false and manipulated content. The platforms' response to block searches and remove content confirms the recognition of harm. The AI system's use directly led to these harms, meeting the criteria for an AI Incident under violations of rights and harm to communities. The event is not merely a potential risk or complementary information but a realized harm caused by AI-generated content.
X (Twitter) blocks Taylor Swift searches to stop the spread of AI-made pornographic images of her

2024-01-29
Vandal
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI and deepfake technology to create and spread pornographic images of a public figure without consent, which is a violation of human rights and causes harm to the individual's reputation and dignity. The dissemination of these images on Twitter (X) and other platforms has led to significant harm, prompting institutional responses including from the White House and industry unions. The AI system's role in generating these images is central to the harm caused, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The blocking of searches is a response but does not negate the incident itself.
Taylor Swift, victim of an AI-generated pornographic deepfake, considers suing X (Twitter) for allowing it

2024-01-26
Vandal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake pornographic images, which directly harms the individual depicted by violating her rights and causing reputational and emotional damage. The content was distributed on social media platforms, causing further harm. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident. The article also discusses potential legal actions and the lack of legislation, but the primary focus is on the realized harm from the AI-generated content.
X has filled with AI-generated porn images of Taylor Swift. The only solution: an information blackout

2024-01-29
Xataka
Why's our monitor labelling this an incident or hazard?
The article describes explicit AI-generated deepfake images of a public figure being spread online, which constitutes a violation of rights and harm to the individual and community. The AI system (generative AI models) was used to create these images, and their dissemination has caused realized harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use. The article also mentions societal and platform responses, but the primary focus is on the harm caused by the AI-generated content.
Taylor Swift lashes out at AI over explicit images that have sparked controversy on social media

2024-01-26
okdiario.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate explicit deepfake images of a public figure, which constitutes a violation of rights and causes harm to the individual and communities by spreading offensive and sexualized content. The article mentions ongoing harm through dissemination and public outrage, indicating realized harm rather than just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.
Taylor Swift: the curious reason X (Twitter) has blocked searches for her

2024-01-29
okdiario.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that are false and harmful, causing significant distress and leading to platform intervention to block searches and remove content. This constitutes a violation of rights (non-consensual use of likeness) and harm to communities (spread of disturbing misinformation). The AI system's use in generating and disseminating these images directly led to these harms, qualifying this as an AI Incident under the framework.
Taylor Swift, victim of AI nudes: can those who create them be charged with digital abuse and violence?

2024-01-26
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate deepfake images without consent, causing direct harm to individuals through violations of privacy, autonomy, and emotional distress. The dissemination of these AI-generated fake images is ongoing and has led to real harm, including reputational damage and potential psychological effects. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of human rights and harm to individuals. The discussion of legal and regulatory responses further supports the recognition of this as a realized harm rather than a potential one.
Taylor Swift vs. AI: she is weighing legal action over fake explicit images

2024-01-26
20 minutos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are false and sexually explicit, causing harm to Taylor Swift's rights and dignity. The harm is realized as the images have gone viral and caused distress, meeting the criteria for an AI Incident under violations of human rights and harm to communities. The article also discusses potential legal actions and legislative responses, but the primary focus is on the existing harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Taylor Swift: searches for the singer's name fail on X

2024-01-28
Milenio.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems used maliciously to create non-consensual explicit content. This has caused direct harm to Taylor Swift's rights and reputation, as well as broader harm to online communities through the spread of toxic content. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and violations of rights and harm to communities.

SAG-AFTRA Backs Taylor Swift After AI-Created Images

2024-01-27
Milenio.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated explicit images (deepfakes) of a public figure without consent, which is a direct violation of privacy and rights. The AI system's use directly caused harm by producing and spreading these images, leading to reputational and emotional harm to the individual and broader societal harm related to privacy violations. The removal of images and suspension of accounts occurred after harm was realized, confirming the incident's materialization. Hence, it meets the criteria for an AI Incident due to direct harm caused by AI-generated content violating rights.

Outrage in the United States over the Spread of Fake AI-Made Pornographic Images of Taylor Swift | The Artist May File a Lawsuit

2024-01-27
Página/12
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems capable of generating realistic fake content. The dissemination of these images caused harm to the individual's privacy and dignity, fulfilling the criteria for harm to human rights under the framework. The widespread viewing and the public outcry confirm that harm has materialized. The platform's response to remove the content and take action against responsible accounts is a reaction to the incident, not the incident itself. Hence, this is classified as an AI Incident.

Fake Images of Taylor Swift Spread Across Social Media

2024-01-28
La Prensa Gráfica
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create fake explicit images of Taylor Swift, which have been widely disseminated on social media, causing harm to her personal rights and dignity. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI in generating the harmful content is clear, and the harm is realized, not just potential. The legislative and platform responses are complementary information but do not change the classification of the core event as an AI Incident.

Outrage in the US over Fake Images of Taylor Swift

2024-01-27
elsalvador.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI generative technology to create false but realistic pornographic images (deepfakes) of a public figure, Taylor Swift. These images were widely viewed and shared, causing harm to the individual’s rights and dignity, which constitutes a violation of human rights and a breach of protections against non-consensual explicit content. The AI system's role is pivotal as it generated the harmful content. The harm is realized and ongoing, not merely potential. Hence, this event meets the criteria for an AI Incident.

X and Meta (Finally) Take Action Against the Taylor Swift Pornographic Deepfakes

2024-01-29
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images, which are AI-generated synthetic media. The use and dissemination of these images caused harm by violating Taylor Swift's rights and spreading non-consensual explicit content, which constitutes a violation of human rights and personal dignity. The platforms' delayed response allowed the harmful content to circulate for 19 hours, indicating the AI system's outputs directly led to harm. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content violating rights and causing reputational and personal harm.

Taylor Swift "Furious" over AI Pornographic Images Going Viral on X

2024-01-26
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and shared without consent, causing harm to Taylor Swift's personal rights and dignity. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the community (fans and public). The viral spread of these images on a social media platform further amplifies the harm. Therefore, this is classified as an AI Incident.

X, Instagram, and Threads Block Taylor Swift Searches After the Spread of AI-Made Explicit Images

2024-01-29
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI systems (deepfake technology) to create non-consensual explicit images of a public figure, leading to reputational and privacy harm, which falls under violations of human rights and harm to communities. The widespread dissemination and the platforms' active removal confirm that harm has occurred. The AI system's use is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

White House Alarmed by Pornographic Images of Taylor Swift on X

2024-01-26
Excélsior
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake pornographic images, which are false but highly realistic. These images were widely circulated on X, causing harm to the individual depicted and raising concerns about online harassment and violation of rights, particularly affecting women and girls. The AI system's role in generating and enabling the spread of this harmful content directly links it to realized harm, fitting the definition of an AI Incident. The article also discusses the platform's response and legislative considerations, but the primary focus is on the harm caused by the AI-generated content.

Taylor Swift Can No Longer Be Searched on X After Explicit AI Photo Scandal

2024-01-28
Excélsior
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of explicit images generated by AI without consent, which is a clear violation of personal rights and causes harm to the individual and community. The AI system's role in generating these images is central to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm (violation of rights and harm to community).

Taylor Swift 'Furious' over AI-Generated Nude Images

2024-01-25
Excélsior
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated nude images created without consent, which constitutes a violation of rights and exploitation. The widespread sharing of these images caused harm to the individual and her community, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The AI system's use directly led to this harm, and the article describes ongoing legal actions and platform responses, confirming the realized harm rather than a potential risk.

Taylor Swift, the Latest Victim of Artificial Intelligence; Fake Images Leaked

2024-01-27
Excélsior
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images that have been widely disseminated, causing harm to the individual depicted and raising concerns about online harassment and rights violations. The harm is realized and ongoing, as the images circulated widely before removal. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to communities. The article also includes complementary information about responses, but the primary focus is on the incident itself.

Deepfakes: Fake Sexual Images of Taylor Swift Created with Artificial Intelligence Posted on Twitter

2024-01-27
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and dissemination of AI-generated deepfake images that are pornographic and non-consensual, targeting Taylor Swift. This constitutes a violation of rights and harm to the individual and community. The AI system's use is central to the harm caused, fulfilling the criteria for an AI Incident. The harm is realized and ongoing as the images were widely viewed and circulated before removal, and the incident has led to public and governmental responses. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Twitter Halted Taylor Swift Searches to "Prioritize Safety" After the Deepfake Barrage

2024-01-29
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and viral spread of deepfake images generated by AI systems (neural network generative algorithms) that have caused harm by disseminating non-consensual pornographic content. This constitutes a violation of rights and harm to the individual and community. The platform's intervention to limit search results is a response to an ongoing AI Incident. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content.

Explicit AI-Generated Photos of Taylor Swift Go Viral on Social Media

2024-01-26
Periodico Correo
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral dissemination of AI-generated explicit images of a public figure without consent. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The AI system's use in generating these images is central to the harm, and the event involves actual realized harm rather than just potential risk. Therefore, this is classified as an AI Incident.

Taylor Swift Is the Latest Victim of AI and Twitter Takes Action: The Social Network Cuts Off the Fake-Image Controversy at the Root

2024-01-29
3D Juegos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images of a public figure, Taylor Swift, which are non-consensual and harmful, thus constituting a violation of rights and harm to the individual. The AI system's use in generating these images is central to the harm. The social media platform's active removal of such content further confirms the recognition of harm. Therefore, this event is an AI Incident as it involves realized harm caused by AI-generated manipulated content infringing on personal rights and dignity.

Deepfakes Reach Taylor Swift and Even the White House Is Outraged; What Did the Biden Administration Say?

2024-01-28
El Financiero
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images that sexually exploit a person without consent, constituting a violation of rights and harm to the individual and community. The harm is realized as these images have proliferated online, causing distress and public outcry. The AI system's role is pivotal in generating these manipulated images. Therefore, this qualifies as an AI Incident under the framework, as it directly leads to harm (violation of rights and harm to communities).

Why Has X Blocked Taylor Swift Searches?

2024-01-28
ComputerHoy.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images, which are manipulated sexual content featuring Taylor Swift's face. This AI-generated content has been widely disseminated, causing harm to the individual’s rights and to the community by spreading non-consensual explicit material. The platform's response to block searches is a direct consequence of the AI-generated harm. Therefore, the AI system's use has directly led to violations of rights and harm to communities, fitting the definition of an AI Incident.

Hundreds of Explicit AI-Made Images of Taylor Spread Online: This Is How X Responded

2024-01-29
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake explicit images (deepfakes) of Taylor Swift and other women, including minors, which have been widely shared online. This constitutes a violation of rights and harm to communities. The AI system's use directly led to the harm through the creation and dissemination of these images. The platform's slow moderation response further contributed to the impact. Hence, the event meets the criteria for an AI Incident as it involves realized harm caused by AI-generated content.

X Blocks Taylor Swift Searches to Curb the Spread of AI-Generated Explicit Images of the Artist

2024-01-28
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The event describes the generation and viral spread of AI-generated explicit deepfake images of a public figure without consent, which constitutes a violation of rights and harm to the individual and community. The AI system's role in creating these images is central to the harm. The platform's actions to block searches and remove content are responses to an ongoing AI Incident. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content.

The Reason X, Instagram, and Threads Are Blocking Taylor Swift Searches

2024-01-29
La Voz
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as the deepfakes are created using AI tools like generative adversarial networks (GANs) and Microsoft Designer. The use of AI to create non-consensual, manipulated images constitutes a violation of rights and harms the reputation and dignity of the person depicted, fulfilling the criteria for harm under violations of human rights and breach of obligations protecting fundamental rights. The platforms' active removal and blocking efforts confirm the recognition of harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

AI-Generated Pornographic Images of Taylor Swift Flood Social Media and Spark Outrage in the US

2024-01-26
El Economista
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems creating deepfake pornographic images, which are false but highly realistic. The distribution of these images constitutes a violation of rights (non-consensual pornography) and causes harm to the individual and communities by spreading harmful content. The harm is realized, not just potential, as the images went viral and were viewed millions of times before removal. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to communities).

X Made This Drastic Decision in Response to the Spread of AI-Created Pornographic Images of Taylor Swift

2024-01-29
Iprofesional.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and widespread dissemination of AI-generated pornographic deepfake images of Taylor Swift, which constitutes a violation of her rights and causes harm to her reputation and privacy. The AI systems (generative AI models like ChatGPT, Bard, Midjourney) were used maliciously to produce this content. The harm is realized as the images have been viewed millions of times and caused significant concern, prompting platform intervention and potential legal action. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual and community).

X Blocked Taylor Swift Searches After Explicit AI-Created Photos of the Singer Leaked

2024-01-29
Diario La Página
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are sexually explicit and non-consensual, which constitutes a violation of personal rights and harm to the individual and community. The widespread dissemination of these images on a major social media platform caused direct harm. The AI system's use in generating these images is central to the incident. The platform's moderation limitations and the need to block searches further highlight the impact. Hence, this is an AI Incident as per the definitions provided, involving harm to rights and communities caused by AI misuse.

X, Instagram, and Threads Block Taylor Swift Searches After...

2024-01-29
Europa Press
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to create manipulated images (deepfakes) of a public figure without consent, which has been widely disseminated causing reputational harm and violation of rights. The AI system's use directly led to harm (violation of rights and harm to community perception). The platforms' blocking and removal actions are responses to an ongoing incident, not the incident itself. Hence, this is an AI Incident as per the definitions provided.

Taylor Swift: Explicit Fake Images of the Singer Go Viral on Social Media

2024-01-27
El Informador
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the images are generated by AI and are false, explicit deepfakes of Taylor Swift. The dissemination of such non-consensual deepfake pornography is a recognized harm involving violation of rights and harm to the community. The AI system's role in generating these images is pivotal to the harm occurring. Therefore, this event qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to the individual and community.

Taylor Swift: How X, Meta, and Other Platforms Are Responding to the Controversial Photos of the Artist

2024-01-27
El Informador
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI generative models were used to create explicit, non-consensual images of a person, constituting a violation of rights and harm to the individual and communities. The harm is realized and ongoing, as the images have been widely shared and platforms are actively removing them. The AI system's development and use are directly linked to the harm. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm caused by AI-generated content violating rights and causing reputational and privacy harm.

Taylor Swift: AI-Generated Explicit Images Circulate Online

2024-01-27
El Informador
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create sexually explicit deepfake images of Taylor Swift, which have been widely circulated online. This constitutes a violation of rights (privacy, dignity) and causes harm to the individual and community. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content and the resulting real harm to the victim and community is clear and direct.

X (Twitter) Blocks Taylor Swift Searches over the Spread of AI-Generated Porn Images: Another Deepfake Victim

2024-01-29
Genbeta
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (Microsoft Designer) to generate deepfake pornographic images without consent, which is a violation of rights and causes harm to the individual and community. The harm is realized as the images are actively disseminated online, prompting platform intervention. The AI system's misuse directly leads to this harm, fitting the definition of an AI Incident involving violation of rights and harm to communities.

Taylor Swift Porn Deepfakes Flood X (Twitter): Violence Against Women in the AI Era

2024-01-26
Hipertextual
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI generative models (diffusion models) to create deepfake pornographic images without consent, which have been widely disseminated causing harm to the individuals depicted. This fits the definition of an AI Incident because the AI system's use has directly led to violations of privacy and dignity (a form of harm to persons and communities). The harm is realized, not just potential, as the images have been viewed millions of times and have caused distress to victims. The involvement of AI in generating the content and its role in the harm is clear and central to the event.

Taylor Swift Sexual Deepfakes Force X (Twitter) to Censor Certain Searches

2024-01-29
Hipertextual
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake sexual images, which have been widely disseminated causing harm to the individual (Taylor Swift) and potentially to broader community norms and rights. The AI system's outputs have directly led to harm through non-consensual explicit content distribution. The platform's moderation actions are reactive to this harm. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to communities.

Microsoft Restricts Designer After the Taylor Swift Sexual Deepfakes

2024-01-29
Hipertextual
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Microsoft's Designer and other AI image generation tools) used to create sexually explicit deepfake images of a real person without consent, which constitutes a violation of rights and harm to the individuals depicted. The misuse of AI to generate and disseminate such content has already occurred, causing reputational and personal harm. Microsoft's and Twitter's mitigation efforts are responses to an ongoing AI Incident rather than a mere hazard or complementary information. The harm is direct and realized, not just potential, thus classifying this as an AI Incident.

X Blocked Taylor Swift Searches; Here's Why

2024-01-28
Hoy Digital
Why's our monitor labelling this an incident or hazard?
The incident clearly involves AI systems used to create deepfake images, which are realistic synthetic media generated by AI. The harm is direct and realized, as the images are non-consensual, sexually explicit, and widely viewed, constituting a violation of personal rights and causing reputational and emotional harm. The platform's blocking of searches is a response to this harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the community resulting from the AI-generated content.

Why Did the Hashtag #PROTECTTAYLORSWIFT Go Viral?

2024-01-27
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the explicit images were generated by AI and shared without consent, which is a direct violation of rights and constitutes harm to the individual. The dissemination of such AI-generated explicit content is abusive and exploitative, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the content is actively circulating and causing distress, and the AI system's role in generating the images is pivotal to the incident.

SAG-AFTRA Flatly Rejects the Use of AI to Create Fake Explicit Images of Taylor Swift

2024-01-27
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake explicit images of a public figure, Taylor Swift, which were widely shared and caused harm by violating privacy and dignity. The dissemination of such images is a clear violation of rights and harms the individual and potentially others similarly affected. The involvement of AI in generating these images is central to the harm. The event describes realized harm, not just potential harm, and includes responses such as account suspensions and legislative support to criminalize such acts. Hence, it meets the criteria for an AI Incident.

X Blocks 'Taylor Swift' Searches After AI Photo Controversy

2024-01-27
Aristegui Noticias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated altered images that caused harm to the individual (Taylor Swift) and distress to the community, fulfilling the criteria for harm to communities and violation of rights. The AI system's use in generating and disseminating these images directly led to the harm. The platform's blocking of searches and content removal are responses but do not negate the incident. Therefore, this is an AI Incident.

Swiftie Power! Actors' Union Comes to Taylor Swift's Defense over Fake AI-Created Intimate Images

2024-01-27
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article describes the creation and spread of AI-generated fake intimate images of Taylor Swift, which were shared widely on social media, causing harm to her privacy and dignity. The use of AI to generate such images without consent is a direct violation of rights and constitutes harm. The event meets the criteria of an AI Incident because the AI system's use directly led to harm (violation of privacy and rights) and the dissemination of harmful content. The involvement of the actors' union and calls for legal action further confirm the recognition of harm caused by AI misuse.

The Taylor Swift Explicit Deepfake Case Reflects the Dangers Women Face with AI

2024-01-26
Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images that are non-consensual and abusive, directly harming the subject (Taylor Swift) and potentially other women targeted similarly. The AI system's use in generating and distributing these images constitutes a violation of rights and causes significant harm. The article details realized harm, legal violations, and societal impact, fitting the definition of an AI Incident. Although it also discusses legislative and platform responses, the primary focus is on the harm caused by the AI system's use, not just complementary information.

Taylor Swift Searches Blocked on Social Media After the Spread of AI-Created Intimate Photos

2024-01-29
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images of a public figure, which are intimate and non-consensual, constituting a violation of rights and harm to the individual and community. The dissemination of these images on social media and their viral spread demonstrate realized harm. The platforms' removal and blocking efforts are responses to this harm. Therefore, this qualifies as an AI Incident because the AI-generated content has directly led to harm through privacy violations and reputational damage.

X Blocked Taylor Swift Searches over AI-Made Photos

2024-01-28
Últimas Noticias
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated sexually explicit images without consent, causing harm to Taylor Swift's privacy and reputation. The AI system's use in generating these images directly led to the harm. The platform's response to remove the images and block searches confirms the recognition of harm. This fits the definition of an AI Incident as it involves violations of rights and harm to communities caused by the use of AI systems.

X, Formerly Twitter, Blocks Taylor Swift Searches After the Spread of AI-Created Images

2024-01-29
Forbes México
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI was used to create sexually explicit fake images of Taylor Swift, which is a direct violation of her rights and causes harm to her reputation and privacy. The social media platform's blocking of searches is a response to this harm. The AI system's use directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident involving violations of rights and harm to communities. The harm is realized, not just potential, so it is not a hazard or complementary information.

Actors in the US Condemn AI-Created Sexual Images of Taylor Swift

2024-01-27
Forbes México
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated sexual images of a person without consent, which is a direct violation of privacy and personal rights. The dissemination of these images caused harm to the individual and potentially to communities by normalizing such violations. The involvement of AI in creating these images and their harmful impact meets the criteria for an AI Incident, as the AI system's use directly led to harm (violation of rights and privacy).

Pornographic Taylor Swift Deepfakes Flood Social Media

2024-01-26
LaSexta
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a clear use of AI systems to create realistic but fake images. The circulation of these images on social media has caused harm to the individual (Taylor Swift) and her community, constituting a violation of rights and harm to communities. The article mentions the harm is ongoing and that the victim is considering legal action, confirming the harm has materialized. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and harm to communities.

Actors' Union Condemns the AI Creation of Sexual Images of Taylor Swift

2024-01-29
El HuffPost
Why's our monitor labelling this an incident or hazard?
The creation and distribution of AI-generated non-consensual sexual images directly harms the individual's rights to privacy and dignity, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The AI system's use in generating these images and their dissemination online has directly led to harm, including emotional distress and privacy violations. Therefore, this event qualifies as an AI Incident.

Actors' Union Condemns the Creation of Taylor Swift Images with AI

2024-01-27
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake images that have been disseminated without consent, causing harm to Taylor Swift's privacy and dignity. The use of AI to create these images directly led to harm (violation of privacy and potential emotional distress), fitting the definition of an AI Incident under violations of human rights and harm to communities. The public and institutional responses further confirm the recognition of harm caused by the AI system's misuse.
X blocks searches related to Taylor Swift

2024-01-29
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake content, which is a form of AI-generated misinformation and image manipulation. The creation and dissemination of non-consensual deepfake images directly violate personal rights and can cause harm to individuals' reputations and privacy. Since the article describes the actual circulation of such AI-generated harmful content, this qualifies as an AI Incident due to violation of rights and harm to the individual involved. The mention of platform efforts and government statements supports the context but does not change the classification.
Taylor Swift 'furious' over AI-generated nude images

2024-01-25
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated nude images of Taylor Swift that were widely shared, causing distress and harm to her and her circle. The AI system's role in creating these images is explicit, and the harm includes violation of privacy and potential reputational damage, which are breaches of personal rights. The harm is realized, not just potential, and legal actions are being considered, confirming the seriousness of the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person.
Artificial intelligence nearly outside the law

2024-01-29
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake videos, which are manipulated media generated by AI. The harms described include violations of personal rights, reputational damage, and psychological harm to victims, which fall under violations of human rights and harm to communities. The harm is realized and ongoing, as evidenced by the large number of deepfake videos published and the impact on victims. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.
X blocked Taylor Swift searches

2024-01-28
Cooperativa
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated sexually explicit images, which constitute a violation of privacy and potentially human rights, causing harm to the individual and the community by spreading false and harmful content. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The platform's blocking of searches is a response to this harm. Therefore, this is classified as an AI Incident due to the realized harm from AI-generated content dissemination.
Taylor Swift: Fake pornographic images of the singer cause outrage at the White House

2024-01-27
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated deepfake pornographic images constitute a direct violation of rights and cause harm to the individual and community by spreading non-consensual explicit content. The AI system's role in generating these images is pivotal to the harm occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm (violation of rights and harm to communities).
Taylor Swift, victim of artificial intelligence: pornographic images of her spread on social media

2024-01-29
Antena3
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated fake pornographic images of Taylor Swift, which is a clear violation of her rights and causes harm to her reputation and dignity. The AI system was used to generate these images, and their dissemination on the social media platform led to significant harm. The platform's response to remove the content and suspend the user confirms the harm occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.
X restricts some Taylor Swift searches over fake explicit images

2024-01-29
San Diego Union-Tribune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit deepfake images that have been widely circulated, causing harm to Taylor Swift's reputation and privacy. The use of AI to create non-consensual pornographic images constitutes a violation of rights and harm to the individual and community. The platform's blocking of searches is a response to this harm but does not negate the fact that harm has occurred. Hence, this is an AI Incident as the AI system's use has directly led to harm.
Fake AI-created nude images of Taylor Swift leaked

2024-01-26
TV Azteca
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create fake images (deepfakes) of Taylor Swift without her consent. The harm caused includes violation of personal rights and reputational damage, which falls under violations of human rights and harm to communities. The images were actively shared and viewed millions of times, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.
X suspended Taylor Swift searches after the spread of explicit AI-created images of the singer

2024-01-28
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of Taylor Swift being spread on social media, which is a direct violation of her rights and causes harm to her reputation and privacy. The platform's response to limit searches and remove content confirms the harm's materialization. The involvement of AI in creating the images is explicit, and the harm is realized, not just potential. This fits the definition of an AI Incident as it involves violation of rights and harm to a person caused by AI-generated content.
Fake pornographic images of Taylor Swift circulate online and spark a wave of outrage

2024-01-26
Diario EL PAIS Uruguay
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake pornographic images of a public figure, which have been widely disseminated and caused harm by violating privacy and dignity. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities). The event is not merely a potential risk or a complementary update but a realized harm caused by AI-generated content.
Taylor Swift and the AI photos over which she may pursue a lawsuit

2024-01-26
Sopitas.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic images of a real person without consent, which is a direct violation of human rights and privacy. The harm is realized as the images are being shared and causing indignation and reputational damage. The AI system's use in generating these images is central to the incident. Therefore, this is classified as an AI Incident due to the direct harm caused by the AI-generated content.
Taylor Swift searches blocked on X (formerly Twitter) over an AI scandal

2024-01-28
FayerWayer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and viral spread of AI-generated deepfake images that depict Taylor Swift in abusive ways, constituting harm to her reputation and potentially violating her rights. The AI system's use in generating these images directly led to this harm. The platforms' responses (blocking searches, removing content) confirm the recognition of harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and communities through abusive content dissemination.
X blocks searches related to Taylor Swift

2024-01-29
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake sexual images of Taylor Swift, which constitute a violation of her rights and cause harm. The AI system's use in generating and spreading these images directly leads to harm. The platform's response to block searches is a mitigation effort but does not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content.
AI-made photos of Taylor Swift worry the White House

2024-01-27
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images that are non-consensual and intimate, which is a clear violation of rights and causes harm to the individual targeted. The widespread dissemination of these images on social media platforms constitutes harm to communities and individuals. The AI system's use in generating these images is directly linked to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to communities).
Furious! Taylor Swift to sue over AI-made porn

2024-01-25
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using AI systems capable of generating realistic fake content. The harm is direct and realized, as the videos are abusive, offensive, exploitative, and non-consensual, violating Taylor Swift's rights and causing reputational and emotional harm. The viral spread on social media platforms further amplifies the harm to the community and individual. The potential legal actions and platform suspensions mentioned are responses to this AI Incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
Outrage over AI-created pornographic content of Taylor Swift: singer reportedly weighing legal action

2024-01-26
T13 (teletrece)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated pornographic content, which is a direct product of AI system use. The harm caused is a violation of privacy and potentially other rights, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The dissemination of such content has already occurred, causing realized harm, not just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
AI-generated images of Taylor Swift raise concern

2024-01-27
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake pornographic images of a public figure that were widely shared online, constituting a violation of rights and causing harm to the individual and potentially to communities by spreading non-consensual explicit content. The AI system's role in generating these images and their dissemination on social media platforms directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm caused by AI-generated content violating rights and causing social harm.
Explicit AI-generated images of Taylor Swift spread on social media

2024-01-26
WTOP
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating harmful synthetic images (deepfakes) of a person without consent, which constitutes a violation of rights and harm to the individual and community. The widespread sharing of these images on social media platforms caused direct harm, fulfilling the criteria for an AI Incident. The article also discusses the failure of content moderation systems, which rely on AI and user reports, to prevent or quickly remove such harmful content, further supporting the classification as an AI Incident rather than a mere hazard or complementary information.
Not showing up? X blocks Taylor Swift searches over AI-created photos

2024-01-29
Canal RCN | Nuestra Tele - Televisión y Entretenimiento
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit deepfake images of Taylor Swift, which have caused harm by spreading false and damaging content about her, impacting her personal and public reputation. The AI system's use in generating these images directly led to this harm. The platform's blocking of searches is a response to this incident. The harm includes violation of personal rights and reputational damage, fitting the definition of an AI Incident under violations of rights and harm to communities. Hence, this event is classified as an AI Incident.
Taylor Swift falls victim to misuse of artificial intelligence: nude photos of the singer created

2024-01-28
Correo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI generative tools to create false nude images of Taylor Swift, constituting a direct misuse of AI technology. This misuse leads to violations of rights (privacy and dignity) and harm to the individual and communities through online harassment and toxic content dissemination. The involvement of AI in generating these images and the resulting harm meets the criteria for an AI Incident rather than a hazard or complementary information. The harm is realized, not just potential, and the AI system's role is pivotal in creating the harmful content.
X blocks Taylor Swift searches to avoid worsening the problem

2024-01-29
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake content, which is a form of AI-generated synthetic media. The harm includes violation of privacy and potential reputational damage to Taylor Swift, a public figure, which constitutes harm to individuals and communities. The platform's response to block searches indicates the harm was realized and significant. Therefore, this is an AI Incident as the AI system's use directly led to harm through the creation and dissemination of non-consensual deepfake content.
AI images of Taylor Swift: X blocks searches related to deepfakes

2024-01-29
sipse.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake sexual content of Taylor Swift, which is a form of non-consensual pornography and defamation, causing harm to the individual. The AI system's role in creating these images is direct and pivotal to the harm. The platform's response to block related searches is a mitigation effort but does not negate the occurrence of harm. Hence, this event meets the criteria for an AI Incident due to violation of rights and harm to the individual caused by AI-generated content.
Taylor Swift, latest victim of AI deepfakes; lawsuit in the works

2024-01-25
sipse.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using AI systems capable of generating realistic fake content. The harm is realized as the videos falsely depict Taylor Swift in sexual acts, violating her rights and causing reputational and emotional harm. The dissemination of these videos on social media platforms further amplifies the harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating and spreading non-consensual deepfake content.
Swifties rise up against the use of artificial intelligence to create XXX videos of Taylor Swift

2024-01-26
Diario El Día
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake pornographic images without consent, which constitutes a violation of rights and harm to the individual depicted. The widespread circulation of such content on social media platforms and the resulting distress and potential legal actions confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to violations of human rights and harm to the individual caused directly by the AI-generated content.
Fake AI-generated images of Taylor Swift worry the White House

2024-01-27
El Diario Nueva York
Why's our monitor labelling this an incident or hazard?
The article describes the creation and spread of AI-generated deepfake images that caused harm to Taylor Swift, a real person, by damaging her reputation and privacy. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of political figures and calls for legal action further confirm the recognition of harm caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident.
Taylor, victim of AI: scandal over computer-created porn photos

2024-01-27
Diario El Día
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake pornographic images of Taylor Swift, which were widely circulated online. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images were viewed millions of times and caused public outcry and concern from authorities. The AI system's use directly led to the harm through the creation and spread of non-consensual explicit content.
Outrage in the US over fake explicit images of Taylor Swift

2024-01-26
Tiempo Digital
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create and disseminate false explicit images (deepfakes) of a public figure, which directly harms the individual's reputation and privacy, and contributes to online harassment and toxic content proliferation. The harm is realized as the images were viewed millions of times and caused public indignation and concern from authorities, including the White House. Therefore, this is an AI Incident due to the direct harm caused by the AI system's outputs.
Fake pornographic images of Taylor Swift spark outrage in the United States

2024-01-27
El Observador
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative tools were used to create realistic but fake pornographic images of Taylor Swift, which were widely circulated online. This constitutes a violation of rights and causes harm to the individual and communities targeted by such content. The harm is realized and ongoing, as the images were viewed millions of times before removal. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in generating and disseminating harmful deepfake content.
Social networks block searches for Taylor Swift deepfake porn after mass dissemination

2024-01-29
El Observador
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to create deepfake images of Taylor Swift, which were widely disseminated without her consent. This constitutes a violation of rights and harm to the individual and communities. The AI system's development and use directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to reputation and privacy).
Actors' union condemns creation of Taylor Swift images with AI

2024-01-27
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
The creation and distribution of AI-generated non-consensual explicit images directly harms the individual depicted, violating her privacy and potentially other rights. The AI system's role in generating these images is central to the harm caused. Therefore, this qualifies as an AI Incident due to the realized harm to a person's rights and dignity through the misuse of AI technology.
Explicit AI-generated images of Taylor Swift spread online

2024-01-27
Periódico Noroeste
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated explicit images of Taylor Swift without her consent, which constitutes a violation of her rights and causes harm. The AI system's role in generating these images is pivotal to the harm caused. The harm is realized as the images were widely viewed and shared before removal, and the incident has prompted calls for legal action and institutional responses. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.
Taylor Swift reportedly seeking to sue adult site over AI-generated images

2024-01-26
24 Horas
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images (deepfakes) that have been used to create non-consensual pornographic content of a real person, Taylor Swift. This constitutes a violation of rights and is harmful to the individual and community. The AI system's use directly led to harm through the creation and dissemination of abusive and offensive content. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm caused by the use of an AI system.
Outrage in the US over fake pornographic images of Taylor Swift

2024-01-26
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create realistic but fake pornographic images of a public figure without consent, which constitutes a violation of rights and harm to the community. The harm is realized, as the images were widely viewed and circulated, causing indignation and potential reputational damage. The AI system's use is directly linked to this harm, qualifying this as an AI Incident. The article also mentions platform moderation and legislative responses, but its primary focus is the harm caused by the AI-generated content rather than the responses, so it is not merely Complementary Information.
Actors' union condemns the AI creation of intimate images of Taylor Swift

2024-01-27
Diario La Prensa
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI was used to generate fake intimate images of Taylor Swift without her consent, which were then widely shared online. This use of AI directly caused harm by violating privacy rights and causing emotional and reputational damage. The involvement of AI in creating and disseminating harmful content fits the definition of an AI Incident, as it has directly led to violations of human rights and harm to communities. The removal of images and suspension of accounts are responses but do not negate the occurrence of harm.
X blocks searches related to Taylor Swift

2024-01-29
Diario La Prensa
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and dissemination of AI-generated deepfake sexual images of Taylor Swift without her consent, which is a violation of her rights and constitutes harm. The AI system's role in generating these images is central to the incident. The social media platform's blocking of related searches is a response to this harm. The harm is realized, not just potential, and involves violations of rights and reputational damage, fitting the definition of an AI Incident.
Taylor Swift to sue over fake AI-made videos

2024-01-26
Diario La Prensa
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using AI systems. The videos are described as abusive and exploitative, made without consent, and have caused distress to Taylor Swift. This fits the definition of an AI Incident because the AI system's use has directly led to harm in terms of violation of rights and personal harm. The harm is realized, not just potential, as the videos have gone viral and caused disturbance to the individual involved.
"Is it her?": Taylor Swift fans outraged over explicit photos of the singer

2024-01-26
La Cuarta
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create fake pornographic images of Taylor Swift, which are being spread on social media, causing harm to her honor and distress among her fans. This is a clear case of an AI system's misuse leading to violations of personal rights and harm to the community, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images are circulating and causing distress. Hence, the event is classified as an AI Incident.
X blocked Taylor Swift searches after AI-made explicit photos

2024-01-29
La Cuarta
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create hyperrealistic deepfake images that are sexually explicit and falsely represent Taylor Swift. The spread of these images has caused reputational and privacy harm to the individual, which is a violation of rights and harm to communities. The platform's response to block searches indicates recognition of the harm caused. Since the AI-generated content directly led to realized harm, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
The ordeal Taylor Swift is going through because of artificial intelligence

2024-01-29
Terra USA
Why's our monitor labelling this an incident or hazard?
The creation and distribution of AI-generated explicit images without consent constitutes a violation of privacy and potentially other rights, which fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized as the images circulated and caused distress, and the AI system's role in generating these images is pivotal. Therefore, this event qualifies as an AI Incident.
Taylor Swift reportedly "furious" over explicit AI-generated images of her

2024-01-25
El Diario Nueva York
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake videos that falsely depict Taylor Swift in explicit acts without her consent, which constitutes a violation of her rights and is abusive and exploitative. The AI system's use in creating and spreading these videos has directly caused harm, including reputational damage and emotional distress. The viral spread on social media and the subsequent suspension of accounts further confirm the realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.
X suspends "Taylor Swift" searches after the spread of AI-created images of the singer

2024-01-27
El Diario Nueva York
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of non-consensual deepfake images using AI directly harms Taylor Swift's rights and causes reputational and emotional harm, which fits the definition of an AI Incident. The platform's response to suspend search results indicates recognition of the harm caused. The AI system's use in generating these images is central to the incident, and the harm is realized, not just potential.
Outrage online: AI pornographic images of Taylor Swift spark mass anger

2024-01-27
ADN Radio 91.7 Chile
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated deepfake images that sexually exploit a public figure without consent, which is a violation of rights and causes harm to the individual and communities. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The article also mentions societal and governance responses, but the primary focus is on the harm caused by the AI-generated content, not just the responses, so it is not merely Complementary Information.
Taylor Swift, victim of AI: X's drastic decision to protect the singer

2024-01-29
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
The event describes explicit images created by AI (deepfake technology) that have been widely shared, causing harm to Taylor Swift's rights and reputation. The AI system's use directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The platform's measures and public/government concern further confirm the materialized harm. Hence, it is not merely a hazard or complementary information but a realized AI Incident.
Twitter blocks Taylor Swift searches after deepfake scandal

2024-01-28
LevelUp
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that depict a person in explicit situations without consent, which constitutes a violation of rights and causes harm to the individual. The spread of such content on a major platform like Twitter has led to direct harm, including reputational and privacy violations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the community).
The worst side of AI-generated images: pornographic deepfakes of Taylor Swift flood social media

2024-01-26
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images used to create non-consensual pornographic content, which is a clear violation of personal rights and privacy, fitting the definition of harm under (c) violations of human rights or breach of obligations protecting fundamental rights. The AI system's use directly caused this harm by generating and enabling the spread of these images. The dissemination on social media platforms further amplifies the harm. Therefore, this event qualifies as an AI Incident.

Taylor Swift's pornographic deepfakes lead X (Twitter) to block searches for the singer's name

2024-01-29
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves AI generative systems creating deepfake pornographic images, which is an AI system's use leading directly to harm (violation of rights and harm to the individual and community). The dissemination of these images on social media caused real harm, prompting platform intervention. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content and its spread.

Why has Taylor Swift disappeared from X (Twitter)?

2024-01-29
El Output
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating manipulated images (deepfakes) that have been widely distributed, causing harm to the individual's honor and dignity, which is a violation of rights under the framework. The platform's intervention to block searches is a response to an ongoing AI Incident. The harm is direct and materialized, not just potential. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated content and the platform's response to mitigate it.

Taylor Swift searches blocked after spread of manipulated images

2024-01-29
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to generate manipulated images (deepfakes) of a public figure without consent, which constitutes a violation of rights and harm to the individual. The dissemination of these images on social media platforms caused reputational and privacy harm, fulfilling the criteria for an AI Incident. The platforms' response to remove content and block searches is a reaction to an ongoing incident, not the primary focus. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content.

Taylor Swift searches blocked on X

2024-01-29
El Vocero de Puerto Rico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and spread of AI-generated deepfake images (pornographic and abusive) of Taylor Swift, which is a clear violation of rights and causes harm to the individual and community. The AI system's development and use (deepfake generation) directly led to this harm. The platform's blocking of searches is a response to the incident but does not change the fact that harm has occurred. Hence, this is classified as an AI Incident.

X blows up: Taylor Swift victim of AI porn images

2024-01-28
Urgente 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate deepfake images of a person without consent, which is a direct violation of rights and dignity, a form of harm under the AI Incident definition (c). The AI system's use in creating and spreading these images has directly led to harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Outrage in the US over fake pornographic images of Taylor Swift | Ensegundos República Dominicana

2024-01-28
José Peguero
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate non-consensual deepfake pornography, which directly harms the individual depicted (Taylor Swift) and represents a violation of rights. The harm is realized as the images are circulating, causing indignation and concern from public figures and institutions. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content violating rights and causing reputational and emotional harm.

Fake explicit images of Taylor Swift generated with AI circulate - Entrelineas

2024-01-27
Las Noticias de Chihuahua - Entrelíneas
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating realistic but false content. The harm is realized as the images are non-consensual, explicit, and offensive, violating the individual's rights and causing reputational and emotional harm. The widespread dissemination on social media and the platform's response confirm the incident's materialization. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Artificial intelligence creates racy images using Taylor Swift's likeness

2024-01-26
La Voz de Michoacán
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create manipulated sexual images (deepfakes) of Taylor Swift, which are being spread on a social media platform. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The AI system's use in generating these images directly leads to harm through misinformation and reputational damage. Therefore, this is classified as an AI Incident.

Scandal on social media! Compromising photos of Taylor Swift leaked

2024-01-27
Meridiano Web
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate compromising fake images of a public figure, which were widely disseminated on social media, causing harm to the individual's privacy and potentially to public trust. The AI system's use directly led to harm (violation of rights and harm to community trust). The event is not merely a potential risk but a realized harm, fulfilling the criteria for an AI Incident. The discussion about moderation failures and regulatory needs supports the assessment but does not change the classification to complementary information, as the primary focus is on the harm caused by AI-generated content.

Taylor Swift "furious" over the pornographic images of her generated with AI

2024-01-29
Telemadrid
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated non-consensual explicit images constitute a violation of human rights, specifically privacy and dignity, which fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The AI system's use in generating these images directly led to harm, as evidenced by the widespread sharing and public outcry. Therefore, this event qualifies as an AI Incident.

Why can't you find Taylor Swift on X? Scandal over AI-created pornographic video reaches the US government

2024-01-28
Dia a Dia
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate pornographic deepfake images of a public figure, which have been widely shared, causing reputational and emotional harm. The harm is realized and ongoing, as evidenced by platform restrictions and government intervention. The AI system's use directly led to violations of personal rights and harm to the individual and community by spreading false and harmful content. Hence, it meets the criteria for an AI Incident.

Explicit AI-generated images of Taylor Swift go viral

2024-01-26
Diario de Morelos
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate explicit images without consent, which directly harms the individual's rights and personal integrity. The harm is realized as the images have been widely viewed and caused distress. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the individual. The legal and social responses mentioned support the recognition of this as an incident rather than a mere hazard or complementary information.

X limits Taylor Swift searches over spread of explicit photos created with AI - CHVNoticias.cl

2024-01-29
CNN
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of explicit deepfake images using AI directly leads to harm by violating the individual's rights and causing reputational and emotional damage. The AI system's role in generating these images is pivotal to the harm. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm to a person and communities.

Taylor Swift suffers the consequences of AI as X blocks searches for her name to curb the spread of fake explicit images

2024-01-29
Deia
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating manipulated explicit images without consent, leading to harm to the individual (Taylor Swift) in terms of privacy violation, reputational damage, and emotional distress. The widespread dissemination of such content on a major social media platform constitutes harm to the community and the individual, fulfilling the criteria for an AI Incident. The platform's response and government attention further confirm the seriousness of the harm caused by the AI system's misuse.

Taylor Swift falls victim to "deepfake" AI images

2024-01-26
El Heraldo de San Luis Potosi
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated deepfake images of a person without consent, which constitutes a violation of rights and harm to the individual and community. The AI system's role in generating these images is central to the harm caused. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

"Explicit" AI-generated photos of Taylor Swift circulate - Norte de Ciudad Juárez

2024-01-26
Nortedigital
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated explicit images of Taylor Swift without her consent, which is a direct violation of her rights and causes reputational harm. The AI system's role in generating these images is pivotal to the harm. The dissemination on multiple platforms and the need for reporting and mitigation efforts further confirm the harm is occurring. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content violating personal rights and causing harm to the individual and community.

AI-generated pornographic images of Taylor Swift flood X (Twitter)

2024-01-26
Los Replicantes
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to create non-consensual pornographic images of Taylor Swift, which were then widely shared on social media, causing harm to the individual and distress to the community. This is a clear violation of rights and privacy, fitting the definition of harm under (c) violations of human rights or breach of obligations protecting fundamental rights. The AI system's use directly led to this harm. The delay in content removal and the ongoing spread further emphasize the realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

X temporarily blocks searches for "Taylor Swift" | Teknófilo

2024-01-28
Teknófilo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of Taylor Swift that are non-consensual and sexually explicit, which have caused harm by spreading false and harmful content. The social media platform's response to block searches is a direct consequence of the AI system's misuse. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the community).

Social networks block Taylor Swift searches amid the spread of offensive 'deepfakes'

2024-01-29
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread dissemination of AI-generated deepfake images that harm the reputation and privacy of Taylor Swift, a public figure. The use of generative AI (deepfake technology) to produce non-consensual, offensive content constitutes a violation of rights and causes harm to the community by spreading misinformation and damaging individuals. The harm is realized and ongoing, not merely potential, thus qualifying as an AI Incident under the framework. The platforms' blocking and removal actions are responses to this harm but do not negate the incident itself.

'Explicit' AI-generated photos of Taylor Swift circulate

2024-01-26
Horacero
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create non-consensual explicit images of a person, which is a clear violation of personal rights and can be classified as harm to the individual. The AI-generated images are being actively shared, causing realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Explicit images of Taylor Swift created with AI; she may sue X

2024-01-26
Diario Puntual
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and dissemination of AI-generated explicit images (deepfakes) of Taylor Swift without consent, which constitutes a violation of her rights and causes harm to her reputation and dignity. The AI system's role is pivotal as it generated the harmful content. The harm is realized, not just potential, as the images were viewed by millions and caused distress. This fits the definition of an AI Incident due to violations of rights and harm to communities. The article also discusses the legal context and societal responses, but the primary focus is on the harmful AI-generated content and its impact.

El Estridor de las Armas with Noelia Quintana | Ñanduti

2024-01-29
Ñanduti
Why's our monitor labelling this an incident or hazard?
The AI system's use in creating and spreading non-consensual deepfake images constitutes a violation of individual rights and causes harm to the person involved. The harm is realized as the images have been widely disseminated, prompting platform intervention. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content infringing on rights and privacy.

X blocks Taylor Swift searches after leak of sexual photos created with AI | Ñanduti

2024-01-29
Ñanduti
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems capable of generating realistic fake content. The dissemination of these images constitutes a violation of rights (privacy and potentially intellectual property) and causes harm to the individual and community by spreading non-consensual explicit content. The platform's blocking of searches is a response to this harm. Since the harm is realized and directly linked to the AI system's use, this is classified as an AI Incident.

Furious! Taylor Swift may sue over porn made with AI

2024-01-26
La Prensa.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI programs to create deepfake videos depicting Taylor Swift in fake sexual acts without her consent. The dissemination of these videos has caused harm to her personal rights and reputation, constituting a violation of rights under applicable law. The AI system's role is pivotal as it generated the fake content leading to the harm. The harm is realized, not just potential, as the videos have gone viral and caused distress. Hence, this is an AI Incident rather than a hazard or complementary information.

Taylor Swift, artificial intelligence's latest victim; fake images leaked

2024-01-29
Notimundo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating realistic but fake content. The circulation of these images constitutes a violation of rights and causes harm to the individual and communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images were widely viewed and caused public outrage and concern. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Pornographic images make Taylor Swift the latest victim of artificial intelligence

2024-01-25
La Banda Diario
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create manipulated explicit images (deepfakes) of a real person without consent, which is a violation of personal rights and privacy, fitting the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The harm is realized as the images were viewed by millions and caused distress to the victim and her community. The AI system's use directly led to this harm. Therefore, this qualifies as an AI Incident.

January 28, 2024

2024-01-29
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images without consent, which have been widely disseminated on social media platforms, causing harm to the individual (Taylor Swift) and potentially to communities by spreading non-consensual explicit content. This constitutes a violation of rights and harm to communities. The involvement of AI in creating the deepfakes and their distribution leading to realized harm fits the definition of an AI Incident. The legislative and platform responses are complementary information but do not change the primary classification.

Taylor Swift gets X to rein in artificial intelligence

2024-01-29
MegaStarFM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images causing widespread misinformation and harm through their dissemination on a major social media platform. The platform's intervention to block searches indicates recognition of the harm caused. The involvement of authorities and calls for legislation further confirm the seriousness of the issue. The AI system's use in generating and spreading false images directly leads to harm to the community and individuals' reputations, fitting the definition of an AI Incident.

Taylor Swift images: Microsoft investigates and the White House comments - Notiulti

2024-01-29
Notiulti
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Microsoft Designer with DALL-E 3) being used to generate fake images that have been widely disseminated, causing harm to the individual and raising concerns about privacy and safety. The harm is realized, as the images have been viewed by millions and have led to platform interventions and governmental concern. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The investigation and policy responses are complementary information but do not negate the primary classification as an incident.

Elon Musk's X blocks Taylor Swift searches on the platform after spread of fake explicit images

2024-01-29
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create false explicit images (deepfakes) of Taylor Swift, which were widely viewed and spread on the platform. This constitutes harm to the community and individuals by spreading false and harmful content. The AI system's role in generating these images is pivotal to the harm. The platform's response to block searches is a mitigation measure but does not negate the fact that harm occurred. Hence, this qualifies as an AI Incident under the framework because the AI-generated content directly led to harm.

X makes the name "Taylor Swift" unsearchable on the platform after fake photos of her spread - Notiulti

2024-01-29
Notiulti
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated manipulated images (deepfakes) that have been actively spread on a social media platform, causing harm to the individual depicted and potentially to the community by spreading misinformation and harmful content. The AI system's use in generating these images directly led to this harm. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated content and the platform's reactive measures to mitigate further harm.

Taylor Swift, the latest victim of viral pornographic 'deepfakes'

2024-01-26
Ara en Castellano
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and viral dissemination of AI-generated deepfake pornographic images without consent, which directly harms the individual involved by violating privacy and dignity, a clear breach of human rights. The AI system's role in generating these images is central to the incident. The harm is realized, not just potential, as the images have been widely viewed and shared, causing reputational and emotional damage. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person. The article also references ongoing societal and legal concerns about such AI misuse, reinforcing the classification.

Taylor Swift against AI: "Legislation needs to protect against and prevent these acts through the law"

2024-01-26
MegaStarFM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and distribution of AI-generated deepfake images of Taylor Swift, which are false and harmful representations causing reputational and emotional harm. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of rights and harm to the person. The ongoing legal actions and public outcry further confirm the harm has occurred. Therefore, this is classified as an AI Incident.

January 27, 2024

2024-01-27
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for image creation) to produce non-consensual explicit content, which constitutes a violation of individual rights and privacy, a form of harm to the person. The harm has already occurred as the images were viewed millions of times and caused public alarm. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and realized harm to a person's rights and dignity.

Taylor Swift declares war on artificial intelligence... and her fans back her: "A serious crime"

2024-01-26
CADENA 100
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and spread of AI-generated deepfake videos depicting Taylor Swift in a false and provocative manner. This use of AI has directly caused harm to the individual and her community, including reputational damage and emotional distress. The involvement of AI in generating these deepfakes is clear, and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident due to violation of rights and harm to community.

Taylor Swift will sue whoever spread AI-made porn images using her face

2024-01-25
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been distributed without consent, causing harm to Taylor Swift's privacy and dignity. The use of AI in creating these images is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The harm is realized, not just potential, as the images have been published and caused distress. Hence, the classification as AI Incident is appropriate.

X fills with sexually explicit AI-generated images of Taylor Swift

2024-01-26
20 minutos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake pornographic images without consent, which have been widely shared and viewed, causing harm to the individual (Taylor Swift) and violating her rights. The AI system's use in creating and disseminating this content directly led to the harm. The incident fits the definition of an AI Incident under violations of human rights and harm to communities. The platform's response and the ongoing circulation of the images further confirm the realized harm. Hence, the classification as AI Incident is appropriate.

Fake explicit images of Taylor Swift spread across social networks

2024-01-27
San Diego Union-Tribune
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative models to create and disseminate explicit deepfake images without consent, directly causing harm to Taylor Swift and potentially others. The harm includes violation of personal rights and reputational damage, fitting the definition of an AI Incident. The AI system's use is central to the harm, and the article documents the actual occurrence of this harm, not just potential or future risk. Hence, the classification is AI Incident.

Explicit images of Taylor Swift reportedly created by AI

2024-01-26
TC Televisión
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of Taylor Swift being widely shared, causing harm through misinformation and violation of personal rights. The AI system's use in creating and disseminating these images directly led to harm to the individual and the community, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and involves violations of rights and reputational damage. Hence, the classification as AI Incident is appropriate.

X blocks Taylor Swift searches amid the proliferation of explicit AI-generated images

2024-01-28
El Progreso de Lugo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated explicit images of a person without consent, which is a violation of privacy and intellectual property rights, falling under harm category (c). The dissemination of such images on a social media platform caused harm to the individual and the community, fulfilling the criteria for an AI Incident. The AI system's development and use directly led to this harm. The blocking of searches and account suspensions are responses but do not negate the occurrence of harm. Hence, the classification as AI Incident is appropriate.

What a sexual deepfake is: the new form of sexual harassment on the internet

2024-01-24
infobae
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation via machine learning) to create manipulated sexual content that harasses and harms individuals, particularly women and minors. This constitutes a violation of human rights and personal dignity, fulfilling the criteria for harm under AI Incident definition (c). The harm is realized and ongoing, as evidenced by the reported increase and victim testimonies. Therefore, this is classified as an AI Incident.

US lawmakers push legislation in response to Taylor Swift deepfake

2024-01-27
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images that are non-consensual and explicit, which have been widely disseminated on social media platforms. This dissemination causes harm to the individual (Taylor Swift) and potentially to others targeted by similar content, constituting violations of rights and harm to communities. The platforms are actively removing such content, and legislators are pushing for laws to criminalize these acts, indicating recognized harm. The AI system's use in generating and spreading these images is directly linked to the harm described, meeting the criteria for an AI Incident.

How what was done to Taylor Swift could be outlawed

2024-01-27
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The creation and distribution of non-consensual deepfake images using AI systems directly harms individuals by violating their privacy and rights, as exemplified by the Taylor Swift case. The article details actual harm caused by AI-generated content and the legislative and platform measures responding to this harm. Since the AI system's use has directly led to violations of rights and reputational damage, this qualifies as an AI Incident rather than a hazard or complementary information.

'Deepfakes' in porn: how this technology is used without the victims' consent (mostly famous women) - Maldita.es

2024-01-26
Maldita.es — Periodismo para que no te la cuelen
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated pornographic content without consent, which constitutes a violation of human rights and personal dignity, specifically the right to privacy and protection from non-consensual sexual content. The harm is realized and ongoing, affecting the victims directly. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to individuals and communities.

The dangers of 'deepfakes': technology that violates privacy

2024-01-24
La Hora Noticias de Ecuador, sus provincias y el mundo
Why's our monitor labelling this an incident or hazard?
Deepfakes are explicitly described as AI-generated content that has been used maliciously to harm individuals' privacy and security, including a concrete example of students distributing deepfake images without consent. The article also mentions legislative efforts motivated by these harms. The involvement of AI systems in causing direct harm to individuals' rights and privacy is clear, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, and the AI system's use is central to the incident.

What are "DeepFakes"? The dangerous illegal practice

2024-01-25
Diario de Morelos
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as deepfakes are created using AI techniques to generate highly realistic manipulated images and videos. The harms described include violations of rights (privacy, dignity, psychological harm), especially affecting women and minors, which have already occurred and are ongoing. The article reports on actual cases and the widespread presence of such content, not just potential risks. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

January 26, 2024

2024-01-27
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake images, which are AI-generated manipulated media. The harm is realized as the non-consensual sharing of explicit images causes privacy violations and emotional harm to the victim, Taylor Swift, and potentially others. The article also highlights the broader societal harm to victims of similar AI-generated abuses. Since the harm is occurring and directly linked to the use of AI systems, this qualifies as an AI Incident under the framework, specifically under violations of human rights and privacy protections.

Pressured by Taylor Swift fans and even the White House, platform X blocks searches for the singer's name

2024-01-29
UOL notícias
Why's our monitor labelling this an incident or hazard?
An AI system was used to create deepfake images, which are AI-generated content that caused harm by spreading false and damaging material about a person, constituting harm to the community and potentially violating rights. The platform's response to block searches is a mitigation measure. Since the AI-generated deepfakes have already been disseminated and caused harm, this qualifies as an AI Incident due to realized harm from AI-generated content.

Taylor Swift is the victim of AI pornography and fans report a crime; explained

2024-01-25
TechTudo
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated explicit images of Taylor Swift being shared, which is a direct violation of her rights and is considered a criminal act under applicable law. The AI system's use in creating these fake images directly leads to harm (violation of rights and potential psychological harm). Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

Taylor Swift: X says it prioritizes safety by blocking searches

2024-01-29
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images that are sexually explicit and non-consensual, causing harm to the individual depicted (Taylor Swift) and potentially to the broader community through the spread of abusive content. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The platform's response to block searches is a mitigation measure but does not negate the occurrence of harm. Therefore, this is classified as an AI Incident.

Taylor Swift fans outraged by fake pornographic photos made with artificial intelligence

2024-01-26
O Globo
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated non-consensual deepfake pornography constitutes a violation of rights and causes harm to the individual and potentially to communities. The AI system's use in generating these images is central to the harm. Since the harm is realized and ongoing, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities. The article also mentions responses and policy measures, but the primary focus is on the incident itself.

Taylor Swift is the victim of fake nudes generated by artificial intelligence

2024-01-27
uol.com.br
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated deepfake pornography targeting a real person, Taylor Swift. This constitutes a violation of her rights and causes harm to her reputation and dignity. The AI system's use in generating these images is central to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as it involves harm to a person and violation of rights directly linked to AI-generated content.

Taylor Swift: fans unable to search for the singer's name on X after controversy over fake AI-created nudes

2024-01-27
Revista Marie Claire Brasil
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake nude images of a public figure, which is a recognized harm related to AI misuse. However, the article focuses on the social media platform's response (search restrictions) rather than the direct occurrence of harm or a new incident. There is no explicit mention of realized harm such as legal violations or physical harm, only the potential reputational and community impact. The platform's action is a governance or societal response to an existing AI-related issue, fitting the definition of Complementary Information rather than an Incident or Hazard.

X blocks searches for 'Taylor Swift' after fake pornographic photos made with AI go viral

2024-01-29
O Globo
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create non-consensual pornographic deepfake images of a public figure, which have been widely disseminated, causing harm to the individual's privacy and dignity. The AI system's use directly led to the harm (violation of rights and harm to community standards). The platform's temporary blocking of searches is a response to this harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities).

X blocks searches about Taylor Swift after AI videos

2024-01-28
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and widespread dissemination of AI-generated sexually explicit deepfake images of a public figure, which is a direct violation of rights and causes harm to the individual and the community. The AI system's role in generating these images is central to the incident. The platform's response to block searches indicates recognition of the harm caused. Therefore, this event meets the criteria for an AI Incident due to realized harm stemming from the use of AI systems.

Pornographic images of Taylor Swift! Not like this, artificial intelligence...

2024-01-27
Pplware
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create pornographic deepfake images of a real person without consent, which is a clear violation of human rights and privacy. The widespread dissemination of these images on social media causes harm to the individual and communities by spreading non-consensual explicit content. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities). The platform's removal efforts are a response but do not change the classification of the event as an incident.

X blocks searches for "Taylor Swift" after deepfakes of the singer spread

2024-01-29
Pplware
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfakes, which are AI-generated synthetic media, as the cause of the harmful content spread. The harm includes reputational damage and the spread of false sexually explicit images, which can be considered harm to the individual and communities. The social media platform's blocking of search results is a direct response to this AI-driven harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and reputational damage.

Searches for pornographic Taylor Swift deepfakes blocked on social network X

2024-01-29
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating manipulated pornographic images without consent, which is a direct violation of rights and causes harm to the individual depicted and the broader community. The widespread sharing and virality of these images on major platforms confirm that harm has materialized. The use of AI for creating non-consensual deepfake pornography is a recognized form of harm under the framework, specifically under violations of human rights and harm to communities. The platforms' interventions are reactive and do not change the classification of the event as an AI Incident.

Pressured by fans and the White House over pornographic deepfakes, X blocks searches for Taylor Swift

2024-01-29
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and viral spread of AI-generated deepfake pornographic images of Taylor Swift, which constitute a violation of personal rights and cause harm to the individual and community. The AI system's role in generating these images is central to the harm. The widespread dissemination on social media platforms and messaging services further amplifies the impact. The harm is realized and ongoing, not merely potential. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities).

Taylor Swift is the victim of AI pornography and is expected to sue those responsible

2024-01-26
Tecnologia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create fake pornographic images (deepfakes) of a real person without consent, which is a clear violation of rights and causes harm. The harm is realized as the images have been published and circulated, leading to reputational and personal harm to Taylor Swift. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual. The article also discusses legal considerations and platform removals, but the primary focus is on the harm caused by the AI-generated content.

Fake pornographic images of Taylor Swift created by AI cause outrage

2024-01-27
SAPO Mag
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create non-consensual pornographic deepfake images of a public figure, Taylor Swift. These images have been widely disseminated, causing harm to the individual's rights and dignity, as well as harm to communities by spreading degrading content. The AI system's use directly led to these harms. The article also discusses regulatory and societal responses, but the primary focus is on the harm caused by the AI-generated content. Hence, it meets the criteria for an AI Incident.

After fake nudes spread, Taylor Swift's name is blocked on X, formerly Twitter; explained

2024-01-28
Terra
Why's our monitor labelling this an incident or hazard?
The article describes an AI system being used to create false nude images of a public figure, which were widely shared and caused harm to the individual and the community. The AI-generated content led to direct harm (violation of rights and reputational damage) and prompted platform actions and legislative discussions. This fits the definition of an AI Incident, as the AI system's use directly led to harm (violation of rights and harm to community).

Taylor Swift is the victim of AI pornography and is expected to sue those responsible

2024-01-26
Terra
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and dissemination of AI-generated deepfake pornographic images without consent, which is a direct violation of personal rights and privacy, and is criminalized by law. The AI system's role in generating these images is central to the harm caused. The harm is actual and ongoing, not merely potential. The article also mentions legal and social responses, but the primary focus is on the harm caused by the AI-generated content. Hence, this is an AI Incident due to realized harm involving an AI system.

X: social network removes searches for "Taylor Swift" after AI-created nudes go viral

2024-01-29
Terra
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images that have been circulated, causing harm to the individual’s privacy and dignity, which constitutes a violation of rights under the framework. The social media platform's removal of search results and content, as well as legal considerations, confirm that harm has materialized. The AI system's misuse (creation of deepfake images) directly led to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Taylor Swift is the victim of AI pornography and is expected to sue those responsible

2024-01-26
Canaltech
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are created using AI systems capable of generating realistic fake content. The harm is realized as the images are publicly shared, causing violation of privacy and defamation, which are breaches of fundamental rights. The involvement of AI in creating the harmful content and the resulting violation of rights and harm to the individual clearly qualifies this as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

X (formerly Twitter) blocks searches about Taylor Swift after explicit AI images go viral

2024-01-29
TecMundo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating harmful content (explicit images) without consent, which has been widely disseminated, causing harm to the individual’s rights and to the community by spreading false and harmful material. The platform's intervention to block searches is a response to an ongoing AI Incident. The AI system's use in creating non-consensual explicit images directly led to the harm described, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Fake sexual images of Taylor Swift go viral and fans call for protection against AI

2024-01-26
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated deepfake images with sexual content of a real person, which constitutes a violation of rights and harm to the individual and community. The AI system's use directly caused this harm. The widespread dissemination and difficulty in controlling the spread further emphasize the impact. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to community).

Twitter bars searches about Taylor Swift 'for safety'

2024-01-28
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are created using AI systems capable of generating realistic fake content. The circulation of these images constitutes a violation of rights (non-consensual explicit content) and causes harm to the individual and community. The platform's blocking of searches and content removal efforts are responses to an ongoing AI Incident. Since the harm (distribution of non-consensual deepfake pornography) has already occurred and is ongoing, this qualifies as an AI Incident rather than a hazard or complementary information.

Fake pornographic images of Taylor Swift cause outrage in the US

2024-01-26
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative tools were used to create false pornographic images (deepfakes) of Taylor Swift, which were widely disseminated on social media platforms. This constitutes a violation of rights (non-consensual use of images) and harm to the individual and community by spreading harmful content. The AI system's use directly led to this harm, fulfilling the definition of an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI-generated content.

What are deepnudes? Taylor Swift is the victim of AI-generated pornographic images

2024-01-26
Exame
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake nude images ('deepnudes') of Taylor Swift, which are created and disseminated without consent, constituting a violation of rights and causing harm to the individual and community. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized (images are widely viewed and shared), and the event involves the use of AI systems to generate and spread harmful content. Therefore, this is classified as an AI Incident.

Taylor Swift may take legal action over pornographic montages made with AI

2024-01-26
Notícias ao Minuto Brasil
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create false pornographic images of Taylor Swift, which are being disseminated on social media. This use of AI has directly led to harm in terms of violation of privacy, potential defamation, and emotional distress, fitting the definition of harm to a person or group (a) and violations of rights (c). The AI system's role is pivotal as it generated the harmful content. Hence, this is classified as an AI Incident.

Fake pornographic images of Taylor Swift created by artificial intelligence cause outrage

2024-01-27
Correio da Manha
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and dissemination of AI-generated pornographic deepfake images, which are non-consensual and harmful to the individual depicted, constituting a violation of rights and harm to communities. The AI system's use is central to the incident, as the images are generated by AI and have been widely shared, causing indignation and calls for legal action. This meets the criteria for an AI Incident because the harm has already occurred and is directly linked to the AI system's outputs.

Fake nudes of Taylor Swift circulate on the web, outraging fans

2024-01-26
O TEMPO
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create fake explicit images of celebrities, which are then shared online. This use of AI directly causes harm by violating privacy and potentially other rights, as well as causing reputational damage and distress. The involvement of AI in generating these images and the resulting harm fits the definition of an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities.

Taylor Swift's name can no longer be searched on X after fake nudes

2024-01-27
O TEMPO
Why's our monitor labelling this an incident or hazard?
The article describes the creation and sharing of AI-generated fake nude images of Taylor Swift, which is a direct violation of her rights and causes harm to her reputation and privacy. The involvement of AI in generating these images is explicit, and the harm is realized as the images are being shared and causing distress. The response from the Hollywood actors' union underscores the severity and legal implications of such AI misuse. Hence, this event meets the criteria for an AI Incident due to realized harm stemming from AI misuse.

After fake nudes spread, Taylor Swift's name is blocked on X, formerly Twitter; explained

2024-01-28
Estadão
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread sharing of AI-generated fake nude images of Taylor Swift, which constitutes a violation of privacy and reputational harm. The AI system's use in generating these images directly led to harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the images were viewed millions of times and caused significant distress and platform intervention. The involvement of AI in generating the manipulated content is explicit and central to the incident.

Taylor Swift is the victim of AI-generated 'nudes' on social media

2024-01-26
Estadão
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate deepfake images that are sexually explicit and non-consensual, which constitutes a violation of rights and harm to the individual and community. The harm is realized as the images have been spread and caused distress. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities. The article also discusses responses and policy considerations, but the primary focus is on the incident of harm caused by AI-generated content.

Pornographic deepfakes of Taylor Swift worry the White House

2024-01-27
Publico
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate deepfake images, which are highly realistic and non-consensual, causing harm to the person depicted (Taylor Swift) and potentially to communities through exploitation and disinformation. The harm is realized, not just potential, as the images have been widely viewed and disseminated. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article also discusses societal and governance responses, but the primary focus is on the harm caused by the AI-generated content.

Tried searching for Taylor Swift on X today? You won't be able to

2024-01-29
Jornal Expresso
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that have been widely disseminated, causing harm to the individual's rights and potentially to the community by spreading false and explicit content. The social media platform's blocking of search results is a direct response to this harm. The AI system's use has directly led to violations of rights and harm to the community, fitting the definition of an AI Incident.

Fake pornographic images of Taylor Swift cause outrage in the US

2024-01-26
O Povo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create false pornographic images (deepfakes) of a public figure. The viral dissemination of these images caused indignation and harm to the individual's rights and dignity, which constitutes a violation of human rights and harm to communities. Since the AI system's use directly led to these harms, this qualifies as an AI Incident under the framework.

Fake pornographic images of Taylor Swift cause outrage in the US

2024-01-26
Correio do povo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create fake pornographic images of Taylor Swift, which were widely shared and viewed, constituting a violation of rights and causing harm to the individual and community. The AI system's outputs directly led to this harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and involves violations of rights and harm to communities through toxic content dissemination. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Social network X blocks searches for Taylor Swift after manipulated images of the artist begin to circulate

2024-01-28
Observador
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of Taylor Swift being shared without consent, which is a violation of privacy and potentially other rights. The social media platform's response to block searches and remove content indicates the harm is occurring and recognized. The AI system's role in generating the manipulated images is pivotal to the harm. Hence, this is an AI Incident due to realized harm caused by AI-generated content violating rights.

Fake pornographic images of Taylor Swift created by AI are shared online, sparking outrage

2024-01-27
Observador
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems creating deepfake pornographic images, which are non-consensual and harmful to the individual depicted, constituting a violation of rights and harm to communities. The widespread sharing of these images and the platform's delayed removal demonstrate direct harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.

Taylor Swift's name can no longer be searched on X after fake nudes of the singer

2024-01-27
agazeta.com.br
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create fake pornographic images without consent, which constitutes a violation of rights and harm to the individual (Taylor Swift). The harm is direct and ongoing, as the images are being shared and causing reputational and emotional harm. The involvement of AI in generating the images and the resulting harm fits the definition of an AI Incident. The societal and organizational responses further confirm the recognition of harm caused by AI misuse.

X blocks searches for "Taylor Swift" after fake pornographic photos made with AI go viral

2024-01-29
Folha - PE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems creating deepfake images, which are false but highly realistic and non-consensual, constituting a violation of personal rights and causing harm to the individual and community. The viral spread of these images on a major platform and the resulting public and political outcry confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and their dissemination.

Fans report posts with fake nude photos of Taylor Swift

2024-01-25
O Liberal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and pornographic images of a real person without consent, which is a violation of privacy and personal rights, falling under harm category (c): violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the images are circulating and causing reputational and emotional harm. Therefore, this qualifies as an AI Incident. The mention of legal measures supports the recognition of harm but does not change the classification.

Fake nude photos of Taylor Swift, made with artificial intelligence, spark outrage

2024-01-25
TNH1
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated fake nude images of Taylor Swift are circulating and causing outrage. The AI system's role in generating these images is central to the harm, which includes violation of privacy and potential emotional distress. This fits the definition of an AI Incident as the AI system's use has directly led to harm, specifically violations of human rights and harm to communities. The mention of legal responses further supports the recognition of harm caused by AI misuse.

Fake intimate photos of Taylor Swift are published online, sparking outrage in the US

2024-01-27
itatiaia.com.br
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are realistic fabricated content created by generative AI systems. The dissemination of these images constitutes a violation of rights, specifically privacy and potentially other human rights, fulfilling the criteria for harm under the AI Incident definition. The harm is realized as the images were widely viewed and caused public indignation and concern. The involvement of AI in generating the harmful content and its direct role in causing harm to the individual and community justifies classification as an AI Incident rather than a hazard or complementary information.

Fake Taylor Swift nudes go viral online

2024-01-26
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system to create fake nude images of Taylor Swift, which were then spread online. This constitutes a violation of rights (privacy, dignity) and causes harm to the individual and her community of fans. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Taylor Swift's name can no longer be searched on X after fake nudes

2024-01-28
Folha de Londrina
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated fake nude images of Taylor Swift without her consent. This constitutes a violation of rights and causes harm to the individual and community by spreading harmful false content. The AI system's role in generating these images is central to the harm occurring. Therefore, this qualifies as an AI Incident under the definitions provided, as it involves harm to rights and communities directly caused by the AI system's outputs.

Fake pornographic images of Taylor Swift created by AI cause outrage

2024-01-27
DNOTICIAS.PT
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI systems creating false pornographic images (deepfakes) without consent, which constitutes a violation of individual rights and harms communities by spreading degrading content. The harm is realized and ongoing, as the images have been widely shared and caused indignation. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and harm to communities.

Taylor Swift fans outraged by fake pornographic photos made with artificial intelligence

2024-01-27
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI generative systems to create false pornographic images (deepfakes) of Taylor Swift, which were widely disseminated on social media platforms. This constitutes a violation of rights (non-consensual use of likeness and potential harm to reputation and privacy), fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized as the images were viewed millions of times and caused public outrage and political concern. Therefore, this is classified as an AI Incident.

Fake images of a nude Taylor Swift created by artificial intelligence flood social media

2024-01-26
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift were created and widely disseminated, causing harm through non-consensual pornography and harassment. The AI system's use (generative diffusion models) directly led to the creation and spread of harmful content, fulfilling the criteria for an AI Incident. The harm includes violation of rights and harm to the individual and community. The discussion of legislative responses and platform moderation efforts further supports the recognition of this as a significant AI Incident rather than a mere hazard or complementary information.

Fake pornographic images of Taylor Swift created by AI cause outrage

2024-01-27
NOTÍCIAS DE COIMBRA
Why's our monitor labelling this an incident or hazard?
The event involves generative AI systems creating fake pornographic images (deepfakes) of a public figure without consent, which is a clear violation of rights and causes harm to the individual and community. The harm is realized as the images have been viewed millions of times and circulated on social media platforms. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article also discusses the platform's response and regulatory concerns, but the primary focus is on the harm caused by the AI-generated content.

Taylor Swift's name can no longer be searched on X after fake nudes of the singer

2024-01-29
Diario de Cuiabá
Why's our monitor labelling this an incident or hazard?
The article describes the creation and sharing of AI-generated fake nude images of Taylor Swift, which is a direct violation of her rights and causes harm to her reputation and privacy. The AI system's role in generating these images is central to the harm. The platform's search issues are likely a consequence of the incident. The involvement of AI in producing harmful content that is actively disseminated and causing distress meets the criteria for an AI Incident under violations of rights and harm to communities.

Fake nude photos of Taylor Swift, made with artificial intelligence, cause outrage

2024-01-26
Diario de Cuiabá
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create false nude images of Taylor Swift, which are being disseminated on social media. This use of AI directly leads to harm in the form of violation of privacy and human rights, as well as reputational damage and emotional distress. The harm is occurring, not just potential, as the images are already circulating and causing public reaction. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Social network X blocks searches for Taylor Swift's name; here's why

2024-01-29
Terra Brasil Notícias
Why's our monitor labelling this an incident or hazard?
The article describes the generation and sharing of AI-generated pornographic images without consent, which constitutes a violation of rights and harm to the individual and community. The AI system's role in creating these images is explicit and central. The harm is realized, not just potential, as the images have been shared and caused public outcry and concern from the actors' union. The platform's search malfunction is likely a response to this issue but is not itself the harm. Therefore, this qualifies as an AI Incident due to the violation of rights and harm to communities.

Exame Informática | Pornographic deepfakes of Taylor Swift originated in a Telegram group

2024-01-29
Visão
Why's our monitor labelling this an incident or hazard?
The article describes the creation and sharing of AI-generated deepfake pornographic images of a public figure without consent, which constitutes a violation of rights and harm to communities. The AI tools (such as Microsoft's Designer) were used to generate these images, and the dissemination caused reputational and privacy harm. This fits the definition of an AI Incident as the AI system's use directly led to harm (violation of rights and harm to communities).

Fake pornographic images of Taylor Swift spark fan outrage

2024-01-26
O Antagonista
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that are sexually explicit and falsely depict Taylor Swift, which have been widely disseminated on social media. This use of AI directly leads to harm by violating personal rights and causing reputational and emotional damage. The involvement of AI in generating these images is clear, and the harm is realized, not just potential. The event also highlights challenges in content moderation and regulatory gaps, but the primary classification is an AI Incident because the harm has occurred due to the AI system's malicious use.

AP Business SummaryBrief at 11:00 p.m. EST

2024-01-27
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as deepfake technology is an AI-based method for generating realistic fake images. The harm is realized and direct, as the non-consensual pornographic images violate Taylor Swift's rights and cause reputational and emotional harm. The spread on social media platforms further amplifies the harm to communities by normalizing or enabling abuse. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities.

AP Business SummaryBrief at 7:57 p.m. EST

2024-01-27
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology to create nonconsensual pornographic images. The harm is realized as these images are actively circulating online, violating the individual's rights and causing reputational and personal harm. The involvement of AI in generating the harmful content and the direct link to the violation of rights meets the criteria for an AI Incident. The article does not merely discuss potential harm or responses but reports on an ongoing harmful event caused by AI misuse.

AP Business SummaryBrief at 9:58 p.m. EST

2024-01-27
Beckley Register-Herald
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create and spread non-consensual explicit images of a person, which constitutes a violation of rights and harm to the individual and community. The harm is realized and ongoing, as the images are actively circulating on social media. The AI system's role is pivotal in generating the harmful content. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Taylor Swift: X blocks searches for the pop icon due to thousands of AI deepfakes

2024-01-29
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake images that have been widely distributed and caused harm by violating the rights and privacy of Taylor Swift, a public figure. The AI system's use in creating non-consensual sexual images directly leads to harm (violation of rights and sexual harassment). The platforms' responses to block or remove such content further confirm the recognition of harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm, including violations of rights and harm to the individual and community.

Taylor Swift fans remain disgusted by the inappropriate AI photos of her

2024-01-29
Μικροπράγματα
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated sexually inappropriate images (deepfakes) of a public figure being distributed on social media, causing outrage among fans and potential legal implications. The AI system's use directly leads to harm in terms of violation of rights and reputational damage, fitting the definition of an AI Incident under violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident.

Twitter temporarily blocks searches for Taylor Swift

2024-01-29
PCMag Greece
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for content moderation on a social media platform where AI likely plays a role in detecting and managing harmful, digitally fabricated explicit images. The spread of such images causes harm to the individual depicted and the community, fulfilling the criteria for an AI Incident. The platform's temporary blocking of searches and suspension of accounts indicates a response to an ongoing harm caused or facilitated by AI systems. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Uproar over pornographic photos of Taylor Swift: they were created using artificial intelligence (vid)

2024-01-27
Gazzetta.gr - Sports News Portal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images, which are explicitly mentioned as being generated through artificial intelligence. The harm is realized and significant, including emotional and reputational harm to Taylor Swift and broader societal harm through the spread of non-consensual pornographic content. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm. The article also discusses societal and legislative responses, but the primary focus is on the incident itself.

Taylor Swift: Inappropriate AI-generated photos spread on social media

2024-01-26
SecNews.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating inappropriate images (deepfakes) of a public figure, which were then spread on social media, causing harm. The AI system's use is central to the incident, as the images are AI-generated and non-consensual, leading to reputational and privacy harms. The dissemination of such content on a large scale and the discussion of its harmful consequences align with the definition of an AI Incident involving violations of rights and harm to communities. Therefore, this event is classified as an AI Incident.

US lawmakers take on deepfakes after the Taylor Swift scandal

2024-01-29
SecNews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images that were widely disseminated, causing harm to Taylor Swift's reputation and emotional health. This constitutes a violation of rights and harm to the individual, which aligns with the definition of an AI Incident. The involvement of lawmakers and platform responses further confirms the recognition of harm caused by the AI system's misuse. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

Taylor Swift: Nude AI photos of her circulated - the singer is furious

2024-01-26
NEWS 24/7
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have caused harm to Taylor Swift by violating her rights and spreading inappropriate content without consent. The widespread distribution of these images on social media platforms constitutes harm to the individual and communities, fitting the definition of an AI Incident under violations of human rights and harm to communities. The mention of political calls for legal action further underscores the recognized harm and societal impact. Therefore, this event qualifies as an AI Incident.

Taylor Swift: The platform "X" blocked searches for her deepfake photos

2024-01-29
HuffPost Greece
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated by AI systems and their non-consensual distribution causes harm to the individual's rights and reputation, fitting the definition of an AI Incident due to violations of human rights and harm to communities. The platform's blocking of searches and user reports are responses but do not negate the occurrence of harm. Therefore, this event qualifies as an AI Incident.

Taylor Swift's nude AI photos are revenge porn, and you are the next victim

2024-01-29
Ladylike.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to create deepfake images that depict Taylor Swift in non-consensual pornographic scenarios. This use of AI has directly caused harm by enabling sexual and psychological abuse, which fits the definition of an AI Incident under violations of human rights and harm to individuals. The widespread sharing and viral nature of these images further amplify the harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Taylor Swift: Nude AI photos of her circulated

2024-01-28
InStyle
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate synthetic nude images without consent, which were widely disseminated, causing harm to Taylor Swift's privacy and dignity. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images were viewed millions of times and caused significant distress. The involvement of AI in creating the harmful content and its role in the incident is clear and direct. Hence, the classification as AI Incident is appropriate.

Obscene photos created with artificial intelligence had spread... Now even searching Taylor Swift's name is banned!

2024-01-28
Hürriyet
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake explicit images, which were then disseminated on a social media platform, causing reputational and emotional harm to Taylor Swift. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The platform's removal of content and search restrictions are responses to the harm caused. The involvement of AI in generating harmful content and the resulting direct harm to the individual and community justify classification as an AI Incident rather than a hazard or complementary information.

Legal regulation from the White House over Taylor Swift's obscene images

2024-01-28
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create fake explicit images of a real person, which are being shared on social media and cause privacy-related harm that may violate rights. The White House's statement focuses on the risk and on ongoing efforts to address the issue through legal and governance measures. The article does not report a specific, concrete AI incident with direct harm beyond the sharing of these images; it instead emphasizes the risk and the need for regulation and content management. It is therefore best classified as Complementary Information, as it provides context on societal and governance responses to AI-generated harmful content rather than reporting a new AI Incident or AI Hazard.

Searching for Taylor Swift on X has been blocked

2024-01-29
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake explicit images of a person, which constitutes a violation of personal rights and can be considered harm to the individual and community. The dissemination of such AI-generated content has already occurred, causing reputational and privacy harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities. The mention of governmental efforts to address the issue supports the seriousness of the harm but does not change the classification.

The White House steps in for Taylor Swift: decision on the obscene images

2024-01-27
Sabah
Why's our monitor labelling this an incident or hazard?
The article involves AI systems generating harmful fake explicit images, which is a direct violation of personal rights and causes harm to the individual and potentially to communities. This fits the definition of an AI Incident because the AI-generated content has already been disseminated and caused harm. Although the article emphasizes the White House's response and legal efforts, the underlying event is the harmful AI-generated content already in circulation, meeting the criteria for an AI Incident.

Taylor Swift's deepfake porn inflamed the AI debate

2024-01-29
euronews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake pornographic content, which is a direct violation of personal rights and causes harm to the individual depicted and the broader community. The widespread sharing of these images on a social media platform and the platform's response to restrict searches and remove content confirm the AI system's role in causing harm. The involvement of political figures and calls for legal action further underscore the recognition of harm caused by AI misuse. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI-generated content misuse.

Legal regulation from the White House over Taylor Swift's obscene images

2024-01-27
NTV
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that violate privacy and can cause reputational and psychological harm to individuals, which falls under violations of human rights and harm to communities. Since the event describes ongoing harm from the sharing of such images and the government's response to mitigate it, it qualifies as an AI Incident. The White House's statement about legal measures and social media content management indicates recognition of existing harm and efforts to address it, rather than just a potential future risk or general information.

Legal regulation from the White House over Taylor Swift's obscene images

2024-01-27
CNN Türk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating non-consensual explicit deepfake images, which is a direct violation of privacy and can be considered sexual abuse, thus constituting harm under the AI Incident definition (violation of human rights and harm to individuals). The dissemination of these images on social media platforms causes realized harm. The responses by government and platforms are complementary information but the core event is an AI Incident due to the realized harm caused by AI-generated content.

Legal regulation from the White House over Taylor Swift's 'obscene' images

2024-01-27
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated explicit images of a real person (Taylor Swift) being shared on social media, causing harm through privacy violations and non-consensual sexual content. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to individuals). The responses from the White House and lawmakers indicate recognition of the harm and efforts to mitigate it, but the harm is already occurring. Hence, it is not merely a hazard or complementary information but an AI Incident.

Legal regulation from the White House over the obscene images of Taylor Swift made with artificial intelligence

2024-01-27
T24
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit deepfake images of Taylor Swift being shared on social media, which constitutes a violation of privacy and sexual abuse, harming individuals and communities. The involvement of AI in generating these images is explicit, and the harm is realized, not hypothetical. The responses from the White House and lawmakers indicate recognition of the harm and efforts to mitigate it. Hence, this event meets the criteria for an AI Incident due to direct harm caused by AI use.

A sex setup against Taylor Swift: where does artificial intelligence fit in... The White House steps in

2024-01-28
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake pornographic content of a real person without consent, which is a violation of human rights and causes harm to the individual and community. The harm is realized as the images are circulating and causing public concern. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

Pornographic images of Taylor Swift sweep social media

2024-01-26
CHIP Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate non-consensual pornographic deepfake images, which have been widely disseminated on a social media platform, causing harm to the individual depicted and potentially to communities by spreading harmful content. The AI system's use directly led to the harm (violation of rights and reputational harm). The platform's failure to remove the content promptly exacerbated the harm. Therefore, this qualifies as an AI Incident under the definitions, specifically under violations of human rights and harm to communities.

The debate of the moment: Taylor Swift and deepfake pornography

2024-01-27
CHIP Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake images, which are being maliciously distributed, causing direct harm to the person depicted (Taylor Swift) and potentially to others targeted by similar technology. This constitutes a violation of rights and emotional harm, fitting the definition of an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm caused by the AI-generated content, not just potential or complementary information.

They published nude photos of celebrities made with artificial intelligence! This is not heading anywhere good!

2024-01-29
Teknolojioku
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated explicit images (deepfakes) of a celebrity being shared widely, causing harm through non-consensual pornography. This is a clear violation of rights that harms the individual and the community, and the AI system's use in generating the images directly led to that harm. Therefore, this qualifies as an AI Incident under the framework.

New season: how much do you know about Heel Holland Bakt?

2024-01-28
Provinciale Zeeuwse Courant
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated using AI systems that create realistic but fake media. The sharing of such pornographic deepfake photos constitutes a violation of the individual's rights and can cause harm to their reputation and privacy. The platform's enforcement action (account suspension) indicates recognition of the harm. Therefore, this event involves the use of an AI system (deepfake generation) that has directly led to harm (violation of rights and reputational harm), qualifying it as an AI Incident.

"Protect Taylor Swift": Fans rally behind the singer after fake pornographic images go viral

2024-01-25
vrtnws.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated false pornographic images (deepfakes) of a person, which is a direct violation of rights and causes harm to the individual and community. The spread of such content is a clear harm caused by the use of AI systems. The involvement of AI in generating the harmful content and the resulting harm to the person and community meet the criteria for an AI Incident. The fans' actions and moderators' content removal are responses to this incident, not the primary event. Therefore, this event is classified as an AI Incident.

White House also outraged over the spread of fake nude images of Taylor Swift

2024-01-27
Telegraaf
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create and distribute false intimate images, which directly harms the individual depicted and violates rights. The harm is realized as the images were widely viewed and caused public outrage. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community). The political and social responses are complementary information but do not change the primary classification.

Taylor Swift no longer findable on X for unclear reasons

2024-01-28
RTL Nieuws
Why's our monitor labelling this an incident or hazard?
The article mentions AI-generated explicit photos as a possible reason for Taylor Swift's unavailability on the platform, indicating AI involvement. However, there is no clear evidence of harm caused by the AI system's development, use, or malfunction leading to injury, rights violations, or other harms. The event is about a social media search function failure or content moderation issue, which is not explicitly linked to AI malfunction or misuse causing harm. The mention of AI-generated content is background context rather than a direct cause of harm. Thus, the event is best categorized as Complementary Information, providing additional context about AI's societal impact without describing a new incident or hazard.

Why pornographic AI photos of Taylor Swift have the government worried

2024-01-28
FOK!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake pornographic images of a public figure spreading widely on social media, which is a direct violation of personal rights and can cause reputational and psychological harm. The AI system's use in generating these images directly led to harm, fulfilling the criteria for an AI Incident. The governmental concern and calls for regulation further support the recognition of harm already realized rather than just a potential hazard.

The seven faces of birthday celebrant Oprah Winfrey (70), and will she ever take on Trump?

2024-01-28
De Gelderlander
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images (pornographic deepfakes of Taylor Swift) circulating on social media, which is a direct use of AI systems to create harmful content. The harm is realized as the images are circulating and causing reputational and privacy harm, prompting official concern and platform intervention. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and violation of rights.

Why you no longer find anything on X when you search for Taylor Swift

2024-01-28
RTL Nieuws
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Microsoft Designer) used to create deepfake images that are non-consensual and sexually explicit, causing reputational harm and distress to Taylor Swift and the NFL. The spread of these images on social media constitutes harm to communities and violation of rights. The platform's blocking of search results is a response to this harm. Since the AI system's use has directly led to harm, this qualifies as an AI Incident under the framework, specifically harm to communities and violation of rights due to AI-generated non-consensual content.

The dark side of AI: how it can play out for celebrities

2024-01-26
DutchCowboys
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that are fake and sexually explicit, depicting Taylor Swift in ways that are harmful and non-consensual. The creation and spread of such AI-generated content directly causes harm to the individual and her community, fitting the definition of an AI Incident due to violation of rights and harm to communities. The AI system's use in generating and disseminating these images is central to the harm described.

From Taylor Swift and Celine Van Ouytsel to Emma Watson: "deepnudes" are flooding the internet (and not only on X)

2024-01-29
vrtnws.be
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated images (deepnudes) that have been widely disseminated, causing harm to the individuals depicted (Taylor Swift and others). The AI system's use directly led to violations of privacy and rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the images circulated widely and caused reputational and personal harm. The platform's mitigation efforts do not negate the occurrence of harm. Hence, the classification is AI Incident.

Platform X blocked searches for Taylor Swift after pornographic images of the artist, created with the help of AI, appeared

2024-01-29
Stirile ProTV
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images that are non-consensual and pornographic, which directly harms the individual depicted and violates their rights. The platform's response to block searches and remove content confirms the recognition of harm caused. The involvement of AI in creating and spreading these images meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use. The event is not merely a potential risk or a complementary update but a concrete incident of harm caused by AI misuse.

Fake pornographic images of Taylor Swift made with the help of artificial intelligence. The White House's reaction VIDEO

2024-01-27
adevarul.ro
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated false images (deepfakes) that have been widely distributed, causing harm to the individual depicted and potentially to the public discourse. The AI system's use directly led to the harm described, fulfilling the criteria for an AI Incident. The harm includes violation of privacy and reputational damage, which fall under violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Porn images of Taylor Swift, created with the help of artificial intelligence, went viral on X. The White House wants legislative measures from Congress

2024-01-27
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake pornographic images without consent, which directly leads to violations of human rights and harm to the individual and community. The harm is realized as the images have been widely viewed and distributed, causing reputational and emotional damage. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to communities).

X blocked searches for Taylor Swift after fake pornographic images of the artist went viral

2024-01-29
Ziare.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake images of a public figure without consent, which constitutes a violation of rights and harm to the individual and community. The spread of these images on a social media platform and the platform's intervention to block searches and remove content confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its dissemination.

Searches for Taylor Swift restored on X after a temporary block over explicit deepfake photos

2024-01-30
Mediafax.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI image generators to create explicit deepfake images of Taylor Swift, which were circulated on the platform X. This use of AI directly led to harm by spreading false and explicit content, violating the individual's rights and causing reputational and emotional harm. The platform's response to block searches temporarily and remove such content confirms the recognition of harm. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs.

Fake pornographic images of Taylor Swift distributed on the internet

2024-01-27
Gândul
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create false pornographic images (deepfakes) of a public figure, which have been widely distributed online. This use of AI has directly led to harm by violating the individual's rights and causing reputational and emotional damage, as well as harm to communities through the spread of degrading content. The article explicitly mentions the AI-generated nature of the images and the resulting public and political backlash, confirming the direct link between AI use and harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Taylor Swift, once again a victim of AI. White House: This is very alarming

2024-01-26
DCnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create false pornographic images of Taylor Swift, which have been widely viewed and caused reputational harm. This constitutes a violation of rights and harm to the individual and communities. The involvement of AI in generating and spreading these images meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use.

VIDEO White House concerned about the spread of fake pornographic images of Taylor Swift

2024-01-26
AGERPRES
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create false pornographic images of Taylor Swift, which were widely disseminated online. This constitutes a violation of rights and harm to the individual and communities through misinformation and defamation. The harm is realized, not just potential, as the images were viewed millions of times before removal. The involvement of AI in generating the harmful content and its role in causing reputational and societal harm meets the criteria for an AI Incident.

White House concerned about the spread of fake pornographic images of Taylor Swift - ARADON

2024-01-27
ARADON
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the false pornographic images were created with the help of generative AI and widely disseminated, causing harm to Taylor Swift's reputation and distress to the public and political figures. This fits the definition of an AI Incident because the AI system's use directly led to harm in terms of violation of rights and harm to communities. The harm is realized, not just potential, and the AI system's role is pivotal in generating the false content.

Taylor Swift in the spotlight. Fake pornographic images of the artist are circulating online

2024-01-27
România Liberă
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the false pornographic images were generated using AI (deepfake technology). The harm caused includes violation of personal rights and reputational harm to Taylor Swift, which falls under violations of human rights and harm to individuals. The event describes actual harm that has occurred due to the AI system's use, not just a potential risk. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm.

General outrage after pornographic images of Taylor Swift, created with the help of artificial intelligence, went viral on X. The White House wants legislative measures from Congress

2024-01-27
News.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative technology was used to create false pornographic images of Taylor Swift, which were widely disseminated and viewed millions of times. This directly leads to harm in terms of violation of rights and harm to the individual and community. The AI system's use in generating these deepfake images is central to the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI-generated content. The legislative and social responses mentioned are complementary but do not change the primary classification of the event as an AI Incident.

X blocked searches for Taylor Swift after AI-created pornographic images of the artist went viral

2024-01-28
News.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic images, which constitute a violation of rights (non-consensual explicit content) and harm to the community by spreading false and harmful content. The harm is realized as the images went viral and caused significant concern, prompting platform intervention and official attention. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI-generated content.

The X platform blocked searches for Taylor Swift after fake pornographic images of her went viral. The White House's reaction - B1TV.ro

2024-01-29
B1TV.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images that are non-consensual and pornographic, which constitutes a violation of individual rights and causes harm to the person depicted and the broader community. The platform's active removal and blocking of searches indicate the harm is occurring and recognized. The AI system's role in generating these images is pivotal to the harm. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of AI systems to create and spread harmful content.

Photos of Taylor Swift blocked on the X network after fake images of the artist in pornographic poses appeared

2024-01-29
comisarul.ro
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create false pornographic images (deepfakes) of a real person, Taylor Swift, which were widely disseminated and viewed by millions. This constitutes a violation of rights (non-consensual intimate imagery) and harm to the community (spread of harmful misinformation and abuse). The AI system's use directly led to these harms, qualifying this as an AI Incident. The platform's removal efforts and political responses are complementary but do not change the classification of the core event as an incident.

Fake online images of Taylor Swift alarm White House

2024-01-26
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that have been widely disseminated, causing harm to the individual's rights and reputational harm, which falls under violations of human rights and harm to communities. The AI system's use in generating these images directly leads to harm. Therefore, this qualifies as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Winston-Salem Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create explicit deepfake images without consent, which have been widely shared and caused harm. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential. The involvement of AI in generating the harmful content is clear and central to the event.

Explicit Deepfake Images of Taylor Swift Elude Safeguards and Swamp Social Media

2024-01-26
The New York Times
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit deepfake images that have been widely shared, causing harm to the individual depicted and distress to the public. The AI system's use directly led to violations of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images have been viewed millions of times and caused public outcry and platform interventions.

Taylor Swift 'furious' about explicit AI pics, may take legal action: report

2024-01-26
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are non-consensual and harmful, violating personal rights and causing emotional and reputational harm to Taylor Swift. The AI system's role in creating these images is central to the harm, fulfilling the criteria for an AI Incident. The circulation and hosting of these images on social media platforms further compound the harm. Therefore, this is classified as an AI Incident due to realized harm caused by AI-generated content.

Taylor Swift considering legal action over nude AI deepfakes shared online

2024-01-26
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event describes explicit AI-generated deepfake images of a celebrity being shared online without consent, which is a clear violation of rights and causes harm. The AI system's use in creating these images and their circulation on social media platforms directly leads to harm. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm due to the use of AI systems.

Can Taylor Swift sue over deepfake porn images? US laws make justice elusive for victims.

2024-01-26
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create nonconsensual deepfake pornographic images, which constitute a violation of personal rights and cause harm to the victim's dignity and privacy, fitting the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The article details actual harm (spread of explicit AI-generated images) and the challenges in legal recourse, indicating the harm has occurred, not just a potential risk. Therefore, this is an AI Incident rather than a hazard or complementary information. The focus is on the harm caused by AI misuse and the legal implications, not just on legal or societal responses alone.

Taylor Swift's Fake Explicit Images Leaves White House 'Alarmed' as Press Secretary Addresses Dangers of Deepfakes

2024-01-27
RadarOnline
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that can cause significant harm by spreading misinformation and violating individuals' rights, particularly through non-consensual intimate imagery. The circulation of such images constitutes harm to individuals and communities, fulfilling the criteria for an AI Incident. The press secretary's remarks confirm the harm is occurring and recognized at a high level, indicating the event is an AI Incident rather than a potential hazard or complementary information.

White House voices concern over 'alarming' Taylor Swift deepfakes

2024-01-26
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event describes explicit AI-generated deepfake images of Taylor Swift being widely disseminated on social media, constituting non-consensual pornography and misinformation. This directly harms the individual's rights and causes community harm. The AI system's use in generating and spreading these images is central to the incident. The harm is realized, not just potential, and involves violations of rights and harm to communities. Therefore, this qualifies as an AI Incident.

'Disgusting' Taylor Swift AI images circulated on X/Twitter despite platform rules

2024-01-26
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are synthetic media created using AI systems. The nonconsensual nature of the images and their widespread circulation caused harm to the individual depicted, violating her rights and potentially causing psychological and reputational harm. This fits the definition of an AI Incident as the AI system's use directly led to harm (violation of rights and harm to the community). The article also discusses the platforms' responses and legal considerations, but the primary focus is on the harm caused by the AI-generated content.

Searches for Taylor Swift on X come up empty after explicit AI pictures go viral | CNN Business

2024-01-27
CNN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated synthetic images that caused reputational and privacy harm to Taylor Swift, a public figure, through nonconsensual deepfake pornography. The AI system's use directly led to harm by creating and spreading deceptive and harmful content. The platform's temporary search restrictions and content removal are responses to this harm. The incident fits the definition of an AI Incident as it involves realized harm to a person and communities through AI-generated manipulated media.

Explicit, AI-generated Taylor Swift images spread quickly on social media | CNN Business

2024-01-25
CNN
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generated synthetic, explicit images of a public figure without consent, which were widely disseminated, causing reputational and privacy harm. This fits the definition of an AI Incident because the AI's use directly led to harm (violation of rights and harm to community through misleading content). The article discusses realized harm rather than just potential harm, so it is not merely an AI Hazard. It is not Complementary Information because the main focus is on the incident itself, not on responses or broader ecosystem context. Therefore, the event is classified as an AI Incident.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world | CNN Business

2024-01-26
CNN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create manipulated explicit images and videos without consent, directly leading to harm to individuals' rights, privacy, and social well-being. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The article provides multiple examples of realized harm, including minors and public figures being targeted, and discusses the societal impact and ongoing challenges in mitigating these harms. Therefore, this is classified as an AI Incident.

AI-generated porn is targeting women and kids all over the world

2024-01-27
lite.cnn.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate manipulated explicit images and videos without consent, which directly leads to harm including violations of privacy and rights, emotional and reputational damage, and harm to communities. The AI system's use in creating and disseminating these images is central to the harm described. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm to individuals and communities.

Taylor Swift deepfakes spark calls in Congress for new legislation

2024-01-26
BBC
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake images, which have directly led to harm in the form of emotional and reputational damage to individuals targeted, notably Taylor Swift and other women disproportionately affected. The widespread sharing of these AI-generated explicit images constitutes a violation of rights and causes harm to communities. The article also discusses legislative and platform responses, but the primary focus is on the realized harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Deepfake explicit images of Taylor Swift spread on social media: Swifties are fighting back

2024-01-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create non-consensual pornographic deepfake images of Taylor Swift, which have been widely shared on social media. This use of AI has directly caused harm by violating the individual's rights and causing reputational and emotional damage. The involvement of AI in generating these images and the resulting harm meets the criteria for an AI Incident under the OECD framework, specifically under violations of human rights and harm to communities. The article also mentions ongoing efforts to remove the content, but the harm is already realized and significant.

Deepfake trend claims its most popular victim: Fake, explicit images of Taylor Swift go viral

2024-01-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create fake explicit images without consent, which constitutes a violation of rights and causes harm to the individual and community. The harm is realized and ongoing, as the images have spread widely and caused distress. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The involvement of AI is clear and central to the harm described.

Taylor Swift considering legal action over graphic AI photos, report

2024-01-26
Economic Times
Why's our monitor labelling this an incident or hazard?
The AI system was used to create and distribute non-consensual, graphic deepfake images of a public figure, which is a violation of her rights and causes reputational and emotional harm. This fits the definition of an AI Incident as the AI's use has directly led to harm (violation of rights and harm to community). The article describes realized harm, not just potential harm, and the AI system's role is pivotal in generating the harmful content.

Taylor Swift searches blocked on X after fake explicit images spread

2024-01-29
Economic Times
Why's our monitor labelling this an incident or hazard?
The article notes that the fake explicit images were likely created with AI, indicating the involvement of AI systems in generating harmful content. The spread of these images has caused reputational harm and misinformation, which are harms to individuals and communities. The social media platform's action to block searches is a response to this harm. Since the harm has occurred and AI was a direct factor in creating the harmful content, this qualifies as an AI Incident under the framework, specifically harm to communities and violation of rights through misinformation and reputational damage.

Taylor Swift Deepfake Trend Sparks White House Worries

2024-01-27
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is an AI system's output. The circulation of these false images constitutes misinformation, which can harm communities by spreading falsehoods and potentially damaging reputations. The White House's concern and mention of executive action underscore the significance of the harm. Since the misinformation is actively spreading and causing concern, this qualifies as an AI Incident due to harm to communities through misinformation dissemination.

Explicit deepfake images of Taylor Swift elude safeguards, swamp social media - Times of India

2024-01-26
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (diffusion models) to create explicit deepfake images, which have been widely disseminated on social media, causing harm to the individual depicted and distress to the community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The AI system's use directly led to the harm through the creation and spread of non-consensual explicit content. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Taylor Swift deepfake images prompt US politicians to call for new...

2024-01-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images causing harm through sexual exploitation and violation of rights. The harm is realized as the images have been posted and circulated, prompting political calls for legal action. The AI system's use in generating these images is central to the harm, fulfilling the criteria for an AI Incident. The political and platform responses are complementary but do not change the primary classification.

Swifties Track Down Man Behind Explicit Deepfake Taylor Swift Pictures; White House to Take Action | - Times of India

2024-01-27
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems capable of creating realistic but fake content. The harm includes violation of privacy and reputational damage to Taylor Swift, as well as the spread of harmful content on social media, which affects communities and individuals. The article describes the harm as having occurred (images went viral, causing alarm), and the White House's response underscores the seriousness of the incident. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

White House Alarmed by Taylor Swift Deepfake Images | World News - Times of India

2024-01-27
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated by AI systems and their circulation can cause harm to individuals' reputations and potentially to communities by spreading misinformation. The article describes actual circulation of such images, indicating realized harm rather than just potential. Therefore, this qualifies as an AI Incident due to harm to communities and individuals through misinformation and false representation. The involvement of AI in generating deepfakes and the resulting harm meets the criteria for an AI Incident.

Deepfake: Taylor Swift targeted by AI-generated sexually explicit images on X | Business - Times of India

2024-01-26
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating realistic but fake content. The harm caused includes violation of privacy and non-consensual use of someone's likeness in sexually explicit material, which constitutes a breach of fundamental rights and harms the individual and community. The widespread dissemination and millions of views indicate the harm is realized, not just potential. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs.

Deepfake: X blocks some Taylor Swift-related searches on the platform | - Times of India

2024-01-28
The Times of India
Why's our monitor labelling this an incident or hazard?
AI-generated explicit deepfake images of Taylor Swift have surfaced on X, causing harm related to privacy and reputational damage. The AI system's use in generating these images directly led to this harm. The platform's blocking of certain searches is a response to this harm. The presence of AI-generated harmful content and its impact on individuals and communities fits the definition of an AI Incident, as it involves violations of rights and harm to communities. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.

What Microsoft CEO Satya Nadella has to say on Taylor Swift's explicit AI images | - Times of India

2024-01-28
The Times of India
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (Microsoft's text-to-image generator) to create explicit deepfake images of Taylor Swift, which have been widely shared and viewed millions of times. This constitutes a violation of rights (privacy and potentially intellectual property) and causes harm to the individual and communities by spreading non-consensual explicit content. The CEO's response highlights the ethical concerns and the need for safeguards. Since the harm (distribution of explicit AI-generated images) has already occurred, this qualifies as an AI Incident.

'Swifties' Outraged As AI-Generated Pornographic Images Of Taylor Swift Circulate Unfiltered On X, Spark Call For Stronger Regulation By Benzinga

2024-01-26
Investing.com UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (pornographic images) that has been widely disseminated, causing harm to the individual depicted and distress to their community. The AI system's use in creating and distributing these images constitutes misuse of AI technology leading to violations of rights and harm to communities. The harm is realized, not just potential, as the images circulated widely before removal. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Lewd Taylor Swift AI images likely originated in a Telegram chat group

2024-01-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate harmful content (non-consensual sexual images) that has been widely disseminated, causing distress and violating rights. The AI's role is pivotal as it enabled the creation of these images, and the harm is realized and ongoing. This fits the definition of an AI Incident due to violation of rights and harm to the individual and community.

Taylor Swift searches blocked on X after fake explicit images of pop singer spread

2024-01-28
The Guardian
Why's our monitor labelling this an incident or hazard?
The article describes the spread of fake explicit images of Taylor Swift that are possibly AI-generated, which constitutes misinformation and reputational harm. The AI system's role is indirect but pivotal in generating and spreading false content. The social media platform's response to block searches indicates recognition of harm caused. The harm is realized, not just potential, as the images have been widely viewed and caused public concern. Therefore, this event meets the criteria for an AI Incident due to the direct or indirect harm caused by AI-generated misinformation.

If anyone can get the US government to take deepfake porn seriously, it's Swifties | Arwa Mahdawi

2024-01-27
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake pornographic images without consent, which directly leads to harm to individuals' rights and causes significant emotional and reputational damage. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The article describes actual harm occurring, not just potential harm, and thus it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system's role is central to the harm described.

Taylor Swift deepfake pornography sparks renewed calls for US legislation

2024-01-26
The Guardian
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake pornography, which has been widely disseminated and viewed, causing direct harm to the individual depicted and raising broader societal concerns about sexual exploitation and rights violations. The harm is realized, not just potential, as millions have viewed the images, and the article details the emotional and reputational damage caused. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to individuals and communities.

Fake explicit Taylor Swift photos have politicians sounding off but will AI laws actually change?

2024-01-26
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornography that has been widely disseminated, causing harm to the individual depicted and raising broader concerns about sexual harassment and rights violations. The AI system's use in generating and spreading these images directly leads to harm (violation of rights and harm to communities). Therefore, this qualifies as an AI Incident. The legislative discussions and proposals are complementary information providing context and responses to the incident but do not themselves constitute a new incident or hazard.

Taylor Swift fans are furious about graphic fake AI images of the pop superstar being shared on X

2024-01-26
Business Insider
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake images, which are non-consensual and graphic, causing harm to the individuals depicted and their communities. The spread of such content on a major platform with insufficient moderation leads to realized harm, including violations of privacy and potential emotional harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities).

White House calls explicit AI-generated Taylor Swift images 'alarming,' urges Congress to act

2024-01-27
Fox News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images that are false and non-consensual, causing harm to the individual depicted and potentially to broader communities by enabling harassment and abuse. The White House's reaction and calls for legislation underscore the recognition of harm caused by the AI system's outputs. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities.

Taylor Swift's Name No Longer Searchable on X After AI-Generated Explicit Photos Go Viral - Yahoo Sports

2024-01-27
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake images that are sexually explicit and nonconsensual, which constitutes a violation of rights and harm to the individual and community. The AI system's use in creating and disseminating these images directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm caused by the use of an AI system.

Taylor Swift fans start online movement after vile AI photos

2024-01-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated explicit images of Taylor Swift were created and shared without her consent, constituting abusive and exploitative content. The AI system's use in generating these deepfake images directly leads to harm, including violations of rights and harm to the individual and community (fans and public). The involvement of AI in generating harmful content and the resulting abuse meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's outputs.
Thumbnail Image

Microsoft CEO: AI deepfake porn that targeted Taylor Swift 'alarming'

2024-01-27
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of Taylor Swift created using Microsoft's AI image generator, which were shared widely and caused distress. The AI system was used to produce nonconsensual sexual content, a clear violation of personal rights and harmful to the community by spreading such content. The harm is realized and directly linked to the AI system's use. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and harm to communities.
Thumbnail Image

Fury as extremely graphic AI images of Taylor Swift go viral

2024-01-25
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images that are sexualized and non-consensual, which is a direct violation of personal rights and causes harm to the individual depicted and the community. This fits the definition of an AI Incident because the AI-generated content has directly led to harm (violation of rights and harm to community). The article also references existing laws and ongoing legislative efforts to address such harms, reinforcing the recognition of the harm caused. Therefore, this is classified as an AI Incident.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media....

2024-01-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create pornographic deepfake images without consent, which have been widely spread, causing harm to the victim's rights and dignity. This constitutes a violation of human rights and a breach of obligations protecting fundamental rights. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to significant harm.
Thumbnail Image

Fake nudes of Taylor Swift spread across social media, sparking outrage

2024-01-26
Washington Post
Why's our monitor labelling this an incident or hazard?
The article describes the creation and rapid spread of AI-generated deepfake pornographic images of Taylor Swift, which is a clear violation of her rights and causes harm to her and her community. The AI system's role in generating these images is pivotal to the harm occurring. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated content.
Thumbnail Image

X suspends account that posted Taylor Swift AI porn

2024-01-25
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated pornographic images, which are nonconsensual and harmful to the individual depicted, constituting a violation of rights. The AI system's use in creating and distributing these images directly leads to harm (violation of rights and harm to the community). The harm is realized, not just potential, as the images are circulating and causing outrage. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Taylor Swift 'furious' about explicit AI images

2024-01-25
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake pornographic images of a real person without consent, which is a direct violation of personal rights and causes significant harm to the individual and her community. The AI-generated content is explicitly described as abusive and exploitative, and the harm is realized as the images have been widely disseminated online. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities). The article also discusses ongoing legal and societal responses, but the primary focus is on the harm caused by the AI-generated images.
Thumbnail Image

The sick websites that have posted deepfake porn of celebs for years

2024-01-25
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems capable of generating realistic fake content. The harm caused includes violations of personal rights, privacy, and dignity, which fall under violations of human rights and harm to communities. The repeated and ongoing nature of the publication, along with the backlash and legal actions, confirms that harm has materialized. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.
Thumbnail Image

Congress says Taylor Swift deep fake nudes posted online show AI NEEDS to be regulated: Bill launched to crack down on spread of 'abusive' images

2024-01-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral spread of AI-generated deepfake nude images of Taylor Swift and other individuals without their consent. This use of AI has directly caused harm by violating privacy rights and enabling sexual exploitation, which fits the definition of an AI Incident. The harm is realized and ongoing, as the images are actively shared and have caused distress. The article also discusses the absence of adequate legal frameworks to address this harm, reinforcing the significance of the incident. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Thumbnail Image

Deepfakes Have Been A Problem -- Taylor Swift Was The Breaking Point

2024-01-27
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfakes causing harm by producing and spreading false, explicit images of a public figure without consent, which constitutes a violation of rights and harm to communities. The involvement of AI systems in creating these deepfakes is clear, and the harm is realized and ongoing, meeting the criteria for an AI Incident. The article also highlights the broader societal impact and potential future harms, but the current realized harm takes precedence in classification.
Thumbnail Image

X Is Doing A Terrible Job Solving The Taylor Swift Deepfake Problem

2024-01-28
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create sexually explicit deepfake images of a real person, Taylor Swift, which is a violation of rights and causes harm to the individual and community. The social media platform's inadequate response (blocking searches but not removing content) has allowed the harmful content to persist, directly linking the AI system's use to ongoing harm. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to community).
Thumbnail Image

Taylor Swift artificial intelligence images circulate on X, prompt backlash from fans

2024-01-25
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating manipulated images (deepfakes) of a real person without consent, which constitutes a violation of privacy and human rights. The harm is realized as the images circulate publicly, causing distress and backlash from fans and inflicting reputational and personal harm on Taylor Swift. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and communities. The legislative efforts mentioned are complementary information but do not change the classification of the primary event.
Thumbnail Image

After Sexually Explicit AI Images Of Taylor Swift Circulated Online, SAG-AFTRA And The White House Issued Statements - Yahoo Sports

2024-01-27
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a clear example of an AI system's use causing harm through violation of rights and harm to communities. The images are non-consensual and sexually explicit, constituting sexual violence and privacy violations. The harm is realized as the images have circulated online, causing distress and harm to the individual and potentially others. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm (violation of rights and harm to communities).
Thumbnail Image

Twitter races to remove explicit fake Taylor Swift images

2024-01-26
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake explicit images, which is a direct violation of human rights and privacy (a breach of applicable law protecting fundamental rights). The harm is realized as the images went viral and caused reputational and personal harm to Taylor Swift. The AI system's use in generating these images directly led to the harm, qualifying this as an AI Incident. The platforms' responses are complementary information but do not change the classification of the event itself.
Thumbnail Image

Taylor Swift deepfake images prompt US politicians to call for new laws

2024-01-26
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake images that are sexually explicit and non-consensual, which constitutes a violation of rights and sexual exploitation, a form of harm to individuals. The circulation of these images on social media platforms has caused direct harm to the individual depicted and potentially to others targeted by similar content. The involvement of AI in generating these images and the resulting harm qualifies this as an AI Incident. The political response and calls for legislation are complementary information but do not change the classification of the core event as an AI Incident.
Thumbnail Image

Explicit AI photos of Taylor Swift were shared online. Legal experts weigh in on how she can fight back.

2024-01-26
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated explicit images (deepfakes) of Taylor Swift being circulated online, causing harm to her personal rights and reputation. The AI system's use in creating non-consensual synthetic media directly leads to a violation of rights and harm to the individual, which aligns with the definition of an AI Incident. The discussion of legal challenges and platform responses further supports the classification as an incident rather than a mere hazard or complementary information. The harm is realized and ongoing, not just potential.
Thumbnail Image

AI-Generated Explicit Taylor Swift Images 'Must Be Made Illegal,' Says SAG-AFTRA

2024-01-27
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated explicit images (deepfakes) of a person without consent, which is a direct violation of privacy and personal rights, thus constituting harm under the framework. The AI system's use in generating and disseminating these images has directly led to harm to the individual and broader societal concerns. Therefore, this qualifies as an AI Incident. The article also discusses responses and calls for legislation, but the primary focus is on the harm caused by the AI-generated content.
Thumbnail Image

White House 'alarmed' over fake images of Taylor Swift, other artificial intelligence misuse

2024-01-27
Aol
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems to generate fake explicit images without consent, which is a violation of personal rights and can cause significant harm to the individuals depicted. The AI system's use here directly leads to harm in the form of violations of rights and potential psychological harm. Therefore, this qualifies as an AI Incident due to realized harm caused by AI misuse.
Thumbnail Image

Nude deepfakes of Taylor Swift went viral on X, evading moderation and sparking outrage - Yahoo Sports

2024-01-26
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that are nonconsensual and sexually explicit, which constitute a violation of rights and cause harm to the individual depicted and the broader community. The AI system's role in generating and enabling the spread of these images is central to the harm. The viral nature of the content and the platform's inadequate moderation further contribute to the incident. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities).
Thumbnail Image

Taylor Swift Explicit A.I. Images Condemned By SAG-AFTRA

2024-01-27
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit fake images (deepfakes) of a real person without consent, which directly leads to harm in terms of violation of privacy and rights, fitting the definition of an AI Incident. The harm is realized as the images are circulating and causing distress, and the AI system's role is pivotal in creating these images. The discussion of legislative responses and societal concerns supports the significance of the harm. Therefore, this is classified as an AI Incident.
Thumbnail Image

X/Twitter Blocks Searches for 'Taylor Swift' as a 'Temporary Action to Prioritize Safety' After Deluge of Explicit AI Fakes - Yahoo Sports

2024-01-28
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated sexually explicit deepfake images of Taylor Swift being widely disseminated, causing harm to the individual and community. The AI system's use in generating non-consensual explicit content is a direct cause of harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The platform's response to block searches and remove content confirms the harm has materialized and is being addressed. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Taylor Swift Searches Blocked by X Amid Circulation of Deepfakes - Yahoo Sports

2024-01-28
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The article describes the circulation of AI-generated deepfake pornography, which is a direct harm to the individual's rights and privacy, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The AI system's use in generating these deepfakes has directly led to harm, and the platform's blocking of searches is a mitigation response. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

White House condemns 'worrying' AI after sexually explicit deepfake image of Taylor Swift shared 47m times

2024-01-27
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems capable of generating realistic but fake content. The widespread sharing of these images (47 million times) indicates realized harm through violation of privacy and non-consensual intimate imagery, which falls under violations of human rights. The involvement of AI in creating these images and their harmful dissemination meets the criteria for an AI Incident, as the AI system's use has directly led to significant harm to the individual and potentially to communities by spreading misinformation and harmful content.
Thumbnail Image

Democrat Urges Action After Fake, Sexually Explicit Taylor Swift Images Go Viral

2024-01-26
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes the circulation of AI-generated sexually explicit deepfake images of a public figure without consent, which constitutes a violation of rights and sexual exploitation, a clear harm under the AI Incident definition. The AI system's use in creating these images is central to the harm. The event is not merely a potential risk but an actual incident of harm occurring, with calls for legislative action and content moderation responses. Therefore, this qualifies as an AI Incident.
Thumbnail Image

X blocks searches for Taylor Swift after explicit AI images of her go viral

2024-01-29
Yahoo
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as the images are AI-generated deepfakes, which are a known AI application. The harm caused is the violation of personal rights through non-consensual explicit imagery, which is a breach of fundamental rights and causes harm to the individual and community. The platform's blocking of searches and removal of content is a response to this harm. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community).
Thumbnail Image

X/Twitter Temporarily Suspends 'Taylor Swift' Searches After AI Image Uproar

2024-01-28
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit fake images of a public figure without consent, which constitutes a violation of rights and harm to the individual and community. The AI-generated images have already been disseminated, causing harm, thus qualifying as an AI Incident. The platform's response to block searches is a mitigation measure but does not negate the fact that harm has occurred due to AI use. Therefore, this is classified as an AI Incident.
Thumbnail Image

White House 'alarmed' by AI-generated explicit images of Taylor Swift on social media

2024-01-27
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake explicit images of Taylor Swift spreading on social media, which is a direct harm to the individual's rights and to the community by spreading misinformation and potentially causing reputational and psychological harm. The White House's alarm and call for social media enforcement highlight the seriousness of the incident. The AI system's use in generating and disseminating these images directly led to harm, fitting the definition of an AI Incident.
Thumbnail Image

White House 'alarmed' over fake images of Taylor Swift, other artificial intelligence misuse

2024-01-27
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake images without consent, leading to harm in the form of violations of personal rights and potential psychological harm to the individuals depicted. This misuse of AI technology directly causes harm to persons and communities, fitting the definition of an AI Incident. The involvement of AI in creating non-consensual intimate imagery and the resulting harm to the individuals targeted is clear and materialized, not merely potential.
Thumbnail Image

X/Twitter Blocks Searches for 'Taylor Swift' as a 'Temporary Action to Prioritize Safety' After Deluge of Explicit AI Fakes

2024-01-28
Aol
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated explicit deepfake images, which are created using AI systems capable of generating realistic fake content. The harm includes violation of Taylor Swift's rights (non-consensual use of her likeness in explicit content) and harm to the community through the spread of harmful misinformation and explicit material. The platform's blocking of searches and removal of content indicates recognition of the harm caused. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm as defined in the framework.
Thumbnail Image

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
Aol
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and disseminate harmful, non-consensual pornographic deepfake images and videos, which directly leads to violations of human rights and harm to individuals and communities. The AI system's use in generating these images is central to the harm described. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm (violation of rights and harm to communities).
Thumbnail Image

Taylor Swift, Joe Biden, Teens Latest Deepfake Victims

2024-01-27
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images and audio that have directly led to harms such as non-consensual explicit content, misinformation, and emotional distress to victims like Taylor Swift and Joe Biden, as well as harm to communities through viral spread of manipulated media. These harms fall under violations of rights and harm to communities. The article reports on realized harms, not just potential risks, and thus qualifies as an AI Incident. The discussion of regulatory and platform responses is complementary but secondary to the primary focus on the harms caused by AI-generated deepfakes.
Thumbnail Image

Taylor Swift "Furious" Over Deepfake Images, May Take Legal Action: Report

2024-01-27
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been distributed without consent, causing harm to Taylor Swift and potentially to the broader community by spreading abusive and exploitative content. This constitutes a violation of rights (privacy, consent) and harm to the individual and community. The AI system's use in generating these images directly led to the harm, meeting the definition of an AI Incident. The ongoing presence of the images on platforms despite removal efforts further supports the classification as an incident rather than a mere hazard or complementary information.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media

2024-01-26
Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely circulated, causing harm through non-consensual pornography and reputational damage. The AI system (diffusion models) was used to create these images, and their spread on social media platforms has led to realized harm, including violations of rights and harm to the community. This fits the definition of an AI Incident because the AI system's use has directly led to harm. The article also discusses responses and legislative efforts, but the primary focus is on the incident itself.
Thumbnail Image

Taylor Swift's lewd AI-generated images prompt legislators and The White House to back laws on the matter

2024-01-27
MARCA
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated pornographic images (deepfakes) of a public figure that have been widely viewed and circulated, causing harm through online harassment and violation of privacy and rights. The AI system's use in generating these images is central to the harm, fulfilling the criteria for an AI Incident. The discussion of legislative and White House responses supports the recognition of actual harm caused by AI misuse, not just potential harm.
Thumbnail Image

Who is Zubear Abdi, aka Zvbear? The man allegedly behind explicit Taylor Swift AI photos

2024-01-27
MARCA
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems to create explicit deepfake images of Taylor Swift without consent, which is a clear violation of rights and causes harm to the individual and community. The AI system's use is central to the harm caused. The incident has already occurred, with the images being shared and the account facing backlash, indicating realized harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and breach of applicable law.
Thumbnail Image

The Swifties are up in arms! Deepfakes of the singer cause outrage on social media

2024-01-25
MARCA
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to generate deepfake pornographic images of Taylor Swift without her consent, which constitutes a violation of her rights and causes harm. The AI system's outputs are directly linked to the harm, fulfilling the criteria for an AI Incident. The harm includes violation of rights (privacy, dignity), and harm to communities (social outrage, distress). The presence of AI systems is explicit (deepfake generation), and the harm is realized, not just potential. Therefore, this is classified as an AI Incident.
Thumbnail Image

White House release statement after Taylor Swift fake nude images go viral

2024-01-27
MARCA
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate deepfake images, which are AI-generated synthetic media. The creation and viral spread of these images have directly led to harm in the form of violations of privacy, non-consensual intimate imagery, and online harassment, particularly targeting a woman. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and communities. The White House's response and social media companies' actions are reactions to this realized harm, not the primary focus of the article, which centers on the incident itself.
Thumbnail Image

Taylor Swift AI images have caused a complete Swifties blackout on social media

2024-01-28
MARCA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated pornographic images causing a significant reaction, including a social media blackout and government attention. The AI system's role in generating harmful content that violates personal rights and causes reputational harm fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, not just potential, as the images were spread and caused disruption and distress.
Thumbnail Image

Taylor Swift considering legal action over AI images, Swifties rally to protect singer

2024-01-26
MARCA
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated deepfake images that are abusive and exploitative, violating the individual's rights. This constitutes a violation of human rights and personal rights through the misuse of AI-generated content. The harm is realized as the images have been widely viewed and circulated, causing reputational and emotional harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.
Thumbnail Image

Biden White House 'Alarmed' Over Sexually Explicit Taylor Swift AI Photos

2024-01-27
Breitbart
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems to generate sexually explicit deepfake images of a real person without consent, which is a violation of rights and causes harm. The harm is realized as the images are circulating and causing outrage and concern. This fits the definition of an AI Incident, rather than a hazard or complementary information, because the AI system's use has directly led to a violation of rights and harm to the individual and community.
Thumbnail Image

Fans Outraged Over AI-Generated Sexually Graphic Taylor Swift Images

2024-01-25
Breitbart
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated sexually graphic images (deepfakes) of a real person without consent, which constitutes a violation of rights and sexual harassment. The harm is realized and ongoing as the images are circulating and causing outrage and distress. The AI system's role in generating these images is pivotal to the harm, meeting the criteria for an AI Incident under violations of human rights and harm to communities.
Thumbnail Image

Taylor Swift 'furious' about AI-generated fake nude images, considers...

2024-01-25
New York Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that were shared without consent, causing harm to Taylor Swift's personal rights and dignity, which constitutes a violation of human rights and intellectual property rights. The AI system's use directly led to the harm described, including reputational damage and emotional distress. The widespread circulation and the platform's initial failure to prevent the posting further underline the AI system's role in the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

White House Responds to Explicit AI Generated Images of Taylor Swift

2024-01-27
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images causing harm through non-consensual intimate imagery, which constitutes a violation of rights and harm to communities. The proliferation of such content on social media platforms, facilitated by AI systems generating realistic fake images, directly leads to harm as defined in the framework. The White House's concern and call for legislative action further confirm the recognition of actual harm caused by AI misuse. Therefore, this event qualifies as an AI Incident due to the realized harm stemming from AI-generated content.
Thumbnail Image

White House calls on legislation to regulate AI amid explicit deepfake Taylor Swift images

2024-01-26
ABC News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit deepfake images, which directly leads to harm in the form of non-consensual intimate imagery and online harassment, violating personal rights and causing harm to individuals and communities. Since the harm is occurring due to the use of AI-generated content, this qualifies as an AI Incident. The White House's call for legislation is a response to this incident, but the primary event is the harm caused by the AI-generated deepfakes.
Thumbnail Image

Can Taylor Swift sue over deepfake porn images? US laws make justice elusive for victims.

2024-01-26
USA Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake sexually explicit images without consent, which directly causes harm to individuals' rights and personal dignity, fitting the definition of an AI Incident. The article describes realized harms (nonconsensual deepfake pornography) that have occurred, along with the legal and societal responses to them. The AI system's use in creating and spreading these images is central to the harm. The article is not merely a discussion of potential future harms, general AI developments, or responses alone; it reports on realized harm caused by AI deepfake technology, and therefore qualifies as an AI Incident.
Thumbnail Image

Taylor Swift 'Furious' After Graphic AI Images Go Viral, May Push for Legal Action Against Deepfake Site

2024-01-28
The Western Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread without consent, constituting a violation of rights and causing harm to the individual depicted. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and emotional harm). The discussion of legal action and policy responses further confirms the recognition of harm caused by the AI system's outputs.

X Stopping Users from Seeing 'Taylor Swift' Search Results as Explicit AI Images Continue to Spread

2024-01-28
The Western Journal
Why's our monitor labelling this an incident or hazard?
The event describes the spread of AI-generated explicit images of a real person without consent, which constitutes a violation of rights and causes harm to the individual and community. The AI system's role in generating these images is central to the harm. The platform's response to remove content and restrict users confirms the harm is occurring. This fits the definition of an AI Incident due to violation of rights and harm to the individual caused by AI-generated content.

'Disgusting' AI-generated pornographic images of Taylor Swift...

2024-01-25
New York Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been distributed, causing harm to the individual depicted and potentially to the community by spreading nonconsensual sexual content. The harm is direct and realized, as the images are actively circulating and causing distress. The AI system's use in creating these images is central to the incident. The article also references legal and regulatory responses, but the primary focus is on the harm caused by the AI-generated content. Hence, this is classified as an AI Incident.

Taylor Swift AI 'deepfakes': What happened and where did the images come from?

2024-01-27
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral spread of AI-generated explicit images of Taylor Swift, which are non-consensual and violate her privacy and rights. The AI system (Microsoft's text-to-image generator) was used maliciously to produce harmful content. The harm is direct and realized, affecting the individual and communities (fans, public discourse). The involvement of AI in generating the images and the resulting violation of rights and social harm fits the definition of an AI Incident. The article also mentions societal and governance responses, but the primary focus is on the harm caused by the AI-generated images.

What is the Taylor Swift X-rated AI photo controversy? The need for 'Protect Taylor Swift,' explained

2024-01-25
We Got This Covered
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated deepfake images that depict Taylor Swift in offensive and explicit ways without her consent. This use of AI directly leads to harm by violating her rights and causing reputational and emotional damage. The creation and dissemination of such non-consensual sexual images constitute a breach of fundamental rights and can be classified as an AI Incident under violations of human rights and harm to individuals. The involvement of AI in generating these images and the resulting harm meets the criteria for an AI Incident rather than a hazard or complementary information.

Has Taylor Swift responded to her graphic AI-generated photos?

2024-01-25
We Got This Covered
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions sexually explicit AI-generated deepfake photos used to harass Taylor Swift, which constitutes a violation of personal rights and causes harm to the individual and her community. The AI system's use in creating these images is central to the harm described. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (harassment and violation of rights).

Taylor Swift's fans condemn AI-generated NSFW pictures of the singer

2024-01-25
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of Taylor Swift being disseminated without her consent, which is a direct violation of her privacy and dignity. The AI system's use in creating these images has caused harm to the individual, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content. Therefore, this event qualifies as an AI Incident.

Microsoft CEO Nadella responds to Taylor Swift's AI-generated deepfake images

2024-01-27
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that are explicit and defamatory, which constitutes harm to the individual and her community, fitting the definition of an AI Incident under violations of human rights and harm to communities. The harm is realized as the images have spread to millions of users before removal efforts. Therefore, this is classified as an AI Incident.

Is 'furious' Taylor Swift considering legal action against her explicit AI-generated images?

2024-01-25
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated explicit images (deepfakes) of Taylor Swift without her consent, which is a clear violation of privacy and dignity, thus a breach of fundamental rights. The AI system's role in generating these images is pivotal to the harm caused. Therefore, this qualifies as an AI Incident due to realized harm involving AI misuse.

US lawmakers call Taylor Swift AI deepfake 'appalling', seek new legislation

2024-01-27
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated deepfake images without consent, which directly harms the individual's privacy and dignity, violating fundamental rights. The AI system's use in producing these images and their spread on social media platforms has led to actual harm, not just potential harm. The lawmakers' response and the platform's actions further confirm the recognition of this harm. Hence, this is an AI Incident as per the definitions provided.

'They'll never find me,' Swifties track down the culprit who shared NSFW Taylor Swift pictures on X

2024-01-27
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create non-consensual explicit images of Taylor Swift, which is a clear violation of her rights and causes harm. The AI system's outputs (the fake NSFW images) were shared and led to significant social and legal consequences, including public outrage and doxxing of the user who shared them. The harm is realized and directly linked to the AI system's use. Hence, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

White House sounds alarm over explicit AI-generated Taylor Swift photos, 'Congress should take..'

2024-01-27
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-driven image generators) to create and circulate explicit fake images of a real person without consent, directly leading to harm in the form of privacy violations, online harassment, and abuse. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals and communities (harassment and abuse). The article also discusses government responses, but the primary focus is on the harm caused by the AI-generated content, not just the response, so it is not merely Complementary Information.

Fake Online Images of Taylor Swift Alarm White House

2024-01-26
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create fake sexually explicit images of a real person, which have been widely disseminated on social media. This constitutes a violation of rights (non-consensual intimate imagery) and harm to the individual and communities through misinformation. The AI system's role in generating these images is pivotal to the harm occurring. Therefore, this qualifies as an AI Incident due to realized harm linked to AI-generated content.

Taylor Swift Searches Blocked on X After Fake Explicit Images Spread

2024-01-28
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the fake explicit images are possibly created by AI, indicating the involvement of an AI system in generating harmful content. The spread of these images has caused reputational harm to Taylor Swift and raised concerns about misinformation and safety on the platform. The platform's blocking of searches is a direct response to the harm caused. This fits the definition of an AI Incident, as the AI system's use (generation of fake images) has directly led to harm (reputational and misinformation harm).

White House calls for legislation to stop Taylor Swift AI fakes

2024-01-26
The Verge
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake sexual images causing harm by spreading harassment and abuse online, which is a violation of rights and harm to individuals. The AI system's use in creating and disseminating these images directly led to this harm. The White House's response and call for legislation further confirm the recognition of this as a significant AI-related harm. Therefore, this event qualifies as an AI Incident.

Trolls have flooded X with graphic Taylor Swift AI fakes

2024-01-25
The Verge
Why's our monitor labelling this an incident or hazard?
AI image generators are explicitly mentioned as producing photorealistic and pornographic deepfake images of a real person, which constitutes a violation of rights and harm to communities. The spread of such content on a social platform, combined with inadequate moderation, has led to realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems in causing harm through the generation and dissemination of harmful deepfake content.

Nude deepfakes of Taylor Swift went viral on X, evading moderation and sparking outrage

2024-01-25
NBC News
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated deepfake images that are nonconsensual and sexually explicit, constituting a violation of human rights and personal dignity. The AI system's use in generating and spreading these images directly led to harm to the individual and the community, and that harm is realized, not just potential, as the images went viral and caused outrage. Therefore, this qualifies as an AI Incident under the framework.

Taylor Swift's name not searchable on X days after sexually explicit deepfakes go viral

2024-01-27
NBC News
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create nonconsensual, sexually explicit deepfake images, harming Taylor Swift's privacy and dignity and constituting a violation of rights. The widespread circulation of these images on a major social media platform, and the resulting blocking of searches for her name, further indicate the impact. The harm is realized, not just potential, and directly caused by the AI system's use, so the event is a specific incident of AI-caused harm rather than general news or a complementary update.

Twitter Temporarily Blocks Searches For 'Taylor Swift'

2024-01-28
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The event describes the circulation of digitally fabricated explicit images of Taylor Swift, which are AI-generated and constitute a direct harm to the individual and community by spreading false and harmful content. The AI system's use in creating these images is central to the incident. The platform's temporary blocking of searches and content moderation efforts are responses to this harm. Since the harm is occurring and the AI system's involvement is direct, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Twitter Temporarily Blocks Searches For 'Taylor Swift'

2024-01-28
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article describes the spread of explicit, digitally fabricated images of Taylor Swift generated by AI, which constitutes harm to the individual and the community by disseminating harmful and false content. The AI system's outputs (fake images) directly caused reputational and psychological harm, and the platform's intervention (blocking searches) is a response to this harm. The involvement of AI in generating the harmful content and the realized harm from its spread meets the criteria for an AI Incident under the OECD framework.

Deepfake Porn Images Of Taylor Swift Enrage Fans: 'How Is This Not Considered Sexual Assault?'

2024-01-25
The Daily Caller
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images depicting Taylor Swift in explicit sexual poses without her consent. This use of AI directly causes harm by violating her rights and constitutes sexual harassment, which is a breach of fundamental rights. The harm is realized as the images have circulated widely, causing distress and prompting public outcry. Hence, this is an AI Incident due to the direct harm caused by the AI system's use.

Taylor Swift 'furious' over 'sick', X-rated pics

2024-01-25
News.com.au
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) that are non-consensual and exploitative, causing harm to Taylor Swift's rights and reputation. The AI system's use in creating and distributing these images has directly led to harm, including violation of privacy and potential emotional harm, which aligns with the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The presence of AI is clear, the harm is realized, and the event is not merely a potential risk or complementary information.

Taylor Swift pornography deepfakes renew calls to stamp out insidious AI problem

2024-01-26
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and widespread sharing of AI-generated pornographic deepfake images, which are non-consensual and sexually explicit, thus constituting a violation of human rights and causing harm to individuals and communities. The AI system involved is generative AI diffusion models used to produce photorealistic fake images. The harm is realized and ongoing, as the images have been widely viewed and shared, prompting social and political responses. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Explicit Taylor Swift deepfakes circulated the internet. Her Swifties are seeing red

2024-01-25
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI systems to create and distribute sexually explicit deepfake images of a real person without consent. This is a direct violation of personal rights and dignity, falling under harm category (c), violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, as the images have circulated widely, causing distress and prompting public outcry and calls for protection. Therefore, this is an AI Incident.

White House 'alarmed' by circulation of fake AI-generated Taylor Swift photos

2024-01-26
The Hill
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake explicit images (deepfakes) of a real person, which is a direct use of AI technology. The harm includes violations of rights (non-consensual intimate imagery) and harm to communities (online harassment and abuse, especially targeting women and girls). Since the harm is occurring through the circulation of these images, this qualifies as an AI Incident. The article also references governance responses, but the primary focus is on the realized harm from AI misuse.

White House 'alarmed' by Taylor Swift deepfakes

2024-01-26
The Hill
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are non-consensual and sexually explicit, which constitutes a violation of rights and causes harm to the individual and communities targeted. The spread of such content is occurring, not just a potential risk, thus meeting the criteria for an AI Incident. The White House's alarm and calls for enforcement highlight the seriousness of the harm caused by the AI system's outputs.

Taylor Swift deepfake images prompt US politicians to call for new laws

2024-01-26
The Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated deepfake images causing harm through sexual exploitation and violation of rights, which fits the definition of an AI Incident under violations of human rights and harm to communities. The AI system's use in generating and spreading these images directly led to the harm. The political response and platform actions are complementary but do not negate the fact that harm has occurred. Therefore, this event is classified as an AI Incident.

'Protect Taylor Swift' trends on X after 'disgusting' AI photos posted on platform

2024-01-26
The Independent
Why's our monitor labelling this an incident or hazard?
The deepfake images are AI-generated synthetic media that have been shared on a platform, causing harm to Taylor Swift by violating her rights and privacy. The AI system's involvement in creating these images directly leads to harm as defined by violations of human rights and harm to communities. The event describes realized harm, not just potential harm, and thus is classified as an AI Incident.

Taylor Swift is the latest victim of explicit deepfake images. She won't be the last

2024-01-26
The Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and distribution of AI-generated deepfake images that sexually exploit and humiliate Taylor Swift without her consent. This is a clear violation of her human rights and privacy, caused directly by the use of AI systems capable of generating realistic fake images. The harm is realized and ongoing, as the images have been viewed millions of times and spread across multiple platforms. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to a person and communities through sexual harassment and violation of rights.

Explicit fake images of Taylor Swift prove laws haven't kept pace with tech, experts say | CBC News

2024-01-26
CBC News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate explicit deepfake images, which have been widely shared and viewed, causing harm to the individual depicted and raising issues of non-consensual intimate image distribution. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article details actual harm occurring due to the AI-generated content, not just potential harm, and discusses legal and societal responses, but the primary focus is on the incident of harm itself.

Taylor Swift deepfakes spread online, sparking outrage

2024-01-27
CBS News
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (diffusion models like Stable Diffusion, Midjourney, DALL-E) used to create harmful deepfake images. The harm includes violations of rights (non-consensual use of likeness, sexual abuse), reputational damage, and emotional distress, which fits the definition of an AI Incident under violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential. Therefore, this event qualifies as an AI Incident.

Taylor Swift: The Victim of AI-Generated Pornographic Deepfake Images

2024-01-26
Oneindia
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that are sexually explicit and non-consensual, directly harming Taylor Swift by violating her privacy and dignity. The AI system's use in generating and spreading these images has directly led to harm (violation of rights and harm to the individual). The involvement of AI in creating the deepfakes is clear, and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident under the OECD framework.

Taylor Swift Deepfake Porn Images Goes Viral, Singer Exploring Legal Options

2024-01-27
Oneindia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are non-consensual and sexually explicit, causing harm to the individual depicted and distress to her fans. The AI system's use in creating and spreading these images directly leads to violations of rights and harm to communities. The widespread dissemination and the platform's struggle to remove the content confirm realized harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

White House calls Taylor Swift's AI pics 'alarming', asks Congress to take action

2024-01-27
India Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create non-consensual, explicit fake images and deepfake audio, which have directly caused harm by violating privacy and rights, spreading misinformation, and potentially disrupting democratic elections. These harms fall under violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident. The article also discusses governance responses, but the primary focus is on the realized harms caused by AI-generated content.

If Taylor Swift Can't Defeat Deepfake Porn, No One Can

2024-01-26
Wired
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic images without consent, which directly harms the individuals depicted by violating their rights and causing psychological and reputational harm. The article documents actual harm occurring through the distribution of these AI-generated images, meeting the criteria for an AI Incident under violations of human rights and harm to communities. The discussion of legal responses and societal reactions serves as complementary information but does not negate the presence of realized harm caused by AI misuse.

Taylor Swift Removed From Twitter Search After Explicit AI Photos Go Viral

2024-01-27
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate deepfake images, which are sexually explicit and non-consensual, constituting a violation of privacy and potentially other rights. The harm is realized as the images went viral and caused distress, prompting public denouncement and platform action. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (Taylor Swift) and harm to communities (spread of harmful misinformation and non-consensual intimate imagery).

Taylor Swift Deepfakes Highlight Need for Legal Protections

2024-01-26
TIME
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images, which have directly led to harm in the form of privacy violations, reputational damage, and online abuse against Taylor Swift and others. The widespread sharing of these images on social media platforms demonstrates realized harm to individuals and communities. The article also highlights the insufficiency of current legal frameworks to address these harms effectively, reinforcing the significance of the incident. Hence, it meets the criteria for an AI Incident as the AI system's use has directly caused harm.

White House 'Alarmed' After Taylor Swift's Deepfake Nudes Go Viral, Reminds Social Media Of Rules

2024-01-27
TimesNow
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated deepfake nudes directly involve an AI system's use (generative AI for image synthesis). The harm includes violation of personal rights and reputational damage, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the harm is realized and the AI system's role is pivotal in generating the content, this qualifies as an AI Incident.

Taylor Swift Searches Blocked by X Amid Spread of Deepfakes

2024-01-28
TIME
Why's our monitor labelling this an incident or hazard?
The article describes the proliferation of AI-generated deepfake pornography involving Taylor Swift, which is non-consensual and sexually explicit. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The AI system's use (deepfake generation) has directly led to harm. The platform's response to block searches and remove content is a mitigation effort but does not negate the incident classification. The article also discusses legal and governance responses, but the primary event is the harm caused by AI misuse.

Taylor Swift's Fans Swarm X to Combat AI Fakes of Singer

2024-01-26
mint
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and spread of AI-generated fake explicit images of a person without consent, which is a violation of rights and harmful content. The AI system's outputs have directly led to harm by disseminating nonconsensual explicit imagery, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The platform's inadequate enforcement of policies further contributes to the harm. Hence, this is not merely a hazard or complementary information but a realized AI Incident.

Deepfake pornographic images of Taylor Swift circulated online. Her fans are fighting back

2024-01-27
Brisbane Times
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated by AI systems that create realistic but fake content. The circulation of sexually explicit deepfake images without consent causes harm to the individual depicted, violating rights and potentially causing psychological harm. Since the AI system's use has directly led to this harm, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Deepfake explicit images of Taylor Swift spread on social media

2024-01-27
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models (diffusion models like Stable Diffusion, Midjourney, and DALL-E) were used to create non-consensual pornographic deepfake images of Taylor Swift. These images have been widely spread on social media, causing harm to the individual and communities by disseminating abusive and objectifying content. The harm is realized, not just potential, and the AI system's use directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of rights and harm to communities.

Deepfakes of Taylor Swift have gone viral. How does this keep happening?

2024-01-26
Mashable
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images being used to create nonconsensual pornographic content, which is a direct violation of individuals' rights and causes significant psychological and personal harm. The AI system's use in generating and spreading these images directly leads to harm to communities and individuals, fulfilling the criteria for an AI Incident. The discussion of legal and platform responses provides context but does not negate the realized harm caused by the AI system's outputs.

Taylor Swift AI Generated NSFW Images Spark An Outrage; Swifties Call Out Elon Musk's X

2024-01-25
Mashable India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are non-consensual and explicit, which is a clear violation of privacy and human rights. The misuse of AI technology to create and spread such content has directly caused harm to the celebrities involved and distress among their fan communities. The involvement of AI in generating these images and their harmful impact fits the definition of an AI Incident under violations of human rights and harm to communities.

X Makes Taylor Swift's Name Unsearchable Amid Viral Deep Fakes

2024-01-28
Mashable India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating harmful deepfake content that is actively spreading on a major social media platform, causing reputational and privacy harm to an individual and potentially misleading the public. The platform's reactive measures (blocking search terms) indicate the harm is ongoing and significant. The AI system's use (generative AI for deepfakes) directly leads to the harm, fulfilling the criteria for an AI Incident. The mention of legislative interest further underscores the seriousness of the harm caused.

Taylor Swift's fans unite as the singer contemplates legal action over explicit AI-generated images

2024-01-26
Mashable ME
Why's our monitor labelling this an incident or hazard?
The article describes explicit AI-generated images (deepfakes) of Taylor Swift being circulated online, which is a clear violation of privacy and potentially other legal rights. The AI system's role in generating these images is central to the harm caused. The harm is realized, not just potential, as the images have been published and caused distress. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the individual caused by the AI system's outputs.

Inappropriate AI creations target Taylor Swift, social media erupts in outrage

2024-01-25
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated explicit images of Taylor Swift being circulated, which is a direct misuse of AI systems causing harm to the individual’s mental and emotional well-being and violating privacy rights. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and communities. The harm is realized, not just potential, and involves violations of rights and emotional harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Sickening Taylor Swift AI pics cause fury as Musk warning comes back to bite him

2024-01-26
EXPRESS
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images of a public figure that are nonconsensual and graphic, causing harm and outrage. The AI system's use in creating and disseminating these images directly leads to violations of rights and harm to the community. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational harm).

Taylor Swift AI Porn Is Driving Fans Ballistic

2024-01-25
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, indicating the use of AI systems for content generation. The images are non-consensual and pornographic, which constitutes a violation of rights and harm to the individual and communities. The widespread dissemination and the resulting public backlash demonstrate realized harm. The AI system's role is pivotal as it enabled the creation of these abusive images. Hence, this is an AI Incident rather than a hazard or complementary information.

The White House: Taylor Swift AI Porn Is "Alarming"

2024-01-27
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating non-consensual explicit images, a clear violation of rights that causes harm to individuals, fitting the definition of an AI Incident. Although the article primarily reports on the White House's concern and potential policy responses rather than detailing a new harm event, the AI-generated content is still actively circulating and the harm is ongoing and material, so the situation qualifies as an AI Incident rather than complementary information.

Taylor Swift 'heartbroken' over vile fake images and 'in two minds about career'

2024-01-27
Mirror
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems used to create realistic but fake content. The images are non-consensual and sexually explicit, causing emotional harm to Taylor Swift and potentially violating her rights. This fits the definition of an AI Incident because the AI system's use has directly led to harm (emotional distress, violation of rights, reputational damage). The article describes realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the core issue is the harmful use of AI-generated content.

Taylor Swift 'considering legal action' amid explicit AI-generated images

2024-01-25
Mirror
Why's our monitor labelling this an incident or hazard?
The article describes explicit AI-generated deepfake images of a person being shared without consent, which is a direct violation of personal rights and can be classified as harm to the individual. The AI system's use in creating these images is central to the harm caused. Since the harm has already occurred (the images have been shared and caused distress), this qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights. The consideration of legal action is a response to the incident, not the incident itself.

Taylor Swift fans disgusted after 'vulgar' AI pictures of the singer go viral

2024-01-25
Mirror
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images without consent, which is a direct violation of personal rights and can be considered a form of harm to the individual and community. The AI system's use has directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal protections. The article describes actual harm occurring, not just potential harm, so it is classified as an AI Incident.

Vile Taylor Swift AI images spark fury as Elon Musk's warning comes back to bite him

2024-01-26
Mirror
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake images, which are synthetic media created by AI. The sharing of these images constitutes a violation of rights (non-consensual use of likeness and explicit content), which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, not just potential, as the images were shared widely and caused public outrage and emotional harm. Therefore, this is classified as an AI Incident.

Taylor Swift in 'unusually subdued' appearance amid explicit AI snap drama

2024-01-28
Mirror
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated explicit images of Taylor Swift being shared, which is a direct misuse of AI systems to create harmful deepfake content. This has caused harm to her personal and professional reputation and emotional distress, which falls under violations of human rights and harm to individuals. The AI system's role is pivotal in creating and spreading these images, thus meeting the criteria for an AI Incident.

U.S. lawmakers propose quick legislation in response to Taylor Swift deepfake

2024-01-27
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and disseminate non-consensual deepfake images, which directly harms the individuals depicted by violating their rights and privacy. The harm is occurring as the images are widely circulated on social media platforms. The article also mentions legislative efforts to criminalize such acts, indicating recognition of the harm caused. Therefore, this qualifies as an AI Incident due to realized harm linked to AI-generated content.

Swift retaliation: Fans strike back after explicit deepfakes flood X | TechCrunch

2024-01-25
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and viral spread of nonconsensual deepfake pornography generated by AI systems, which directly harms the individual depicted and violates rights. The involvement of generative AI in producing explicit, nonconsensual content meets the definition of an AI system causing harm. The harm is realized, not just potential, as the content went viral and caused distress. Therefore, this qualifies as an AI Incident. The discussion of legislative and social responses is complementary but does not change the primary classification.

A 'Furious' Taylor Swift Is Reportedly Considering Legal Action Over Graphic A.I. Images

2024-01-26
BroBible
Why's our monitor labelling this an incident or hazard?
The AI system was used to create graphic, non-consensual images of Taylor Swift, which have been widely disseminated, causing harm to her reputation and personal dignity. This constitutes a violation of rights and is a clear harm caused by the AI system's outputs. The event meets the criteria for an AI Incident because the AI-generated content has directly led to harm (emotional, reputational, and rights violations). The consideration of legal action further underscores the seriousness of the harm. Therefore, this is classified as an AI Incident.

'Congress Should Take Legislative Action' Over Graphic Taylor Swift A.I. Images Says White House

2024-01-26
BroBible
Why's our monitor labelling this an incident or hazard?
The AI system was used to create false, graphic images of a real person without consent, which is a violation of privacy and potentially other rights. The images have circulated widely, causing harm to the individual and raising concerns at the governmental level. The harm is realized and directly linked to the AI-generated content. Therefore, this event qualifies as an AI Incident due to violations of rights and harm to the individual caused by the AI system's outputs.

Taylor Swift Fans Are Very Angry 'Disgusting' AI Images Of The Singer Are Flooding The Internet

2024-01-25
BroBible
Why's our monitor labelling this an incident or hazard?
The event describes the creation and spread of AI-generated deepfake images that sexualize and demean a real person, Taylor Swift. This constitutes a violation of her rights and causes harm to her and her community of fans. The AI system's role in generating these images is central to the harm, fulfilling the criteria for an AI Incident involving violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential.

'Taylor Swift' Search Term Blocked On Twitter/X After A.I. Scandal

2024-01-27
BroBible
Why's our monitor labelling this an incident or hazard?
The incident involves AI-generated images that are false and non-consensual, which can be considered a violation of personal rights and potentially harmful to the individual depicted. The blocking of the search term is a response to the harm caused by the AI-generated content. Since the AI system's use has directly led to the spread of harmful content and subsequent platform actions, this qualifies as an AI Incident due to violations of rights and harm to the individual/community through misinformation and non-consensual imagery.

Taylor Swift: White House says sexually explicit fake images of pop star 'very alarming'

2024-01-26
Sky News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been used to create non-consensual pornographic content, which constitutes a violation of rights and harm to the individual depicted. The widespread sharing of these images and the resulting harm to the victim and potential broader societal impacts meet the criteria for an AI Incident. The involvement of AI in generating the harmful content and the direct harm caused by its dissemination justify this classification.

Taylor Swift's name not searchable on X after sexually explicit fake images circulated

2024-01-27
Sky News
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are sexually explicit and non-consensual, causing harm to the individual depicted and potentially to broader communities by spreading harmful misinformation and violating rights. The harm is realized and ongoing, as the images have been widely viewed and circulated. The involvement of AI in generating the content and the resulting violations of rights and harm to the community fit the definition of an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the harm caused by the AI-generated content.

Taylor Swift AI Deepfake Images Controversy: Microsoft CEO Satya Nadella Says 'Alarming And Terrible'

2024-01-27
TimesNow
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake and explicit images of a public figure, which have been widely circulated and caused significant social harm. The AI system's use in creating and disseminating these images directly leads to harm to the individual's reputation and dignity, as well as harm to communities through the spread of offensive content. Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and directly linked to the AI system's use.

X/Twitter Temporarily Suspends 'Taylor Swift' Searches After AI Image Uproar

2024-01-28
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images (deepfakes) of Taylor Swift that are harmful and have caused public outcry. The AI system's use in creating and spreading these images has directly led to violations of privacy and rights, which fits the definition of an AI Incident under violations of human rights or harm to communities. The platform's response to suspend searches is a mitigation measure but does not negate the occurrence of harm. Therefore, this event is classified as an AI Incident.

Swifties Want a Massive Crackdown on AI-Generated Nudes. They Won't Get One

2024-01-26
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating nonconsensual explicit images of a real person, which constitutes a violation of rights and harm to communities. The AI-generated content was widely disseminated, causing real harm and distress. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm. The article also covers societal and legislative responses, but the primary focus is on the realized harm caused by the AI-generated images.

The violation of Taylor Swift

2024-01-27
Newsweek
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images without consent, leading to direct harm including psychological trauma, reputational damage, and violation of privacy rights. These harms fall under violations of human rights and harm to individuals. The AI system's use in generating and disseminating these images is central to the incident. Therefore, this qualifies as an AI Incident. The article also discusses legal and societal responses, but the primary focus is on the realized harm caused by the AI-generated content, not just complementary information or potential future harm.

"Taylor Swift" can't be searched on X as Chiefs game begins

2024-01-28
Newsweek
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created by AI systems. These images caused harm by spreading non-consensual pornographic content, leading to psychological trauma and reputational damage to Taylor Swift, a clear violation of rights. The platform's temporary disabling of searches for her full name is a direct response to this harm. The AI system's development and use have directly led to harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.

Taylor Swift gets boost in battle against explicit AI pictures

2024-01-26
Newsweek
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating explicit deepfake images without consent, which is a direct violation of rights and causes harm to the individual depicted. The harm is realized as the images have been circulated widely, causing reputational and emotional damage. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community. The discussion of legal responses and legislative proposals is complementary but secondary to the primary incident of harm caused by the AI-generated images.

The Deepfakes Of Taylor Swift Prove Yet Again How Laws Fail Women

2024-01-26
Refinery29
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated deepfake images of a public figure, Taylor Swift, which are abusive and psychologically harmful. The AI system's use in generating these images directly leads to harm (psychological and reputational) and violates rights, fitting the definition of an AI Incident. Although the origins of the images are unclear, the harm is occurring and linked to AI-generated content.

SAG-AFTRA Slams Explicit Taylor Swift AI Images: 'Upsetting, Harmful' and 'Must Be Made Illegal'

2024-01-27
Variety
Why's our monitor labelling this an incident or hazard?
The event describes the actual dissemination of AI-generated explicit images without consent, which constitutes a violation of rights and harm to the individual depicted. The AI system's role in creating these deepfake images is central to the harm. Therefore, this qualifies as an AI Incident because the AI's use has directly led to harm (violation of rights and harm to communities). The discussion of legislative responses and union statements supports the recognition of realized harm rather than just potential harm.

X/Twitter Blocks Searches for 'Taylor Swift' as a 'Temporary Action to Prioritize Safety' After Deluge of Explicit AI Fakes

2024-01-28
Variety
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated sexually explicit fake images, which are non-consensual and harmful, thus constituting a violation of rights and harm to the community. The AI system's use in generating and spreading these images directly leads to harm. The platform's actions to block searches and remove content are responses to an ongoing incident, not merely complementary information. Hence, this is an AI Incident.

SAG-AFTRA Releases Statement About Fake Explicit Taylor Swift Images

2024-01-27
Us Weekly
Why's our monitor labelling this an incident or hazard?
The event describes the creation and spread of AI-generated explicit images without consent, which constitutes a violation of privacy and rights, causing harm to the individual depicted. The AI system's use in generating these images directly led to this harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs. The article also discusses responses and calls for legislation, but the primary focus is on the harm caused by the AI-generated content.

AI-generated nude images of Taylor Swift went viral on X, evading moderation and sparking outrage

2024-01-25
TODAY.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate nonconsensual sexually explicit deepfake images, which were widely disseminated, causing harm to the individual depicted (Taylor Swift) and the broader community. The harm includes violation of rights and psychological harm, fitting the definition of an AI Incident. The AI system's development and use directly led to these harms, and the failure of platform moderation to fully prevent the spread reinforces the incident classification rather than a mere hazard or complementary information. The mass-reporting campaign and platform responses are secondary and do not change the primary classification.

Deepfake porn images of Taylor Swift have gone viral. Fans are fighting back

2024-01-26
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely circulated, causing harm by sexualizing and objectifying her without consent. The use of generative AI models (diffusion models) to create these images is confirmed, and the harm includes violation of rights and abusive content dissemination. The harm is realized, not just potential, as millions have seen the images. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the community. The article also discusses responses and legal considerations, but the primary event is the harmful AI-generated content distribution.

Taylor Swift's Name Unsearchable on X After AI-Generated Explicit Photos Scandal

2024-01-27
Billboard
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated explicit deepfake images, which constitutes a violation of rights and causes harm to the individual targeted and the broader community affected by online harassment. The AI system's role in generating these images is central to the harm, fulfilling the criteria for an AI Incident. The widespread dissemination and the social and political reactions further confirm the realized harm rather than a potential risk, distinguishing it from an AI Hazard or Complementary Information.

White House, SAG-AFTRA Speak Out Against 'Alarming' Taylor Swift AI-Generated Explicit Photos

2024-01-27
Billboard
Why's our monitor labelling this an incident or hazard?
The event describes the creation and spread of AI-generated explicit images without consent, which constitutes a violation of rights and causes harm to the individual targeted. The AI system's use in generating these images directly led to harm through harassment and abuse. Therefore, this qualifies as an AI Incident due to realized harm linked to AI-generated content.

The explicit AI-created images of Taylor Swift flooding the internet highlight a major problem with generative AI

2024-01-25
Fast Company
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornography depicting Taylor Swift, which was viewed millions of times before removal. This involves the use of an AI system (generative AI) to create content that sexually objectifies and abuses a person, constituting harm to the individual and a violation of rights. The harm is realized and ongoing, as the images were widely seen. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Taylor Swift AI Images: White House 'Alarmed', Singer Mulls Legal Action; Swifties Start 'ProtectTaylorSwift' Trend

2024-01-27
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are non-consensual and pornographic, causing harm to Taylor Swift's rights and reputation. The harm is realized and ongoing, as the images have been widely viewed and shared, leading to public alarm and legal considerations. The AI system's role in generating these images is pivotal to the incident, fulfilling the criteria for an AI Incident due to violations of rights and harm to communities. The involvement of social media platforms in the dissemination and their efforts to remove such content further supports the classification as an AI Incident.

Taylor Swift AI Images: 'Alarming And Terrible,' Satya Nadella Reacts To Deep Fake Menace; What Microsoft CEO Said

2024-01-27
Jagran English
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated explicit images (deepfakes) of a person without consent, which constitutes a violation of personal rights and can be considered harm to the individual and communities. The AI system's use in generating these images directly led to this harm. Although the exact AI tool used is not confirmed, the involvement of AI in creating harmful content is clear. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated deepfake content violating rights and causing social harm.

Searches for Taylor Swift on X come up empty after explicit AI pictures go viral

2024-01-28
CTV News
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate synthetic explicit images (deepfakes) of a public figure, which have been circulated widely causing harm. This meets the definition of an AI Incident because the AI system's use has directly led to harm to the individual (harassment, reputational harm) and communities (potential disinformation and harassment). The mention of laws against nonconsensual deepfake photography further supports the recognition of harm and legal violations. Therefore, this is classified as an AI Incident.

Taylor Swift's name unsearchable on X after explicit deepfake images

2024-01-27
Entertainment Weekly
Why's our monitor labelling this an incident or hazard?
The creation and viral spread of AI-generated explicit deepfake images directly harms the individual depicted, violating privacy and potentially other rights. The AI system's role in generating these images is central to the harm. The widespread circulation and public concern, including calls for legislation, confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the individual and community through harmful content dissemination.

AI content of Taylor Swift, George Carlin, and Jennifer Aniston causes controversy online - The Boston Globe

2024-01-26
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems to create explicit deepfake images of a celebrity without consent, which directly leads to harm in terms of violation of personal rights and reputational damage. The involvement of AI in generating the content is explicit, and the harm is realized as the content is actively shared and causing controversy. This fits the definition of an AI Incident due to violation of rights and harm to communities.

Swifties 'protect' Taylor Swift from spread of pornographic deepfake images

2024-01-26
Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI diffusion models were likely used to create photorealistic deepfake images of Taylor Swift without her consent. The spread of these images on social media constitutes a violation of rights and causes harm to the individual and community. The harm is realized, not just potential, as the images have reached millions of users. The involvement of AI in generating the harmful content and the direct link to harm (non-consensual explicit images) meets the criteria for an AI Incident. The article also discusses responses by platforms and lawmakers, but the primary focus is on the incident of harm caused by AI-generated deepfakes.

Taylor Swift Is Living Every Woman's AI Porn Nightmare

2024-01-25
VICE
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models) to create non-consensual explicit deepfake images, which directly causes harm to individuals' privacy, dignity, and potentially their mental health, constituting violations of rights and harm to communities. The article documents that these harms are actively occurring and spreading on social media platforms, fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content is explicit, and the harm is realized, not merely potential. Therefore, this qualifies as an AI Incident.

"SHE'S RIGHT TO DO SO": Fans rally behind Taylor Swift as sources claim she might file lawsuit over AI images

2024-01-26
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that have caused harm by spreading doctored explicit content of Taylor Swift, which is a violation of her rights and harmful to her reputation and privacy. The AI system's use in creating these images directly led to harm (violation of rights and harm to the individual). Although the legal action is only being considered and not yet taken, the harm from the AI-generated content is already occurring. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

Who is ZvBear? Memes erupt on Twitter amid viral Taylor Swift AI pictures scandal

2024-01-26
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create offensive, pornographic deepfake images of Taylor Swift, which have been widely shared and caused significant backlash. The harm includes violation of privacy and intellectual property rights, reputational damage, and emotional distress to the individual depicted. The AI system's outputs directly led to these harms, fulfilling the criteria for an AI Incident. The involvement of websites known for deepfake content and the creator's admission further confirm the AI system's role. The potential legal actions and public outrage underscore the severity of the harm caused.

Why is "Protect Taylor Swift" trending on Twitter? Viral AI outrage explored

2024-01-25
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create obscene and non-consensual images of Taylor Swift, which have gone viral and caused significant outrage and harm. The AI system's use here directly leads to violations of rights and harm to the community (fans and the celebrity). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to community).

"So it happens to her and they have to ban it": Taylor Swift falls victim to AI-made NSFW content, sparks debate as US laws get a pop-star update

2024-01-27
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated explicit deepfake content involving Taylor Swift, which is a clear example of AI system use (generative AI creating fake images/videos). The harm is direct and realized: privacy violations, reputational damage, potential legal violations, and societal harm through online harassment and objectification. The AI system's role is pivotal as it enables the creation and spread of this harmful content. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Taylor Swift Fans Furious Over Explicit AI-Generated Images Being Shared Online

2024-01-25
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are unauthorized and sexually explicit, directly harming the individual by violating her rights and causing distress. The AI system's use in generating these images is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the images are actively shared and causing outrage. This fits the definition of an AI Incident as it involves violations of rights and harm to communities through the dissemination of harmful AI-generated content.

Taylor Swift Explicit A.I. Images Condemned By SAG-AFTRA

2024-01-27
Deadline
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated images that are sexually explicit and non-consensual, which directly harms the individual depicted by violating privacy and autonomy rights. The involvement of AI in generating these images and the resulting harm to the person depicted fits the definition of an AI Incident, as the AI system's use has directly led to harm in terms of rights violations and personal harm. The call for legislation further underscores the recognition of harm caused by the AI system's misuse.

White House urges legislation after Taylor Swift deepfakes

2024-01-28
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated content (deepfakes) that has been widely disseminated, causing harm to the individual depicted (Taylor Swift) and potentially to broader societal norms regarding misinformation and privacy. The AI system's use in generating these images has directly led to harm (violation of rights and harm to the individual), qualifying this as an AI Incident. The White House's response and call for legislation are complementary, but the core event is the realized harm from AI-generated deepfakes.

'Protect Taylor Swift' trends on X after graphic AI photos of pop...

2024-01-25
Page Six
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images that are sexually explicit and offensive, targeting a specific individual, Taylor Swift. These images have been circulated widely, causing harm to her privacy, dignity, and emotional well-being. The AI system's role in generating these images is pivotal to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized, not just potential, as the images are actively shared and causing distress.

Taylor Swift no longer searchable on X amid scandal over graphic AI...

2024-01-27
Page Six
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating realistic but fake content. The images are described as abusive, offensive, exploitative, and created without consent, indicating a violation of rights and harm to the individual. The sharing and viral spread of these images on X have caused reputational and emotional harm to Taylor Swift. The platform's action to restrict searchability and remove accounts indicates recognition of the harm caused. Since the AI system's use has directly led to harm, this fits the definition of an AI Incident under violations of human rights and harm to communities.

'Furious' Taylor Swift considering legal action over graphic AI...

2024-01-25
Page Six
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are abusive and exploitative, created without consent, which constitutes a violation of rights and harm to the individual. The AI system's use in generating and disseminating these images directly led to harm. The event fits the definition of an AI Incident as it involves realized harm (violation of rights and reputational damage) caused by the AI system's outputs. The consideration of legal action and public outcry further supports the classification as an incident rather than a hazard or complementary information.

SAG-AFTRA and White House Issue Statements on Taylor Swift AI Nudes: "We Have It in Our Power to Control These Technologies"

2024-01-27
The Hollywood Reporter
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of generative AI systems. The harm is realized as the images are non-consensual and sexually explicit, violating privacy and autonomy rights, which falls under violations of human rights and harm to individuals. The statements from SAG-AFTRA and the White House confirm the harm and the need for legal protections, indicating the AI system's role in causing the incident. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Taylor Swift Searches Apparently Blocked by X Following AI Nudes

2024-01-27
The Hollywood Reporter
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create deepfake images, which are non-consensual and sexually explicit, causing harm to the individual's privacy and rights. The circulation of these images on a social media platform and the platform's response to block related searches indicate direct involvement of AI-generated content leading to harm. The public and legislative reactions further underscore the recognition of harm caused. Hence, this is an AI Incident as the AI system's use has directly led to violations of rights and harm to the individual and community.

'Repulsive!' Sexualized AI generated pics of Taylor Swift take internet by storm, spark widespread outrage

2024-01-25
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems capable of generating photorealistic content. The images are sexually explicit and non-consensual, causing harm to Taylor Swift's rights and to the online community by spreading harmful content. The sharing and proliferation of these images on social media platforms have led to public outrage and calls for moderation, indicating that harm has materialized. The AI system's use in generating these images directly led to violations of rights and harm to the community, fulfilling the criteria for an AI Incident.

Taylor Swift searches shut down on X after graphic AI-generated images circulate

2024-01-28
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been circulated and caused harm by spreading false, sexually explicit content about a public figure. This constitutes harm to communities and a violation of rights (sexual harassment/assault implications). The AI system's use directly led to this harm, and the platform's intervention confirms the seriousness of the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Taylor Swift searches blocked on X after fake explicit images spread

2024-01-29
CNA
Why's our monitor labelling this an incident or hazard?
The fake explicit images are likely AI-generated or AI-assisted deepfakes, which are a form of AI-generated misinformation causing harm to the individual's reputation and potentially to communities by spreading false content. The spread of such images on a large social media platform constitutes harm to the community and the individual, fulfilling the criteria for an AI Incident. The platform's blocking of searches is a mitigation response but does not negate the fact that harm has occurred due to the AI-generated content. Therefore, this event qualifies as an AI Incident due to the realized harm from AI-generated misinformation.

No recent spike in tech-facilitated sexual harm, but AI poses concern for future: Women's groups

2024-01-28
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and images being used to create sexually explicit content without consent, causing harm to victims such as anxiety, paranoia, and privacy violations. The AI system's use in generating these deepfakes is directly linked to the harm experienced by individuals. This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons and communities, including violations of rights and emotional harm. The article also discusses ongoing support and advocacy responses, but the primary focus is on the realized harm caused by AI-generated content.

X-rated AI images of Taylor Swift spread on X, spurring calls for crackdown - National | Globalnews.ca

2024-01-26
Global News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread using AI systems. The harm includes non-consensual pornography, violation of privacy and personal rights, and community harm through misinformation and harassment. The AI system's role in generating and enabling the spread of these images is direct and pivotal. The article describes actual harm occurring, not just potential harm, fulfilling the criteria for an AI Incident. The discussion of platform response and policy issues further supports the classification but does not change the primary harm caused by the AI-generated content.

Taylor Swift Searches Blocked By X Following Viral Graphic AI-Generated Images

2024-01-27
Comicbook
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems. The circulation of these non-consensual, sexually explicit images constitutes a violation of privacy and personal rights, a recognized harm under the framework. The social media platform's actions to remove the content and block related searches confirm the harm has materialized. Hence, the AI system's use directly led to a violation of rights, making this an AI Incident rather than a hazard or complementary information.

Taylor Swift-Related Searches Blocked on X After AI-Generated Explicit Images Go Viral

2024-01-28
Complex
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating explicit fake images without consent, which is a direct violation of privacy and intellectual property rights, falling under harm category (c). The harm is realized as the images have circulated widely, causing distress and reputational harm. The involvement of AI in creating these images is explicit, and the harm is direct. Therefore, this qualifies as an AI Incident. The societal and governance responses mentioned are complementary information but do not change the primary classification.

"Protect Taylor Swift" Trends As Fans Fight Back Against NSFW AI Images Of Singer

2024-01-25
HotNewHipHop
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create harmful content (NSFW images) without consent, which is a violation of personal rights and can be considered harm to the individual (harm to person and violation of rights). The widespread dissemination of these images on social media platforms indicates realized harm, not just potential harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm. The legal actions against the stalker are related but separate; the core AI-related harm is the non-consensual creation and spread of explicit AI-generated images.

Twitter Responds to Taylor Swift AI Deepfake Images

2024-01-27
Vulture
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems. The harm caused is the violation of Taylor Swift's rights to privacy and autonomy through non-consensual intimate imagery, fitting the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The platform's response and legislative advocacy further confirm the recognition of harm. Therefore, this is classified as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media

2024-01-26
Khaleej Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (diffusion models like Stable Diffusion, Midjourney, DALL-E) used to generate harmful deepfake images. The harm is realized and ongoing, including violations of rights (non-consensual sexualized images) and harm to communities (spread of abusive content). The AI's role is pivotal in creating the images, and the harm is direct and materialized. Therefore, this qualifies as an AI Incident under the framework.

Outrage over deepfake porn images of Taylor Swift

2024-01-26
The Star
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create non-consensual deepfake pornographic images, which directly causes harm by violating the rights of the individual depicted and contributing to online harassment and toxic content proliferation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals and communities. The article details the harm occurring, the societal outrage, and the challenges in enforcement, confirming the realized impact rather than a potential future risk.

If Anyone Can Stop the Coming AI Hellscape, It's Taylor Swift

2024-01-26
British Vogue
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated deepfake images that sexually exploit Taylor Swift without her consent. This is a clear case of AI misuse causing harm, specifically a violation of rights and abusive exploitation, which fits the definition of an AI Incident. The involvement of AI in generating manipulated images that cause harm to a person's dignity and rights is explicit. Although the article also mentions legal and political responses, the primary focus is on the harm caused by the AI misuse, making this an AI Incident rather than Complementary Information.

X halts searches for Taylor Swift over AI worries - RTHK

2024-01-28
news.rthk.hk
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated fake explicit images of a public figure is a direct harm caused by the use of AI systems. The platform's response to block searches indicates recognition of the harm caused. The harm includes violation of personal rights and reputational damage, as well as broader societal harm through misinformation and harassment. Therefore, this event qualifies as an AI Incident due to the realized harm caused by AI-generated content.

Fake AI Taylor Swift images flood X amid calls to criminalize deepfake porn

2024-01-25
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that sexualize Taylor Swift without her consent, which constitutes a violation of rights and causes harm to the individual and communities. The AI system's use in generating and spreading these images is central to the harm described. The widespread dissemination and difficulty in removing such content demonstrate realized harm, qualifying this as an AI Incident. Additionally, the article discusses regulatory efforts and platform responses, but the primary focus is on the ongoing harm caused by the AI-generated deepfakes.

Taylor Swift fans are furious about graphic fake AI images of the pop superstar being shared on X

2024-01-26
Business Insider India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images that are non-consensual and graphic, which is a clear violation of personal rights and privacy, falling under harm category (c) - violations of human rights or breach of obligations protecting fundamental rights. The widespread sharing and difficulty in removing the content indicate realized harm rather than just potential harm. The AI system's use in generating these images and their dissemination on the platform directly led to harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Taylor Swift searches banned after 'disgusting' explicit AI pics spread online

2024-01-28
Daily Star
Why's our monitor labelling this an incident or hazard?
The creation and sharing of explicit AI-generated deepfake images of a person without consent constitutes a violation of rights and harm to the individual and community. The AI system's use in generating these images is central to the harm caused. Since the harm has already occurred and is ongoing, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities. The involvement of AI in generating the images and their widespread dissemination meets the criteria for an AI Incident rather than a hazard or complementary information.

Explicit Taylor Swift deepfake images elude safeguards, swamp social media

2024-01-26
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI diffusion models were used to create sexually explicit deepfake images of Taylor Swift, which were widely disseminated on social media, causing harm to the individual and distress to the community. This constitutes a violation of rights (non-consensual intimate imagery) and harm to communities, fulfilling the criteria for an AI Incident. The AI system's use directly led to the harm, and the incident is ongoing with significant societal impact and calls for legal action. Hence, it is not merely a hazard or complementary information but a realized AI Incident.

Outrage Over Deepfake Porn Images of Taylor Swift

2024-01-26
NewsMax
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornographic images that have been widely shared and caused harm to individuals, particularly women, through harassment and violation of rights. The use of generative AI to create non-consensual sexually explicit content directly leads to harm as defined by violations of human rights and harm to communities. The presence of the AI system is clear, the harm is realized, and the event fits the definition of an AI Incident rather than a hazard or complementary information.

Taylor Swift's offensive deep fake images inspire MAJOR action by Swifties

2024-01-25
GEO TV
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated deepfake images that are offensive and non-consensual, which is a clear violation of rights and causes harm to the individual targeted. The AI system's use in creating these images is central to the harm. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's misuse in generating harmful content.

Searches for Taylor Swift on X come up empty after viral explicit AI photos

2024-01-28
celebrity.nine.com.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated synthetic explicit images (deepfakes) of Taylor Swift and others, which were widely circulated and viewed tens of millions of times before removal. The use of AI to create convincingly real but false and harmful images constitutes a violation of rights and causes harm to individuals and communities. The harm is realized, not just potential, as the images were widely disseminated and caused reputational and emotional damage. The platform's actions to limit search results and remove content confirm the harm's materialization. Therefore, this event meets the criteria for an AI Incident.

Taylor Swift searches blocked on X after fake AI-generated, sexually explicit images go viral

2024-01-28
Axios
Why's our monitor labelling this an incident or hazard?
AI-generated sexually explicit images of Taylor Swift were circulated widely, causing harm through violation of privacy and spreading misinformation. The AI system's use in creating these images directly led to harm, fulfilling the criteria for an AI Incident. The platform's blocking of searches is a response to this harm. The involvement of AI in generating harmful content and the resulting impact on the individual and community justify classification as an AI Incident.

AI gone bad: Deepfakes of Taylor Swift porn images spark calls for stronger online protections

2024-01-26
NJ.com
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated deepfake images that have been disseminated online, causing harm to the individual depicted and sparking public outcry. The AI system's use in creating and spreading these images directly leads to violations of rights and harm to communities, fitting the definition of an AI Incident. The article also discusses legal and societal responses, but the primary focus is on the realized harm caused by the AI-generated content, not just potential or complementary information.

How deepfake photos of Taylor Swift are raising concerns about AI regulation | ITV News

2024-01-26
ITV Hub
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are a form of AI-generated manipulated content. The circulation of explicit fake photos constitutes a violation of rights and sexual exploitation, which are harms under the AI Incident definition (violations of human rights and harm to individuals). The AI system's use directly led to these harms through the generation and dissemination of the images. The article describes realized harm, not just potential harm, and discusses the platforms' inadequate moderation and governmental responses. Therefore, this qualifies as an AI Incident.

Taylor Swift deepfakes spark calls for new legislation

2024-01-27
NME
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been created and widely shared, causing significant harm including emotional and reputational damage. The involvement of AI in creating these manipulated images is clear, and the harm is realized, not just potential. The event also includes societal and governance responses, but the primary focus is on the harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident due to direct harm caused by the use of AI in generating and spreading deepfakes.

Outraged fans fight back against Taylor Swift AI porn images

2024-01-25
The A.V. Club
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images created using text-to-image AI generators, which are AI systems. The images are non-consensual and pornographic, constituting a violation of personal rights and privacy, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The widespread sharing and viral nature of these images on social media platforms have caused harm to the individual and distress to communities (fans and the public). The AI system's use directly led to the harm described. Hence, this is an AI Incident rather than a hazard or complementary information.

The U.S. government isn't happy about those Taylor Swift porn deepfakes

2024-01-27
The A.V. Club
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread dissemination of AI-generated deepfake images that are sexually explicit and nonconsensual, which directly harms the individual's rights and privacy. The AI system's use in generating these images is central to the harm caused. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and breaches of legal protections. The involvement of government and advocacy groups further confirms the recognition of harm. Therefore, this event is classified as an AI Incident.

Taylor Swift fans' fury over 'appalling' explicit AI pictures

2024-01-26
The Sunday Times
Why's our monitor labelling this an incident or hazard?
The article describes the creation and spread of AI-generated explicit deepfake images of Taylor Swift, which are causing harm by violating her rights and causing distress to her fans. The AI system's use in generating these images directly leads to harm (violation of rights and harm to community). The involvement of AI is explicit (AI-generated images), and the harm is realized, not just potential. Hence, this qualifies as an AI Incident under the framework definitions.

Taylor Swift AI deepfakes: Can the popstar take legal action?

2024-01-27
Firstpost
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative AI models) used to create harmful deepfake content. The harm is direct and realized, as the images are circulating widely and causing emotional, reputational, and rights violations to Taylor Swift and potentially others. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm, including violations of rights and harm to communities. Although legal and legislative responses are mentioned, the main subject is the incident of harm itself, not just a complementary update or general AI news.

The Swiftie Fight to Protect Taylor Swift From AI

2024-01-26
The Cut
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating deepfake images that have been widely disseminated, causing harm to the individual (Taylor Swift) through privacy violations and online harassment. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual). The article also mentions legal and governance responses, but the primary focus is on the realized harm caused by the AI-generated content, not just the responses, so it is not merely Complementary Information.

Taylor Swift's viral AI pictures: Singer may consider legal action

2024-01-26
The News International
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated obscene images of a person without consent, which constitutes a violation of rights and harm to the individual. The AI system's role in generating these images is central to the harm caused. The sharing and viral spread of these images on social media platforms further exacerbates the harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and their distribution.

After 'Deepfake' Scandal, Searches For 'Taylor Swift' Come Up Empty on X

2024-01-27
Mediaite
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been circulated widely, causing harm through non-consensual intimate imagery, which is a violation of rights and constitutes harm to communities. The AI system's use in creating and distributing these images directly led to this harm. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in generating the deepfakes. Hence, this is classified as an AI Incident.

AI-generated porn images of Taylor Swift flooded social media, angering fans. Here's how to spot a deepfake

2024-01-26
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated pornographic deepfake images of Taylor Swift being circulated on social media, causing harm through non-consensual sexual imagery. The AI system's use in generating these images directly led to harm to the individual (violation of rights and personal dignity) and harm to communities (distress among fans and public outrage). The involvement of AI in generating the images is clear, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

AI-Generated NSFW Images Of Taylor Swift Spark Outrage Among Netizens: So Wrong And Inappropriate

2024-01-25
https://www.outlookindia.com/
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated deepfake images that are inappropriate and violate the privacy of Taylor Swift. The harm includes emotional and reputational damage, which falls under violations of human rights and harm to communities. The AI system's use in creating and disseminating these images directly leads to these harms. Therefore, this qualifies as an AI Incident.

SAG-AFTRA hits out at AI Taylor Swift deepfakes and George Carlin special, calls to make nonconsensual 'fake images' illegal

2024-01-28
VentureBeat
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images that have been widely disseminated, causing harm to the privacy and rights of Taylor Swift, which fits the definition of an AI Incident due to violations of rights and harm to communities. The George Carlin case, while involving some uncertainty about the extent of AI involvement, is linked to AI-generated content and intellectual property infringement, which also constitutes harm. The article also discusses legislative and societal responses, but the primary focus is on the realized harms from AI-generated deepfakes and the infringement of rights, qualifying this as an AI Incident rather than a hazard or complementary information.

Explicit AI deepfakes of Taylor Swift have fans and lawmakers up in arms

2024-01-25
VentureBeat
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images depicting Taylor Swift in explicit, nonconsensual scenarios, which constitutes a violation of her rights and causes reputational and emotional harm. The AI systems used are generative image and video models, including open-source models like Stable Diffusion, which can produce such content. The harm is realized and ongoing, not merely potential. The involvement of lawmakers and proposed legislation further confirms the significance of the harm. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Taylor Swift, Joe Biden, dead kids: Fake AI content floods in

2024-01-27
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes causing harm by spreading false and explicit content about real people, including minors, which constitutes harm to communities and individuals. The AI systems' use in creating and disseminating this manipulated content directly leads to violations of rights and harms reputations and social trust. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and their viral spread.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely shared, sexualizing her without consent, which constitutes a violation of rights and harm to the individual and community. The AI system's development and use have directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and directly linked to the AI system's outputs.

Explicit deepfake images of Taylor Swift elude safeguards & swamp social media

2024-01-26
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI tools) to create explicit deepfake images that have been widely disseminated on social media, causing direct harm to individuals (nonconsensual intimate imagery) and communities (disinformation, harassment). The harms are realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities directly linked to the AI-generated content.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media; her fans are fighting back

2024-01-27
The Korea Times
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate explicit deepfake images without consent, which directly leads to harm in the form of violations of rights and harm to the individual and community. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities through the dissemination of abusive content.
Thumbnail Image

White House 'alarmed' by deepfake explicit images of Taylor Swift circulating online

2024-01-27
Washington Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are sexually explicit and non-consensual, which have been widely circulated online. This directly harms the individual depicted (Taylor Swift) by violating her rights and causing reputational and emotional harm. The AI system's use in generating these images is central to the harm. The White House's alarm and the responses from social media platforms further confirm the recognition of harm caused by AI misuse. Hence, this is an AI Incident due to realized harm linked to AI system use.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
Washington Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create explicit deepfake images without consent, directly leading to harm in the form of violation of human rights (privacy, dignity) and harm to the community (fans and public). The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The article details the harm occurring, the AI system's role, and societal responses, confirming this classification.
Thumbnail Image

Explicit AI-generated Taylor Swift pictures spark outrage on X

2024-01-25
Washington Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems capable of generating realistic synthetic media. The images are non-consensual and sexualized, constituting a violation of the individual's rights and potentially illegal in several jurisdictions. This harm is realized and ongoing, as the images have been widely disseminated and caused public outrage. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating and distributing these images.
Thumbnail Image

Taylor Swift 'furious' over X-rated, AI-generated photos

2024-01-26
7NEWS.com.au
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are AI-generated manipulated media. The circulation of these images has caused harm to Taylor Swift's reputation and personal rights, constituting a violation of rights and harm to the individual. The harm is realized as the images have been viewed tens of millions of times and caused public outrage. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community).
Thumbnail Image

The Taylor Swift AI Images Are 'Deeply Concerning,' SAG-AFTRA Says

2024-01-27
UPROXX
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake images without consent, which directly harms the individual's rights and privacy. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to the person depicted. Although the article also discusses responses and calls for legislation, the primary focus is on the harm caused by the AI-generated images, making it an AI Incident rather than Complementary Information or an AI Hazard.
Thumbnail Image

Taylor Swift Fans Furious Over Sexually Explicit AI Images Going Viral

2024-01-26
ProPakistani
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated sexually explicit images of Taylor Swift being widely circulated, causing harm to her reputation and privacy, which are violations of rights. The AI system (Microsoft Designer or similar generative AI) was used to create the harmful content, and its viral spread on social media platforms has caused significant harm. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to a person and communities.
Thumbnail Image

Taylor Swift Is 'Furious' Over NSFW AI Pics - And Considering Legal Action!

2024-01-26
Perez Hilton
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating deepfake images that have caused harm to Taylor Swift by violating her privacy and creating offensive, exploitative content without consent. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The harm is direct and realized, as the images have been widely shared and caused outrage. The article also discusses responses to the incident, but the primary focus is on the harm caused by the AI-generated content.
Thumbnail Image

Taylor Swift Has Threatened Legal Action Over AI and Fake Nudes Before

2024-01-26
Futurism
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated fake nude images of Taylor Swift without her consent, a clear violation of her rights that causes harm to her as an individual. The AI systems involved were used maliciously to generate non-consensual intimate content, which Microsoft's AI user code of conduct explicitly prohibits, yet those safeguards were circumvented. The harm is actual and ongoing, not just potential, as the images have been shared on social media and other platforms. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and reputational damage).
Thumbnail Image

Will There Be Any Consequences For All This Disgusting AI Taylor Swift Porn?

2024-01-25
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating harmful content (nonconsensual sexualized images) that have been widely disseminated, causing harm to the individual depicted and potentially to communities by spreading abusive content. The AI system's use directly led to violations of rights and harm to the person involved. The article also discusses the failure or delay of content moderation, which is part of the AI system's use environment. This meets the criteria for an AI Incident as the harm is realized and directly linked to the AI system's outputs and their use.
Thumbnail Image

Taylor Swift Unsearchable on X Amid AI-Generated Explicit Pics Scandal

2024-01-27
Entertainment Tonight
Why's our monitor labelling this an incident or hazard?
The event describes explicit AI-generated images of Taylor Swift that were shared on X, causing harm related to privacy violation and exploitation. This fits the definition of an AI Incident because the AI system's use (generation of fake explicit images) directly led to harm (violation of rights and harm to the individual). The condemnation by SAG-AFTRA and calls for legislation further confirm the recognition of harm. The search bug is incidental and not clearly linked to harm. Therefore, this is classified as an AI Incident.
Thumbnail Image

Fake explicit Taylor Swift images: White House is 'alarmed'

2024-01-27
ABC7
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated deepfake images that are sexually explicit and non-consensual, which is a clear violation of rights and causes harm to the person depicted and potentially to others. The involvement of AI in generating these images and their widespread distribution on social media platforms directly leads to harm as defined under violations of human rights and harm to communities. The article also discusses ongoing responses and legislative efforts, but the primary focus is on the harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident.
Thumbnail Image

Sen. Mike Lee to Taylor Swift: 'Would love your support' on bill

2024-01-27
Deseret News
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating fake pornographic images without consent has directly harmed Taylor Swift, reputationally and emotionally, which falls under violations of rights and exploitative content. The event describes an AI Incident because the harm has already occurred due to the AI-generated content. The legislative proposal and public discussion are responses to this incident, but the core event is the harm caused by the AI-generated non-consensual images.
Thumbnail Image

Taylor Swift is the New Rallying Cry in the Fight Against Deepfakes

2024-01-26
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are falsified images generated by AI. The harm is realized as these images are non-consensual, explicit, and widely disseminated, causing violations of privacy and potentially other human rights. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals and communities. The legislative response is complementary information but the primary focus is on the harm caused by the deepfakes themselves.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
St. Louis Post-Dispatch
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create explicit deepfake images without consent, which have been widely disseminated, causing harm to Taylor Swift and her community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential. The involvement of AI in generating the harmful content is clear and central to the event.
Thumbnail Image

Fake explicit Taylor Swift Images: Lawmakers step up calls to regulate AI

2024-01-26
KTBB
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit deepfake images, which directly cause harm by violating privacy and enabling online harassment and abuse. The harm is realized as the images have been circulated online, impacting the individual and potentially others. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and the violation of rights through non-consensual intimate imagery.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
St. Louis Post-Dispatch
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI generative models to create deepfake images that are sexually explicit and non-consensual, which constitutes a violation of rights and harm to the individual and communities. The harm is realized and ongoing, as the images are actively spreading and causing distress. The AI system's development and use are central to the incident, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article also discusses responses and legal considerations but the primary focus is on the harm caused by the AI-generated content.
Thumbnail Image

Taylor Swift Is No Longer Searchable on X After Fake AI-Generated NSFW Pics

2024-01-27
Miami Herald
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated non-consensual explicit images, which constitute a violation of privacy and rights, causing harm to the individual depicted. The dissemination of such images on a social media platform directly led to harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating harmful content and the resulting impact on the individual and community confirm this classification.
Thumbnail Image

Why Taylor Swift Is Currently Unsearchable on X

2024-01-28
Miami Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images of Taylor Swift being created and distributed without her consent, which is a violation of privacy and respect. This harm is realized and ongoing, as evidenced by the social media platform's response and public outcry. The AI system's use in generating these images is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and privacy. The involvement of legal responses and government attention further supports the classification as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Taylor Swift is the latest victim of explicit AI-generated pictures

2024-01-26
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated explicit images of Taylor Swift being circulated without her consent, which is a direct violation of her rights and an abusive use of AI technology. The harm is occurring as the images are spreading on social media, causing reputational and emotional harm. The AI system's role is pivotal as it generated the harmful content. This fits the definition of an AI Incident involving violations of human rights and harm to a person.
Thumbnail Image

White House, SAG-AFTRA issue statements on Taylor Swift's AI-generated pictures

2024-01-27
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images causing harm to an individual (Taylor Swift) and potentially to women more broadly, through non-consensual intimate imagery. The harm is realized as the images have proliferated widely, causing reputational and privacy harm. The AI system's use in generating these images directly leads to violations of rights and harm to communities. Therefore, this qualifies as an AI Incident.
Thumbnail Image

SAG-AFTRA Calls for AI Image Theft to Be Outlawed After Release of Explicit Taylor Swift Fakes

2024-01-27
TheWrap
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) that are sexually explicit and non-consensual, directly harming the privacy and autonomy rights of the individual depicted (Taylor Swift). This fits the definition of an AI Incident as it involves the use of AI systems leading to violations of human rights and harm to individuals. The widespread sharing of these images on social media platforms further confirms the harm has occurred. The societal and governmental responses mentioned are complementary information but do not negate the fact that the incident itself is an AI Incident.
Thumbnail Image

Taylor Swift Not Searchable on X After AI Explicit Photos Go Viral

2024-01-27
TheWrap
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake images, which are nonconsensual and sexually explicit, causing harm to the individual and violating her rights. The harm is realized and ongoing, as the images have circulated and caused distress. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to the community. The article also discusses societal and governance responses, but the primary focus is on the incident of harm caused by the AI-generated content.
Thumbnail Image

Taylor Swift deepfakes: White House pushes for AI deepfake Legislation

2024-01-27
Telangana Today
Why's our monitor labelling this an incident or hazard?
The incident involves AI-generated deepfake images that are non-consensual and explicit, a violation of personal rights that harms the individual depicted. The viral spread on social media and the platform's delayed removal indicate that harm has occurred and is ongoing. The AI system's role in generating these images is central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The legislative response and public statements are complementary but secondary to the primary incident of harm caused by AI misuse.
Thumbnail Image

Taylor Swift no longer searchable on X (Twitter) after deepfake scandal

2024-01-28
OpIndia
Why's our monitor labelling this an incident or hazard?
The event clearly describes the use of AI-generated deepfake technology to create explicit images without consent, which is a violation of personal rights and privacy, fitting the definition of harm under (c) violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the images have been widely viewed and caused public outcry. The AI system's use directly led to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

White House, SAG-AFTRA respond as Taylor Swift 'considers legal action' over sexually explicit AI images

2024-01-27
HELLO!
Why's our monitor labelling this an incident or hazard?
The AI system was used to create and disseminate sexually explicit deepfake images of a real person without consent, constituting a violation of rights and causing harm. This fits the definition of an AI Incident because the AI's use directly led to harm (violation of privacy and rights). The involvement of AI is explicit (deepfake images made using AI), and the harm is realized, not just potential. Although there are complementary responses mentioned, the primary focus is the incident of harm caused by the AI-generated content.
Thumbnail Image

Taylor Swift Fake AI Explicit Images: White House Responds to Alarming Photos

2024-01-27
Just Jared
Why's our monitor labelling this an incident or hazard?
The creation and spread of non-consensual, sexually explicit AI-generated images of a real person is a clear violation of rights and privacy, fitting the definition of an AI Incident due to harm to individuals (violation of rights). The involvement of AI in generating these images is explicit, and the harm is realized as the images are circulating and causing distress. The article also discusses responses and calls for legal action, but the primary event is the AI Incident itself.
Thumbnail Image

Explicit AI images of Taylor Swift got 22 million views before X cracked down

2024-01-25
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating explicit synthetic media (AI-generated images) of a real person without consent, which is a violation of personal rights and can be classified as harm to the individual and the community. The widespread dissemination and millions of views indicate that the harm is materialized, not just potential. The AI system's role is pivotal as it enabled the creation of realistic, non-consensual sexual images. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the community. The event is not merely a hazard or complementary information, as the harm has already occurred and is ongoing.
Thumbnail Image

Fake online images of Taylor Swift alarm White House

2024-01-28
news.cgtn.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions fake images likely created by AI (deepfakes) that have been widely shared, causing harm through nonconsensual intimate imagery and misinformation. The AI system's use in generating these images directly leads to harm to the individual's rights and to communities through misinformation spread. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content.
Thumbnail Image

Taylor Swift Reportedly Considering Legal Action Over AI Porn Images

2024-01-26
UPROXX
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated pornographic images of Taylor Swift created and spread without her consent, which is a direct violation of her rights and constitutes harm. The AI system's use in generating these images is central to the incident. The harm is realized as the images have circulated online, causing offense and exploitation. The legal action consideration and public outcry further confirm the seriousness of the harm. Hence, this is an AI Incident involving violations of rights and harm to communities.
Thumbnail Image

Powerhouse Attorney Offers To Represent Taylor Swift Amid X-Rated AI Photos

2024-01-26
The Blast
Why's our monitor labelling this an incident or hazard?
The creation and distribution of non-consensual AI-generated explicit images of a person is a clear violation of human rights and personal dignity, fitting the definition of an AI Incident under violations of human rights or breach of applicable law. The AI system's role in generating these images is pivotal to the harm caused. Although legal action is not yet confirmed, the harm from the AI-generated content is ongoing, and the event describes that harm and the responses to it, qualifying it as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Taylor Swift Is No Longer Searchable On X After Explicit AI Images Went Viral

2024-01-27
The Blast
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are non-consensual and explicit, which constitutes a violation of rights and harm to the individual (Taylor Swift). The circulation of these images on a social media platform and the subsequent platform response (search restrictions, account suspension) indicate direct harm caused by the AI system's outputs. The harm is realized, not just potential, and involves violation of personal rights and reputational damage, fitting the definition of an AI Incident. The article also discusses societal and governance challenges but the primary focus is on the realized harm from the AI-generated content.
Thumbnail Image

X races to remove explicit fake Taylor Swift images

2024-01-26
Irish Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been shared widely, causing harm through non-consensual sexual imagery of a person, which is a violation of rights and privacy. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The platforms' responses are complementary but do not negate the incident itself. Therefore, this is classified as an AI Incident due to realized harm caused by AI-generated content violating rights.
Thumbnail Image

Taylor Swift AI deepfake images: White House 'alarmed', seeks law

2024-01-27
english.madhyamam.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated deepfake images of a real person without consent, which is a violation of personal rights and can cause significant harm to the individual's reputation and privacy. The AI system (generative adversarial networks and text-to-image generators) was used to create these images, and their dissemination on social media platforms led to realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Thumbnail Image

White House Urges Action After 'Alarming' Taylor Swift Deepfakes

2024-01-26
BNN
Why's our monitor labelling this an incident or hazard?
The event describes the actual creation and widespread dissemination of AI-generated sexually explicit deepfake images, which constitute a violation of individual rights and cause harm to the targeted person and potentially to communities through misinformation and abuse. The AI system's use in generating these images is central to the harm. The White House's response and calls for legislation indicate recognition of the realized harm. Therefore, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
Democratic Underground
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic fake images. The creation and dissemination of nonconsensual explicit deepfake images of a person is a clear violation of their rights and causes harm. Since the article describes the active circulation of such AI-generated harmful content, this qualifies as an AI Incident due to violations of human rights and harm to the individual and community.
Thumbnail Image

Taylor Swift, Joe Biden, Dead Kids: Fake AI Content Floods In

2024-01-27
BNN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake content that has been widely disseminated and caused harm, including privacy violations, emotional distress, and misinformation risks ahead of elections. The harms are realized and ongoing, not merely potential. The AI systems' use in creating and spreading manipulated media directly leads to violations of rights and harm to communities. The article also discusses the failure of platforms to promptly remove such content, exacerbating the harm. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
San Diego Union-Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create pornographic deepfake images without consent, which were widely shared and caused harm to Taylor Swift and her community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI is clear, the harm is realized, and the event is not merely a potential risk or complementary information but a concrete incident of AI-caused harm.
Thumbnail Image

Taylor Swift Is Reportedly 'Furious' Over Pornographic AI Pictures Featuring Her

2024-01-25
StyleCaster
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems capable of generating realistic synthetic media. The harm is realized as the images are non-consensual, sexually explicit, and widely spread, causing emotional and reputational harm to Taylor Swift, constituting a violation of rights. The article also discusses legislative efforts to criminalize such acts, underscoring the recognized harm. The AI system's use in generating and disseminating these images directly led to the harm, meeting the criteria for an AI Incident.
Thumbnail Image

AI's Latest Transgression: Fake Nudes of Taylor Swift

2024-01-26
Newser
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (AI diffusion models) used to generate fake explicit images, which have been distributed widely causing harm to individuals' rights and reputations. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The discussion of legislative efforts is complementary but the primary focus is on the realized harm from AI-generated deepfakes.
Thumbnail Image

X has temporarily blocked searches for Taylor Swift due to lots of AI deepfake images

2024-01-28
Neowin
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Microsoft Designer) to create explicit deepfake images, which have been actively spread on X, causing harm to the individual depicted and raising safety concerns for content creators and consumers. The harm is realized (explicit deepfake images flooding the platform), and the platform's blocking of searches is a response to this harm. Therefore, this is an AI Incident due to the direct harm caused by the AI-generated content and its misuse on the platform.
Thumbnail Image

Deepfake explicit images of Taylor Swift cause alarm. Her fans fight back

2024-01-27
al
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create explicit deepfake images of Taylor Swift, which were widely circulated and caused harm by violating her rights and causing reputational and emotional damage. The involvement of AI in generating the harmful content and its dissemination on social media platforms directly led to realized harm. This fits the definition of an AI Incident because the AI system's use directly caused violations of rights and harm to communities. The article also discusses responses and legislative considerations, but the primary focus is on the realized harm caused by AI-generated content.
Thumbnail Image

Swifties clap back at graphic, fake AI Taylor Swift images flooding X

2024-01-25
ClutchPoints
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated pornographic deepfake images of Taylor Swift being circulated widely, causing harm to the individual’s rights and dignity. The AI system's use in generating and disseminating these images directly led to violations of rights and harm to the community by spreading disturbing content. The prolonged availability of this content on the platform before removal further contributed to the harm. The fans' response to counteract the spread does not negate the fact that harm occurred. Hence, this is an AI Incident as per the definitions provided.
Thumbnail Image

Taylor Swift's Deepfake Scandal Sparks Concerns Over AI Regulation Gaps

2024-01-25
Tech Times
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating deepfake images, which are nonconsensual and sexually explicit, causing harm to the individuals depicted. This constitutes a violation of rights and harm to communities through the spread of manipulated content. The harm is realized as the images have circulated widely, causing distress and exploitation. Therefore, this qualifies as an AI Incident. The article also discusses regulatory and legal responses, but the primary focus is on the harm caused by the AI-generated content.

Top 7 Celebrities Victimized by AI Deepfakes: A Closer Look at the Rising Threat Amid Taylor Swift Fake Porn Image Controversy

2024-01-26
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake content, which is a clear example of AI-generated manipulated media causing harm to individuals' rights and reputations. The harm is realized, as the images have been widely disseminated and caused distress. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities (the celebrities and their fanbases).

X Enforces Taylor Swift Search Restriction Amid AI Deepfake Surge

2024-01-29
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of Taylor Swift that are graphic and non-consensual, which constitutes a violation of rights and harm to the individual. The platform's response to restrict searches and remove content confirms that harm has occurred. The AI system's use in creating manipulated media directly led to this harm. The involvement of AI in the creation and dissemination of harmful content meets the criteria for an AI Incident under violations of rights and harm to communities. The article also discusses governance responses, but the primary event is the realized harm from AI-generated deepfakes.

Microsoft CEO Satya Nadella Expresses Alarm Over Taylor Swift Deepfakes Circulating Online

2024-01-27
Tech Times
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images that are sexually explicit and nonconsensual, directly harming Taylor Swift by violating her rights and causing reputational and personal harm. The AI system's use in generating and disseminating this content is central to the incident. The harm is realized, not just potential, as the images are circulating online and being removed by platforms. This fits the definition of an AI Incident due to violation of rights and harm to the individual caused by AI-generated content.

AI generated lascivious fake images of Taylor Swift storm over social media boiling up fans

2024-01-27
The Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-driven diffusion models were used to create fake, sexually explicit images of Taylor Swift, which were then widely shared on social media, causing distress to fans and raising calls for legal protections. The harm includes violation of rights (nonconsensual explicit imagery) and harm to communities (disturbance and misinformation). The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or complementary information but a realized harm caused by AI-generated content.

Taylor Swift deepfake nudes highlight threat of AI-generated porn, as NJ teens experienced

2024-01-27
NBC New York
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images without consent, which have been widely shared, causing harm to individuals (including minors) and communities. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article also mentions ongoing legal and societal responses, but the primary focus is on the realized harm caused by AI-generated content.

Fake online images of Taylor Swift alarm White House | Technology

2024-01-26
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create fake images (deepfakes) that are sexually explicit and non-consensual, constituting a violation of rights and harm to the individual and to communities. The AI system's use in generating these images directly led to harm through misinformation and reputational damage. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content.

Taylor Swift 'deeply upset' over explicit deep fake images

2024-01-28
Perth Now
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create explicit deepfake images of Taylor Swift, which were then widely shared on social media, causing emotional distress and potential rights violations. The AI system's use directly led to harm (emotional and reputational) to the individual depicted. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to a person (harm to dignity and privacy) and a violation of rights (consent and privacy).

Fake online images of Taylor Swift alarm White House | Politics

2024-01-26
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article describes the spread of fake images of Taylor Swift online, which are likely AI-generated or manipulated, causing misinformation and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation. The involvement of AI is reasonably inferred given the nature of fake images and the context of misinformation. Therefore, this event qualifies as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back | Technology

2024-01-26
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create sexually explicit deepfake images without consent, which were widely spread on social media, causing harm to the victim's rights and dignity. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential. The involvement of AI in generating the harmful content is clear and central to the incident.

Taylor Swift Allegedly Contemplates Legal Action Over Deepfake Porn Site Publishing Her Explicit AI Images | 🎥 LatestLY

2024-01-26
LatestLY
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create and distribute non-consensual explicit deepfake images, which constitutes a violation of personal rights and privacy. This is a clear example of harm caused by the use of an AI system (deepfake generation) leading to a breach of rights and harm to the individual. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

World News | Deepfake Explicit Images of Taylor Swift Spread on Social Media. Her Fans Are Fighting Back | LatestLY

2024-01-26
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create deepfake pornographic images without consent, which have been widely spread, causing harm to the victim's rights and dignity. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential. The involvement of AI in generating the harmful content is clear and central to the event.

Taylor Swift Sexually Explicit AI Images: White House Condemns Viral Fake Pics of the Pop Star, Calls It 'Alarming' (Watch Video) | 🎥 LatestLY

2024-01-27
LatestLY
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating and disseminating sexually explicit fake images directly leads to harm by violating Taylor Swift's rights and causing reputational and emotional harm. The widespread circulation of such content also harms communities by spreading misinformation and potentially enabling harassment. Therefore, this event qualifies as an AI Incident due to realized harm stemming from the AI system's misuse.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Financial Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake images created with a generative AI system. The harm is realized: the images are nonconsensual, explicit, and widely spread, violating the rights of the person depicted and harming both her and the surrounding community. Because the AI system's use in generating and distributing the images directly led to this harm, the event fulfils the criteria for an AI Incident. The role of social media platforms in amplifying the spread further supports the direct link between the AI misuse and the harm.

Taylor Swift searches blocked by X amid AI-generated images controversy

2024-01-28
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate deepfake images. The harm is realized: the images are non-consensual, sexually explicit, and damaging to the individual's privacy and autonomy, constituting a violation of rights. Their spread on a social media platform further amplifies the harm to the individual and to communities. This therefore qualifies as an AI Incident, since the harm stems directly from the AI system's outputs and their dissemination.

Taylor Swift's Explicit Deepfake Images Go Viral With 45 Million+ Views, Angry Swifties React, "Those AI Creators Will Go To Hell"

2024-01-27
Koimoi
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that are sexually explicit and nonconsensual, which have been widely viewed and caused harm to Taylor Swift and her community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The AI system's use in generating and spreading these images directly led to the harm described. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Taylor Swift AI Sparks Disgust and Multiple Reports on Twitter

2024-01-25
Newsd.in
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that depict Taylor Swift in non-consensual, hypersexualized scenarios, which is a direct violation of her rights and causes harm to her and the community. The AI system's use in creating these images directly led to the harm described. The dissemination of such content on social media platforms further amplifies the harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. Although there is mention of limited federal regulation and ongoing legal responses, the primary focus is on the harm caused by the AI-generated content, not on responses or updates, so it is not Complementary Information.

Taylor Swift Was the Victim of Deepfake Porn on Social Media

2024-01-26
Teen Vogue
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are nonconsensual and sexually explicit, constituting a violation of rights and harm to the individual (Taylor Swift) and potentially to communities affected by such content. The spread of these images on social media and the delayed response in removing them further exacerbates the harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI-generated content.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
The Vancouver Sun
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake explicit images of Taylor Swift being spread on social media, which constitutes a violation of her rights and causes harm to her as an individual. The AI system's use in creating these nonconsensual images directly leads to harm to the person depicted, fulfilling the definition of an AI Incident. The harm is realized and ongoing, not merely potential, and involves violations of rights and harm to the community around the individual.

White House Calls Explicit AI Photos Of Taylor Swift 'Alarming' - Law360

2024-01-27
law360.com
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of a real person being circulated, which is a clear violation of privacy and human rights. The AI system's role in creating these images and their dissemination causes direct harm to the individual involved. This fits the definition of an AI Incident as it involves harm to a person and violations of rights directly linked to the use of AI.

Call for law change after explicit AI-generated images of Taylor Swift spread on social media | Newshub

2024-01-28
Newshub
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit deepfake images of Taylor Swift being shared widely on social media, viewed millions of times before removal. The AI system's use directly led to harm through non-consensual intimate imagery, violating personal rights and causing reputational and emotional harm. The harm is realized, not just potential, and the event has prompted calls for legal reform to address such AI-enabled abuses. This fits the definition of an AI Incident as the AI system's use has directly led to violations of rights and harm to the individual and communities.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back - Business News

2024-01-27
Castanet
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative diffusion models) used to create explicit deepfake images, which have been widely circulated, causing harm to the individual depicted and the broader community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images have already spread and caused distress. The article's focus is on the incident itself rather than just responses or broader context, so it is not merely Complementary Information.

Taylor Swift fans condemn AI-generated explicit Deepfakes - The Statesman

2024-01-25
The Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are explicit and disturbing, which have been spread online. This involves the use of AI systems to create harmful content without consent, violating privacy and potentially other rights. The harm is realized as the images have circulated and caused outrage and distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating and spreading explicit deepfakes.

US Gov't, Fans Express Outrage Over AI-Generated Porn Images Of Taylor Swift

2024-01-26
Leadership
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using generative AI systems. The harm includes violations of rights (non-consensual use of images, harassment) and harm to communities (spread of toxic content). The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The widespread dissemination and the political response further confirm the realized harm rather than a potential future risk, distinguishing it from an AI Hazard or Complementary Information.

SAG-AFTRA Supports Taylor Swift, Statement Urges AI/Deepfakes Laws

2024-01-27
Bleeding Cool
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of Taylor Swift being circulated online without her consent, which constitutes a violation of privacy and personal rights, a form of harm to the individual. The AI system's use in creating these images is central to the harm described. The union's response and call for legislation further confirm the recognition of this harm. Hence, this is an AI Incident due to realized harm caused by AI misuse.

Explicit AI deepfakes of Taylor Swift cause outrage

2024-01-26
ReadWrite
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of explicit AI deepfake images directly harms the individual depicted (Taylor Swift) by violating her rights and causing reputational and emotional harm. The AI system's use in generating these images is central to the harm, fulfilling the criteria for an AI Incident. The article describes realized harm through the spread of these images and the social and political response, including calls for regulation, confirming this is an AI Incident rather than a hazard or complementary information.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Las Vegas Sun
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative diffusion models) used to create explicit deepfake images without consent, which have been widely shared, causing harm to the individual (Taylor Swift) and potentially to communities by spreading abusive content. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in generating the harmful content. Therefore, this is classified as an AI Incident.

Taylor Swift deepfakes unleash fury: Hollywood union, White House respond

2024-01-27
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images that are sexually explicit and non-consensual, directly harming the individual (Taylor Swift) by violating her rights and causing reputational and emotional harm. The involvement of SAG-AFTRA and the White House highlights the seriousness of the harm and the need for legal and policy responses. The harm is realized, not just potential, as the images have been disseminated and caused public uproar. Hence, this is an AI Incident under the definition of violations of human rights and harm to communities due to the AI system's use leading to direct harm.

X blocks 'Taylor Swift' searches as deepfakes raise concerns

2024-01-28
NewsBytes
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated manipulated images or videos that alter individuals' appearances, and their creation and distribution have directly led to harm in this case, including emotional and reputational damage to Taylor Swift. The article describes the actual occurrence of such content being shared widely, causing harm, and prompting political and platform responses. This fits the definition of an AI Incident because the AI system's use (deepfake generation) has directly led to harm (emotional, reputational) and violations of rights. The platform's mitigation efforts and political calls for legislation are responses to this incident, not the primary event itself.

Swift fans fight back after fake explicit images spread on social media

2024-01-26
Honolulu Star Advertiser
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create pornographic deepfake images without consent, which were widely shared and caused harm to Taylor Swift and her community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content. Therefore, this event qualifies as an AI Incident.

Obscene deepfakes of Taylor Swift flood social media

2024-01-28
The New Daily
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread distribution of AI-generated deepfake images that are pornographic and degrading, targeting Taylor Swift without her consent. This constitutes a violation of rights and harm to the individual and community. The AI system (diffusion models) is explicitly involved in generating the harmful content, and the harm is realized as the images have spread to millions of users. The involvement of AI in causing direct harm through non-consensual deepfakes fits the definition of an AI Incident.

Taylor Swift AI Reaction: Has She Reacted? Why Is 'Protect Taylor Swift' Trending?

2024-01-25
ComingSoon.net
Why's our monitor labelling this an incident or hazard?
The event describes explicit AI-generated images of Taylor Swift being shared widely, which is a direct result of the use of generative AI systems. The images are not real but are causing harm by violating her rights and potentially impacting mental health. The viral spread of such content on social media platforms is a clear example of harm to communities and violation of rights. The AI system's role is pivotal as it generated the harmful content. Hence, this is an AI Incident rather than a hazard or complementary information.

Deepfake explicit images of Taylor Swift go viral

2024-01-26
Perth Now
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create explicit deepfake images without consent, which have spread widely and caused harm to the victim's rights and dignity. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential. The involvement of AI is clear and central to the incident, and the harm is significant and clearly articulated.

Taylor Swift: What Are the AI Pictures? How Are the Fake Photos Made?

2024-01-25
ComingSoon.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images created by generative AI systems that have caused emotional harm and outrage among fans. The AI system's use directly led to the creation and dissemination of harmful content, which is a form of harm to persons and communities. The harm is realized, not just potential, as the images have gone viral and caused distress. This fits the definition of an AI Incident because the AI system's use has directly led to harm. Although no legal action is currently available, the harm to emotional health and reputation is significant and clearly articulated. Therefore, the event is best classified as an AI Incident.

AI-generated nude images of Taylor Swift went viral on X, evading moderation and sparking outrage

2024-01-26
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake images, which are nonconsensual and sexually explicit, constituting a violation of human rights and personal dignity. The harm is realized as the images went viral, causing reputational and emotional harm to the individual depicted. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The moderation failure and mass proliferation further emphasize the direct harm caused by the AI-generated content.

White House Releases Statement Over Graphic Taylor Swift A.I. Images That Sent Shockwaves Across The Internet

2024-01-27
Total Pro Sports
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake, graphic images of a real individual without consent, which is a violation of rights and causes harm to the individual and community. The widespread circulation of these images on social media platforms has led to public backlash and governmental concern, indicating realized harm. The AI system's use in creating and disseminating these images directly led to the harm described. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Social Media Detectives Have ID'd The Person Who Created The Disturbing NFSW A.I. Photos Of Taylor Swift

2024-01-27
Total Pro Sports
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images created without consent, which is a misuse of AI technology leading to harm to an individual's rights and dignity. The dissemination of such images on a social media platform has caused real harm, including reputational damage and emotional distress, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The AI system's use in generating these images is central to the incident, and the harm is realized, not just potential.

Taylor Swift is the latest victim of 'disgusting' AI trend

2024-01-25
Post and Courier
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating nonconsensual deepfake images, which is a clear violation of rights and causes harm to the individuals depicted, fulfilling the criteria for an AI Incident. The harm is realized as the images have gone viral and caused distress, and the AI's role in creating and spreading these images is pivotal. The article also references ongoing regulatory efforts, but the primary focus is on the harm caused by the AI-generated content, not just on responses or future risks.

Taylor Swift's Deepfake Explicit Images Go Viral! Singer 'Furious' Over Her Fake Pics Circulated Online - Reports | SpotboyE

2024-01-27
spotboye.com
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-generated deepfake images that have been widely circulated, causing harm to the person depicted (Taylor Swift) and distress among her fan base. The creation and dissemination of such manipulated content constitute a violation of rights and harm to communities. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
Roanoke Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models (diffusion models) were used to create pornographic deepfake images of Taylor Swift, which have been widely circulated online. This has caused direct harm to the individual (Taylor Swift) and her community (fans), including reputational damage and violation of privacy rights. The harm is realized and ongoing, with platforms struggling to remove the content. The involvement of AI in generating the harmful content and the resulting violation of rights and harm to communities clearly fits the definition of an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Roanoke Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (diffusion models like Stable Diffusion, Midjourney, DALL-E) used to generate harmful deepfake images. The harm is realized and ongoing, including violations of rights (non-consensual intimate images), reputational damage, and emotional harm to the individual and community harm through the spread of abusive content. The article details the direct link between AI-generated content and the harm caused, meeting the criteria for an AI Incident. The societal and legislative responses mentioned are complementary information but do not override the primary classification of an incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Twin Cities
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create deepfake images that sexualize and objectify Taylor Swift without her consent, constituting a violation of rights and harm to the community. The widespread distribution of these images on social media platforms has caused real harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content and the resulting violation of rights and harm to the community justifies classification as an AI Incident rather than a hazard or complementary information.

If anyone can get the US government to take deepfake porn seriously, it's Swifties

2024-01-27
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic content without consent, directly causing harm to individuals' rights and well-being, especially women and minors. The harm is realized and ongoing, including violations of privacy and human rights. Therefore, this qualifies as an AI Incident. The article also mentions societal and legal responses, but the primary focus is on the harm caused by the AI-generated deepfakes, not just the responses, so it is not merely Complementary Information.

Taylor Swift deepfakes: White House seeks law, Nadella says it's 'alarming'

2024-01-27
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The incident involves AI-generated deepfake images that have been widely disseminated, causing harm to the individual depicted (Taylor Swift) through non-consensual explicit content. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The involvement of AI in generating the deepfakes is explicit, and the harm is realized as the images went viral and caused public concern. The calls for legislation and platform enforcement are responses to this incident, not the primary event itself.

Fans fight back as fake explicit images of Taylor Swift spread on social media

2024-01-26
Fox 8 News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create fake explicit images of Taylor Swift without her consent, which were widely shared and caused harm. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content.

Fake, AI-generated nudes of Taylor Swift go viral on social media

2024-01-26
WFTS
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are nonconsensual and sexually explicit, which have been widely disseminated causing reputational and emotional harm to the individual depicted. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The harm is realized, not just potential, as the images have already gone viral and caused outrage. Therefore, this is classified as an AI Incident.

Taylor Swift deepfakes: White House says 'alarming', seeks law

2024-01-27
National Herald
Why's our monitor labelling this an incident or hazard?
The deepfake images are generated by AI systems and have caused harm by spreading non-consensual, misleading content that affects an individual's rights and privacy. The viral spread and slow platform response indicate a realized harm to the individual and potentially to the community through misinformation and privacy violation. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content. The legislative call is a response to this incident, not the primary event itself.

Taylor Swift 'may take legal action' against AI porn images

2024-01-26
indy100.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are exploitative and abusive, created without consent, which is a violation of personal rights and causes harm. The AI system's use in generating these images directly led to this harm. The harm includes violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The legal action consideration and social media removal efforts further confirm the harm has materialized. Therefore, this event qualifies as an AI Incident.

Shocking Taylor Swift AI pictures spark outrage among fans

2024-01-25
indy100.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images that are sexually explicit and non-consensual, which constitutes a violation of the individual's rights and dignity. The AI system's development and use directly led to the creation and dissemination of harmful content. This meets the criteria for an AI Incident as the harm is realized and directly linked to the AI system's outputs. The event is not merely a potential risk or a complementary update but a clear case of harm caused by AI misuse.

Deepfake explicit images of Taylor Swift go viral

2024-01-26
The West Australian
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely circulated, causing harm by sexualizing and abusing her image without consent. The AI system involved is generative AI diffusion models, which are confirmed by researchers. The harm is realized and ongoing, involving violations of rights and reputational damage. The platforms' responses to remove content do not negate the fact that harm has occurred. Hence, this is an AI Incident as per the definitions provided, involving direct harm caused by the use of AI systems.

Taylor Swift & Carlin estate may launch precedent setting lawsuits over AI use

2024-01-28
JoBlo's Movie Emporium
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images and videos that have caused harm to individuals' rights and reputations, specifically Taylor Swift and the George Carlin estate. The AI systems' outputs have directly led to violations of human rights (use of likeness and persona without consent) and reputational harm, which fits the definition of an AI Incident. The legal actions being considered or filed are responses to these realized harms, not just potential risks, confirming this classification.

X criticized for being too slow to moderate pornographic AI-generated deepfakes of Taylor Swift - SiliconANGLE

2024-01-26
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a clear use of AI systems for generating synthetic media. The harm is realized as the content is nonconsensual, exploitative, and abusive, impacting the individual's rights and causing reputational and emotional harm. The slow moderation by the platform contributed to the extent of harm by allowing widespread dissemination. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to the individual. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in creating the harmful content.

Explicit AI-Generated Taylor Swift Pics Circulate Online

2024-01-26
Digital Music News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (deepfake images) that has been widely viewed and circulated, causing harm through misinformation and potential reputational damage. The AI system's outputs have directly led to harm to communities by spreading deceptive content and necessitated platform moderation responses. This fits the definition of an AI Incident as the AI system's use has directly led to harm (harm to communities through misinformation and violation of platform policies).

Who Made Those AI-Generated Explicit Taylor Swift Images?

2024-01-26
Digital Music News
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated sexually explicit images of Taylor Swift without her consent. This constitutes a violation of human rights, specifically privacy and dignity, and breaches platform policies against non-consensual nudity. The AI system's use directly led to harm by producing and enabling the distribution of harmful content. Therefore, this qualifies as an AI Incident due to realized harm linked to AI-generated content and its dissemination.

Outrage grows over fake porn images of Taylor Swift - Taipei Times

2024-01-27
Taipei Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic images without consent, which constitutes a violation of rights and causes harm to individuals targeted, including harassment and reputational damage. The AI-generated content's viral spread on social media platforms directly leads to harm to communities and individuals. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use. The legal responses and public outrage further underscore the incident's significance.

Taylor Swift fans fight explicit deepfake images

2024-01-28
Jamaica Gleaner
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative diffusion models) used to create explicit deepfake images without consent, which have been disseminated widely, causing harm to Taylor Swift and potentially others. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images have been shared and caused distress. The article also references platform moderation and legislative responses, but the primary focus is on the incident of harm caused by AI-generated deepfakes.

SAG-AFTRA slams digital fakes of Taylor Swift and George Carlin

2024-01-27
NBC10 Philadelphia
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake images and content without consent, which is a violation of personal rights and intellectual property. The harm is realized as the images went viral, causing reputational and emotional harm, and the Carlin estate has filed a lawsuit, indicating legal recognition of harm. The AI system's use in generating these deepfakes is central to the incident, fulfilling the criteria for an AI Incident due to violations of rights and harm to individuals.

SAG-AFTRA Condemns Fake Explicit Images of Taylor Swift, Calls for Legal Action | Cryptopolitan

2024-01-27
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated explicit images without consent, which is a violation of privacy and individual rights. This harm has already occurred as the images are circulating and causing distress. The AI system's use in generating these images is central to the incident. Therefore, this qualifies as an AI Incident due to the realized harm to individual rights and privacy caused by the AI system's misuse.

Deepfake explicit images of Taylor Swift spread on social media

2024-01-27
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create deepfake pornographic images of Taylor Swift, which have been widely disseminated on social media. This constitutes a direct use of AI systems leading to harm, specifically violations of rights (non-consensual use of images, sexual abuse) and harm to communities (spread of abusive content). The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework.

AI deepfakes of Taylor Swift are spreading on X. Here's what to know.

2024-01-26
Anchorage Daily News
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated deepfake images that are non-consensual and sexually explicit, which constitutes a violation of individual rights and harms the person depicted. The widespread dissemination of these images on social media platforms like X, with millions of views before removal, confirms that harm has occurred. The AI system's use in generating these images is central to the incident. Therefore, this qualifies as an AI Incident due to realized harm involving violation of rights and harm to communities. The article also discusses regulatory and societal responses, but the primary focus is on the incident itself.

Swifties Take Action After Nonconsensual Deepfake Porn of Taylor Swift Spreads on X

2024-01-25
The Mary Sue
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation tools, including Microsoft AI tools) used maliciously to create nonconsensual explicit content, which is a clear violation of individual rights and causes harm to the victim and community. The spread of this content on social media platforms, with delayed removal, directly results in harm. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of AI systems.

Silencing Taylor Swift: X's Battle Against AI-Generated Controversy - OtakuKart

2024-01-28
OtakuKart
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated explicit images without consent, which constitutes a violation of privacy and autonomy rights, a form of harm to individuals and communities. The AI system's role in generating these images is central to the harm. The platform's temporary disabling of searches for Taylor Swift's name is a response to this incident. The involvement of AI in producing harmful content that has been distributed and caused distress meets the criteria for an AI Incident, as the harm has materialized and is directly linked to AI misuse.

Taylor Swift 'considering legal action' amid explicit AI-generated images

2024-01-26
Irish Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are explicit and shared without consent, which directly harms Taylor Swift's rights and dignity. The use of AI to create such images and their distribution constitutes a violation of rights and is exploitative, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the images have been posted and caused distress. Legal action is being considered, but the harm has already occurred. Therefore, this event is classified as an AI Incident.

'They'll Never Find Me': Explicit AI-Generated Photos of Taylor Swift Land X User In Hot Water with Swifties

2024-01-28
Atlanta Black Star
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create explicit deepfake images, which constitutes a violation of rights and causes harm to the individual depicted (Taylor Swift). The harm is realized as the images have been widely circulated, leading to reputational damage and emotional distress. The involvement of legal actions and public condemnation further supports the classification as an AI Incident. The AI system's use in generating harmful content directly led to the harm described, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Taylor Swift AI Photos: Will the Singer Sue?

2024-01-26
The Hollywood Gossip
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated deepfake images that are harmful and violate the subject's rights. The AI system's use has directly led to harm (violation of privacy and rights, harm to community standards) and is central to the incident. Therefore, this qualifies as an AI Incident. The article also mentions potential legal actions and legislative efforts, but the primary focus is on the realized harm caused by the AI-generated content.

Taylor Swift Explicit AI Images Get SAG-AFTRA Concerned, White House 'Alarmed'

2024-01-27
AceShowbiz
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake explicit images (deepfakes) of a real person without consent, which is a direct violation of privacy and personal rights, thus constituting harm under the framework. The dissemination of these images has already occurred, causing harm to the individual and potentially to communities by enabling harassment and abuse. Therefore, this qualifies as an AI Incident due to realized harm linked directly to the AI system's use.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely circulated, constituting a violation of her rights and causing harm to her and her community. The AI system's use directly led to the creation and dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of generative AI models (diffusion models) is confirmed, and the harm is materialized, not just potential. Therefore, this event qualifies as an AI Incident.

How Taylor Swift's legions of fans fought back against fake nudes

2024-01-26
Portland Press Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the fake nude images were probably created by generative AI tools, which fits the definition of an AI system. The harm caused includes violations of privacy and nonconsensual use of intimate imagery, which are breaches of fundamental rights. The spread of these images and the resulting distress to the individual targeted (Taylor Swift) and the broader implications for others (women and teens) constitute realized harm. The involvement of AI in generating these images is central to the incident, and the article discusses the direct consequences and responses, confirming this as an AI Incident rather than a hazard or complementary information.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Omaha.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models (diffusion models) were used to create non-consensual pornographic deepfake images of Taylor Swift, which have been widely disseminated on social media. This constitutes a violation of rights and harm to the individual and community. The harm is realized, not just potential, as the images have spread to millions and caused distress. The involvement of AI in generating the harmful content and the resulting violation of rights and harm to the community meet the criteria for an AI Incident.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
Omaha.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative systems to create and disseminate explicit deepfake images without consent, directly causing harm to individuals, including minors and public figures. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article documents actual occurrences of such harms, not just potential risks, and discusses the societal and legal implications, confirming the classification as an AI Incident rather than a hazard or complementary information.

U.S. Lawmakers Push for Deepfake Image Criminalization in Wake of Taylor Swift Scandal

2024-01-28
cryptodaily.co.uk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create non-consensual deepfake images that have been widely circulated, causing harm to the individual depicted and raising broader societal concerns. The article explicitly mentions the use of AI for deepfake creation and the resulting harm, including privacy violations and potential human rights breaches. The legislative and platform responses are complementary information but do not negate the fact that harm has occurred. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI-generated content.

Fury as extremely graphic AI pictures of Taylor Swift go viral and outraged fans call out image-makers for harassment and predatory behavior

2024-01-25
expressdigest.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating non-consensual deepfake images that sexually exploit a real individual, Taylor Swift, which constitutes a violation of her rights and causes harm to her and her community of fans. The harm is realized and ongoing, as the images are circulating widely and causing distress. This fits the definition of an AI Incident because the AI system's use directly leads to harm (violation of rights and harassment). Although the article also discusses legal and societal responses, the primary focus is on the harm caused by the AI-generated images themselves, not just the responses, so it is not merely Complementary Information.

PROTECT TAYLOR SWIFT: Fans start powerful online movement in support of singer after vile explicit AI photos of her were shared on social media

2024-01-26
expressdigest.com
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to create harmful deepfake images that have been disseminated online, directly causing harm to Taylor Swift by violating her rights and exposing her to abuse. The AI system's use here is malicious and leads to realized harm, fitting the definition of an AI Incident. The article also discusses societal and legislative responses, but the primary focus is on the harm caused by the AI-generated content.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models (diffusion models) were used to create pornographic deepfake images of Taylor Swift, which have been widely shared on social media, causing harm through non-consensual sexual content and abuse. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI in generating the harmful content is direct and central to the event. The article also discusses responses and potential legal actions, but the primary focus is on the realized harm caused by the AI-generated deepfakes.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate harmful deepfake pornographic content without consent, directly causing violations of individuals' rights and harm to communities. The harm is realized, not just potential, as evidenced by reported cases of manipulated images being shared online and the distress caused to victims. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to persons and communities. The article does not merely discuss potential risks or responses but documents ongoing harm caused by AI-generated content.

White House 'alarmed' by Taylor Swift AI-generated sexually explicit images

2024-01-26
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake explicit images (deepfakes) of a real person, which constitutes a violation of rights and harm to the individual and community. The images have already spread widely, causing harm. Therefore, this is an AI Incident due to realized harm from AI-generated content causing violations of rights and harm to communities.

White House "Alarmed" After Taylor Swift, Joe Biden Deepfakes Surface Online

2024-01-27
Pragativadi: Leading Odia Daily
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake content that has directly led to harm by spreading false and damaging information about individuals, including public figures. The harm includes violations of personal rights and potential societal harm through misinformation. The AI system's use in creating and spreading these deepfakes is central to the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated manipulated media.

Taylor Swift deepfake images prompt US politicians to call for new laws

2024-01-26
Express & Star
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift were posted on social media, constituting sexual exploitation and non-consensual use of her likeness. This is a clear violation of rights and causes harm to the individual and communities. The involvement of AI in generating these images and the resulting harm meets the criteria for an AI Incident. The calls for legislation and platform actions are responses to this incident, but the primary event is the harm caused by the AI-generated content.

Deepfake Explicit Images of Taylor Swift Spread on Social Media

2024-01-27
NTD
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create non-consensual pornographic deepfake images of Taylor Swift, which have spread widely on social media, causing harm to the individual and communities. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The involvement of AI is clear and the harm is realized, not just potential. The article also discusses responses and mitigation efforts, but the primary focus is on the incident itself.

White House Responds to Explicit AI Generated Images of Taylor Swift

2024-01-28
NTD
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated sexually explicit images of a real person (Taylor Swift) being disseminated online, which is a direct violation of personal rights and constitutes harm to the individual and communities targeted by such abuse. The involvement of AI systems in generating these images is clear, and the harm is realized, not hypothetical. The White House's concern and calls for legislative action further underscore the significance of the harm. Therefore, this event meets the criteria for an AI Incident due to violations of rights and harm to communities caused by the AI system's outputs.

Fans fume in anger as nude deepfake pictures of Taylor Swift go viral on social media, X suspends accused account

2024-01-27
NewsroomPost
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that depict Taylor Swift in explicit sexual scenarios without her consent. This use of AI directly leads to harm by violating her rights and causing reputational and emotional harm, which fits the definition of an AI Incident under violations of human rights and harm to communities. The involvement of AI in creating these images is clear, and the harm is realized as the images went viral and caused public outrage. The platform's suspension of accounts is a response but does not negate the incident. Hence, this event is classified as an AI Incident.

Outrage over deepfake porn images of Taylor Swift

2024-01-26
RTL Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create deepfake pornographic images without consent, which is a clear violation of human rights and privacy. The harm is realized as the images went viral and were viewed millions of times, causing reputational and emotional harm to the individual targeted. The AI system's role is pivotal as it enabled the creation of these realistic fake images. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

X enforces Taylor Swift search ban after deepfake pornography floods social media

2024-01-29
WAtoday
Why's our monitor labelling this an incident or hazard?
The article describes the circulation of sexually explicit AI-generated deepfake images of Taylor Swift, which is a direct harm to the individual (harm to person) and a violation of rights. The AI system's use in generating these images has directly led to this harm. The platform's response to block searches is a mitigation measure but does not negate the fact that harm has occurred. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Fake online images of Taylor Swift alarm White House

2024-01-27
Otago Daily Times Online News
Why's our monitor labelling this an incident or hazard?
The fake images are likely generated by AI systems (e.g., deepfake or generative AI) and have been widely disseminated, causing harm to the person depicted and potentially to communities by spreading misinformation and non-consensual intimate imagery. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI in creating the images and the resulting harm is direct and material.

" Protect Taylor Swift" Trends on X after Pop Icon's Deepfake Explicit Images Went Viral, White Responds

2024-01-27
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (deepfake technology) to generate harmful content that has been disseminated, causing direct harm to the individual and communities through harassment and abuse. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm. The article also discusses societal and governance responses, but the primary focus is on the realized harm caused by the AI-generated images, not just the response, so it is not merely Complementary Information.

Deepfake Dilemma: AI-Generated Manipulated Media Raises Alarms | Cryptopolitan

2024-01-27
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake content that has been widely shared and caused harm, such as fake explicit images of Taylor Swift that went viral and emotional distress to public figures. It also highlights the broader societal harm from AI-generated misinformation and manipulated media, especially around elections. The involvement of AI systems in generating and disseminating this harmful content is clear, and the harms are realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly and indirectly led to harm to individuals and communities.

Swifties Uncover Culprit Behind AI-Generated NSFW Taylor Swift Images

2024-01-27
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated non-consensual explicit images, which is a clear violation of privacy and can be considered a breach of fundamental rights. The AI system's role in generating this harmful content is central to the incident. The harm is realized as the images were circulated, causing reputational and emotional damage. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its distribution.

Taylor Swift AI Photos Spark Outrage as Fans Call For Action

2024-01-25
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating deepfake images without consent, which is a misuse of AI technology leading to violations of privacy and personal rights. The harm is realized as the images are circulating online, causing distress and outrage, fulfilling the criteria for harm to individuals and communities. The event is not merely a potential risk but an actual occurrence of harm caused by AI-generated content.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create explicit deepfake images of Taylor Swift without her consent, which have been widely disseminated on social media. This constitutes a violation of human rights and causes harm to the individual and communities, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential. The involvement of AI in generating the harmful content is clear and central to the event. Therefore, this event qualifies as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (diffusion models like Stable Diffusion, Midjourney, DALL-E) used to generate harmful deepfake images. The harm is realized and ongoing, including violations of rights (non-consensual sexual content), reputational harm, and community harm through the spread of abusive content. The AI system's use directly led to these harms. The article also discusses platform responses and legislative efforts, but the primary focus is on the incident of harm caused by AI-generated content. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Deepfake explicit images of Taylor Swift spread on social media

2024-01-28
Sun.Star Network Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread using AI systems. The harm is realized as these images are non-consensual, sexually explicit, and abusive, directly impacting the victim's rights and dignity. The article describes the harm as occurring currently, with millions of users exposed to the content. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back | News Channel 3-12

2024-01-26
NewsChannel 3-12
Why's our monitor labelling this an incident or hazard?
The creation and distribution of non-consensual deepfake explicit images involve AI systems generating harmful content without consent, violating human rights and privacy. The harm is realized as the images are actively spreading on social media, causing reputational and emotional damage. The AI system's role is pivotal in generating the deepfakes, making this an AI Incident under the framework's definition of violations of human rights and harm to communities.

Fake online images of Taylor Swift alarm White House

2024-01-27
Global Village Space
Why's our monitor labelling this an incident or hazard?
The fake images are likely generated or manipulated using AI technologies, such as deepfake or generative AI, which directly leads to harm by spreading misinformation and violating the rights of the individual depicted. The harm is realized as the images have been widely viewed and shared, causing reputational and emotional harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content.

Taylor Swift's AI-Generated Nude Images Went Viral On X

2024-01-26
CTN News l Chiang Rai Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images, which are synthetic and manipulated visual content created by AI algorithms. The harm is realized as the images are nonconsensual, sexually explicit, and widely disseminated, causing direct harm to the individual's rights and dignity, as well as harm to communities through misinformation and harassment. The platform's inadequate response exacerbates the harm. Therefore, this qualifies as an AI Incident due to the direct and significant harm caused by the AI-generated content's creation and distribution.

Taylor Swift explicit deepfake images spread online, sparking outrage

2024-01-27
Socialite Life
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools were used to create deepfake images that are sexually explicit and nonconsensual, which have been widely viewed and shared, causing harm to the individual depicted and potentially to the broader community. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the community. The harm is realized, not just potential, and the AI system's role is pivotal in creating and spreading the harmful content.

NSFW Taylor Swift AI Photos Draw Shock and Outrage From Fans

2024-01-25
PopCrush
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated fake explicit images of Taylor Swift being circulated, which is a direct use of AI systems to create harmful content. This nonconsensual creation and distribution of explicit images violate personal rights and cause psychological and reputational harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is occurring, not just potential, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the AI system's use directly leads to harm.

White House Alarmed Over Taylor Swift Deepfakes, Calls for New Legislation Rise in Congress

2024-01-27
The Tech Report
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and distribution of AI-generated deepfake images without consent, which constitutes a violation of rights and harm to the individual and communities. The AI system's use in generating these images is central to the harm caused. The widespread distribution and public reaction confirm that harm has occurred. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities. The legislative response and platform actions are complementary information but do not change the classification of the core event.

Taylor Swift A.I. Pornographic Images Spark Massive Outrage - Rare

2024-01-25
Rare
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated pornographic images created without consent, which constitutes a violation of privacy and potentially other rights. The AI system's use directly leads to harm by producing and disseminating harmful content. The harm is realized, not just potential, as the images are actively circulating and causing outrage. This fits the definition of an AI Incident due to the direct link between AI use and harm to a person's rights and dignity.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
The Daily Progress
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake images and videos that have been used maliciously to create and spread non-consensual pornographic content, which constitutes a violation of rights and harm to individuals and communities. The harms are realized and ongoing, with examples including Taylor Swift and minors. The AI systems' use in generating these images is central to the harm described. Hence, this qualifies as an AI Incident under the framework, as the AI systems' use has directly led to significant harm.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
The Daily Progress
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images (created by diffusion models, a type of generative AI) have been widely circulated, causing harm to Taylor Swift and contributing to a broader issue of non-consensual pornographic deepfakes targeting women. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content. The article also discusses responses and legal considerations, but the primary focus is on the harm caused by the AI-generated deepfakes.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create deepfake pornographic images of Taylor Swift without her consent. The circulation of these images on social media platforms constitutes a violation of human rights and causes harm to the individual and communities, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential. The involvement of AI in generating the images and the resulting non-consensual distribution directly links the AI system's use to the harm described. Therefore, this event qualifies as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI generative models (diffusion models) used to create deepfake images that sexualize and objectify Taylor Swift without her consent. The widespread sharing of these images on social media platforms constitutes a violation of rights and harm to the individual and community. The harm is realized and ongoing, meeting the criteria for an AI Incident. The involvement of AI in generating the harmful content is direct and central to the incident.

Taylor Swift AI Deepfake Porn and the Future - Live Trading News

2024-01-28
Live Trading News
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake pornographic images without consent, directly causing harm to the individual depicted (Taylor Swift) and potentially to wider communities through the spread of harmful content. This fits the definition of an AI Incident as it involves harm to persons and communities (harm types (a) and (d)) and violations of rights (harm type (c)). The article describes realized harm, not just potential harm, and discusses the platform's response and legislative considerations, but the primary focus is on the incident and its consequences rather than complementary information or future hazards.

How Taylor Swift's legions of fans fought back against fake nudes

2024-01-26
Lewiston Sun Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that sexualize Taylor Swift without her consent, which is a violation of rights and causes harm to the individual. The AI system's role in generating and spreading these images is direct and pivotal. The harm is realized, not just potential, as the images have been widely viewed and circulated. The event fits the definition of an AI Incident because it involves harm to a person through violations of rights and reputational damage caused by AI-generated content.

Outrage over deepfake porn images of Taylor Swift

2024-01-26
Daily Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake pornographic images without consent, which constitutes a violation of personal rights and causes harm to the individual targeted. The widespread dissemination of such content on social media platforms directly leads to harm to the individual and communities by spreading toxic and harmful material. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities).

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The article explicitly states that sexually explicit AI-generated deepfake images of Taylor Swift, created without her consent, have been widely spread on social media, harming her personal rights and dignity. The AI system (diffusion models such as Stable Diffusion, Midjourney, and DALL-E) was used to generate these images, and their dissemination has led to realized harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article also discusses responses from platforms and lawmakers, but the primary focus is on the incident of harm caused by AI-generated content.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create deepfake images that are sexually explicit and non-consensual, causing harm to Taylor Swift and potentially others. The harm includes violations of rights and harm to communities through abusive content dissemination. The AI system's use is central to the creation and spread of these images, fulfilling the criteria for an AI Incident. The article also mentions ongoing efforts to remove the content, but the harm is already occurring, so it is not merely complementary information or a hazard.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and circulation of AI-generated deepfake images that are pornographic and non-consensual, which constitutes a violation of human rights and causes harm to the individual depicted and the broader community. The AI system (diffusion models like Stable Diffusion, Midjourney, and DALL-E) is directly involved in generating these harmful images. The harm is realized and ongoing, as the images have spread widely and caused distress, making this an AI Incident under the framework definitions.

US-tech-Swift-deepfake

2024-01-26
nampa.org
Why's our monitor labelling this an incident or hazard?
The creation and viral spread of AI-generated deepfake pornographic images of Taylor Swift clearly involve an AI system (deepfake generation). The harm is realized as the images are non-consensual, violate privacy and potentially other rights, and have caused outrage and reputational harm. This fits the definition of an AI Incident as the AI system's use directly led to violations of rights and harm to communities.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (diffusion models like Stable Diffusion, Midjourney, and DALL-E) used to generate explicit deepfake images without consent. The spread of these images constitutes a violation of rights and harm to the individual and communities, fulfilling the criteria for an AI Incident. The article describes actual harm caused by the AI-generated content, not just potential harm, and details responses by platforms and lawmakers, confirming the incident's significance.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Magic Valley
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images were created and circulated, causing harm to Taylor Swift and potentially others. The harm includes violation of rights (non-consensual intimate images) and harm to communities (social and reputational harm). The AI system's use is central to the incident, as the images are generated by diffusion models, a type of generative AI. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake pornographic images and videos without consent, targeting women and children globally. The harms are realized and significant, including violations of rights and psychological harm. The AI's role is pivotal as the technology enables the creation of convincingly real but fake explicit content that is shared widely, causing direct harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to individuals and communities.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
NewsAdvance.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative diffusion models) used to create harmful deepfake images that are non-consensual and sexually explicit, which directly harms the individual depicted (Taylor Swift) and potentially others. The harm includes violation of rights and reputational/emotional damage, fitting the definition of an AI Incident. The article describes the harm as occurring and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content. Hence, it is classified as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
NewsAdvance.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models (diffusion models) were used to create explicit deepfake images of Taylor Swift, which have been widely disseminated on social media, causing harm to her rights and dignity. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content. The article also discusses societal and governance responses, but the primary focus is on the harm caused by the AI-generated deepfakes.

'Very powerful law': Victoria lawyer on new intimate images protection act in B.C.

2024-01-28
CHEK
Why's our monitor labelling this an incident or hazard?
The article centers on the enactment of legislation and the establishment of mechanisms (like the Civil Resolution Tribunal portal) to address harms caused by AI-generated intimate images. It does not describe a specific AI incident where harm has occurred due to AI system malfunction or misuse, nor does it describe a plausible future harm event. Instead, it provides complementary information about societal and legal responses to an existing AI-related harm problem. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI harms and responses without reporting a new incident or hazard.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models were used to create pornographic deepfake images of Taylor Swift without her consent. The circulation of these images on social media platforms constitutes a violation of her rights and causes harm to her and her community of fans. The harm is direct and realized, not merely potential. The involvement of AI in generating these images is clear and central to the incident. Hence, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's outputs.

Fake online images of Taylor Swift alarm White House

2024-01-27
chinadailyhk
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create fake explicit images of a real person, leading to harm in the form of misinformation and violation of privacy and rights. The harm is realized as the images have been widely viewed and spread, causing reputational and emotional damage. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and communities through misinformation and non-consensual imagery.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift, a public figure, have been widely circulated on social media, causing harm through non-consensual sexual content. The AI system's use directly led to violations of rights and harm to the individual and community. The involvement of diffusion models (a type of generative AI) is confirmed, and the harm is realized, not just potential. The incident fits the definition of an AI Incident as it involves the use of AI systems leading to violations of human rights and harm to communities. The article also discusses responses and mitigation efforts, but the primary focus is on the harm caused by the AI-generated content.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative diffusion models) to create explicit deepfake images, which have been widely disseminated, causing harm to Taylor Swift's rights and dignity. This is a direct harm caused by the AI system's outputs, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article describes realized harm, not just potential harm, and thus it is classified as an AI Incident.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake image creation) that have directly led to harm through the non-consensual creation and distribution of explicit images, affecting individuals' privacy, dignity, and causing emotional and social harm. This fits the definition of an AI Incident because the AI's use has directly caused violations of rights and harm to communities. The article details actual harm occurring, not just potential harm, and discusses the societal impact and responses, confirming the classification as an AI Incident rather than a hazard or complementary information.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift are being spread on social media, constituting non-consensual pornography. This is a clear violation of rights and causes harm to the individual depicted and the broader community. The AI system's development and use have directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and directly linked to the AI system's outputs.

Taylor Swift Searches Fail on Twitter After Fake Explicit Images Go Viral

2024-01-28
92.7 WOBM
Why's our monitor labelling this an incident or hazard?
The AI system was used to create non-consensual explicit deepfake images, which directly harms the individual's rights and dignity, constituting a violation of human rights and causing harm to the community. The platform's response to remove search queries and content is a mitigation effort but does not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated content violating rights and causing reputational and emotional damage.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world - KION546

2024-01-26
KION546
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate explicit deepfake images and videos without consent, directly causing harm to individuals, including women and minors. The harms include violations of privacy, dignity, and psychological well-being, which are recognized as violations of human rights and harm to communities. The article documents actual incidents of such harms occurring globally, including specific cases involving high school students and public figures. The AI system's role is pivotal as it enables the creation of convincingly real but fake pornographic content. The ongoing circulation and impact of these images constitute a direct AI Incident rather than a potential hazard or complementary information.

Fake online images of Taylor Swift alarm White House

2024-01-26
Shore News Network
Why's our monitor labelling this an incident or hazard?
The fake images are likely generated or manipulated using AI technologies, such as deepfakes or generative AI, which directly leads to harm by violating the individual's rights and spreading misinformation. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual and community). The article describes realized harm rather than potential harm, so it is not a hazard. The focus is on the harm caused by AI-generated content, not just a general update or policy discussion, so it is not merely complementary information.

Taylor Swift Ai Photos Graphic: Twitter, Pictures 4chan, Link Information! - DODBUZZ

2024-01-27
DODBUZZ
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate explicit fake images of Taylor Swift, which were then spread widely online. This constitutes a violation of rights (privacy and possibly intellectual property) and causes harm to the individual and communities (fans and public). The AI system's development and use directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm caused by AI-generated content.

Explicit AI deepfakes of Taylor Swift have fans and lawmakers up in arms - RocketNews

2024-01-26
RocketNews | Top News Stories From Around the Globe
Why's our monitor labelling this an incident or hazard?
The event describes explicit AI-generated deepfake content depicting Taylor Swift without her consent, which constitutes a violation of personal rights and causes harm to the individual and community. The AI systems used for generating these deepfakes are directly involved in producing harmful content. The harm is realized and ongoing, as evidenced by public condemnation and legislative attention. This fits the definition of an AI Incident due to violations of rights and harm to communities caused by the AI system's outputs.

Taylor Swift Weighs Response as AI-Generated Nudes of the Pop Star Sweep the Internet

2024-01-26
The New York Sun
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic images without consent, which directly harms the individual depicted by violating privacy and causing emotional distress. This fits the definition of an AI Incident as it involves harm to a person (violation of rights and dignity) caused by the use of AI-generated content. The article details the harm already occurring, the legal implications, and societal reactions, confirming the realized harm rather than just potential risk.

US-tech-Swift-deepfake newseries

2024-01-26
nampa.org
Why's our monitor labelling this an incident or hazard?
The use of AI to generate fake pornographic images constitutes a violation of human rights, specifically privacy and dignity, and causes harm to the community by spreading harmful and non-consensual content. The AI system's use directly led to this harm, qualifying the event as an AI Incident.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
WAAY TV 31
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated explicit images and videos without consent, which constitutes a violation of human rights and causes harm to individuals and communities. The harm is realized and ongoing, as evidenced by the sharing of these images and the distress caused to victims. The AI system's use in creating and spreading this content is central to the incident. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

X Blocks Searches For Taylor Swift After Explicit Deepfake Images Go Viral

2024-01-28
KIIS 1011 Melbourne
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are outputs of AI systems designed to create realistic but fake content. The viral spread of these images caused harm by violating the privacy and dignity of Taylor Swift and potentially misleading the public. The social media platform's intervention indicates recognition of the harm caused. Therefore, the AI system's use directly led to harm as defined under AI Incident criteria, specifically violations of rights and harm to communities.

BREAKING: Furious Taylor Swift Considers LEGAL ACTION After Disturbing Graphic AI Images Emerge

2024-01-26
Small Joys
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating harmful content (disturbing graphic images) that has been distributed without consent, causing reputational and emotional harm to Taylor Swift. This fits the definition of an AI Incident because the AI system's use has directly led to harm, specifically violations of rights and harm to the individual's reputation and community. The mention of legal action and calls for greater security further support the recognition of realized harm rather than just potential harm.

BREAKING: Fans Rush To Taylor Swift's Side As 'Protect Taylor Swift' TRENDS On X After Alarming AI Incident

2024-01-26
Small Joys
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images of Taylor Swift created without her consent, which constitutes a violation of her rights, specifically intellectual property and personal rights. This unauthorized use of AI-generated content has led to public outcry and a trending social media movement. The AI system's role in creating these images is direct and central to the issue. Although no physical harm or legal outcomes are mentioned, the violation of rights through unauthorized AI-generated content fits the definition of an AI Incident. The harm is realized in terms of rights violations and reputational impact, not merely potential or future harm, so it is not an AI Hazard or Complementary Information.

AI nudes of Taylor Swift go viral - what will she do about it? - Tortoise

2024-01-26
Tortoise
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content created using text-to-image AI systems. The images are non-consensual and sexually explicit, which constitutes a violation of personal rights and is abusive and exploitative, fulfilling the criteria for harm to individuals and communities. The widespread sharing and viral nature of the content demonstrate direct harm caused by the AI system's use. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of AI systems.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
The Trentonian
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the deepfake images were created using generative AI diffusion models and that these images are non-consensual and sexually explicit, constituting a violation of rights and harm to the individual and community. The widespread circulation of these images on social media platforms directly led to harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content and the resulting violation of rights and harm to the community justifies classification as an AI Incident rather than a hazard or complementary information.

Deepfake porn images of Taylor Swift are spreading online

2024-01-27
Silicon Valley
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models (diffusion models like Stable Diffusion, Midjourney, and DALL-E) were used to create deepfake pornographic images of Taylor Swift without her consent. The widespread circulation of these images on social media platforms constitutes a violation of human rights and causes harm to the individual and communities (harm to reputation, privacy, and emotional well-being). The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article also discusses responses from platforms and lawmakers, but the primary focus is on the realized harm caused by the AI-generated content.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
Dothan Eagle
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI generative models (diffusion models) were used to create pornographic deepfake images of Taylor Swift, which have been widely circulated on social media, causing harm to her and her community of fans. The harm is realized, not just potential, as the images are actively spreading and causing distress. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to communities through non-consensual explicit content.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-28
The Virgin Islands Daily News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative diffusion models) used to create harmful deepfake images that have been disseminated widely, causing direct harm to the individual depicted and potentially to communities through abusive content. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images have circulated and caused distress. The involvement of AI in generating the images is confirmed by researchers with high confidence. Therefore, this event is classified as an AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
Beckley Register-Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely shared, constituting non-consensual sexualized content. This is a clear violation of rights and causes harm to the individual and community. The AI system's development and use (diffusion models generating photorealistic images) directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and directly linked to AI system use.

Graphic deepfakes of Taylor Swift spark outrage online

2024-01-26
Preen.ph
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images of Taylor Swift that have been widely circulated online, causing outrage and harm. The AI system was used to create non-consensual explicit content, which is a violation of rights and constitutes harm to the individual and communities. The involvement of AI in generating these images and their distribution leading to harm fits the definition of an AI Incident, as the harm is realized and directly linked to the AI system's use.

Taylor Swift "Furious" Over AI Generated Sexually Explicit Images

2024-01-26
Courier-Tribune
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate sexually explicit deepfake images without consent, which is a direct violation of human rights and privacy. The harm is realized as the images are circulating online, causing reputational and emotional harm to Taylor Swift. The involvement of AI in creating these images and their distribution leading to harm fits the definition of an AI Incident, specifically under violations of human rights and harm to communities.

Taylor Swift deepfake images prompt US politicians to call for new laws

2024-01-26
Rhyl, Prestatyn and Abergele Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake images causing harm through sexual exploitation and violation of individuals' rights, which fits the definition of an AI Incident due to violation of human rights and harm to individuals. The harm is realized as the images have been circulated, and the AI system's role in generating these images is pivotal. The political response and platform actions are complementary but do not negate the incident classification.

Taylor Swift, the new victim of "deepfake" pornography in X

2024-01-26
Softonic
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a direct use of AI systems to create harmful content. The harm includes violation of human rights, specifically privacy and dignity, and the widespread dissemination of such content causes significant harm to the individual and communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through the creation and spread of non-consensual explicit images.

X suspends account that posted Taylor Swift AI porn - only for another account to show it - as same graphic images now circulate on Facebook and Instagram

2024-01-25
This is where news and blogging come alive
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated pornographic images of Taylor Swift being shared on social media, which is a clear example of AI system use leading to harm. The harm includes violation of personal rights and reputational damage, which falls under violations of human rights and breach of applicable laws protecting individual rights. The AI system's role is pivotal as it generated the harmful content. The circulation of these images and the resulting public backlash confirm that harm has occurred, meeting the criteria for an AI Incident rather than a hazard or complementary information.

AI-Generated Taylor Swift Porn Went Viral on Twitter. Here's How It Got There

2024-01-25
404 Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (text-to-image AI generators) used to create non-consensual sexual images, which constitutes a violation of rights and harm to the individual and communities. The widespread viral dissemination of these images on Twitter indicates realized harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities caused by the AI system's use.

Explicit AI deepfakes of Taylor Swift have fans and lawmakers up in arms

2024-01-26
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event clearly describes the use of AI image generation tools, including open-source models like Stable Diffusion, to create explicit deepfake content without consent, which constitutes a violation of rights and harm to the individual depicted. The widespread dissemination of this content on social media platforms and the resulting public and legislative reactions confirm that harm has occurred. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to the community. The legislative efforts and discussions about regulation are complementary information but do not change the classification of the primary event as an incident.

New vivo Y27s comes with 256GB storage

2024-01-26
GadgetMatch
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create explicit deepfake content, which constitutes a violation of personal rights and can be considered harm to communities and individuals. The AI system's use directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article describes realized harm through the spread of explicit AI-generated images and the resulting social backlash, not just potential harm.

Fans fight back as 'disgusting' Taylor Swift deepfakes shared on X

2024-01-26
PinkNews
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and sharing of AI-generated deepfake images that are sexually explicit and nonconsensual, which is a clear violation of personal rights and causes harm to the individual depicted. The AI system's role in fabricating these images and enabling their spread on social media platforms directly leads to harm to the person and communities involved. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm linked to AI misuse.

The internet is filled with 'deepfake' Taylor Swift porn, evidencing the dangers of AI for women

2024-01-26
EL PAÍS English
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and dissemination of AI-generated deepfake pornography targeting women, which is a clear violation of rights and causes harm to the individuals involved. The AI system's role in generating these images is central to the incident, and the harm is realized through the spread and impact of these images. Therefore, this qualifies as an AI Incident due to violations of human rights and harm to communities caused by the AI system's use.

Fake, AI-generated nudes of Taylor Swift go viral on social media

2024-01-26
Scripps News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are nonconsensual and sexually explicit, which have been widely disseminated causing harm to the individual and communities. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article also references legal and governance responses, but the primary focus is on the harm caused by the AI-generated content itself, not just the responses, so it is not merely Complementary Information.

Explicit deepfake Taylor Swift images spread online, fans fight back

2024-01-26
1 News
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create explicit deepfake images without consent, which have been widely shared online, causing harm to the individual depicted and potentially to the broader community. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The presence of harm is explicit, not just potential, and the AI system's role is pivotal in generating the harmful content. Therefore, this is classified as an AI Incident.

White House sounds alarm over explicit AI-generated Taylor Swift

2024-01-27
The Business Standard
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of Taylor Swift being widely circulated, which constitutes a violation of privacy and non-consensual use of her likeness, causing harm to her and potentially to others. The AI system's use in generating these images is central to the harm, fulfilling the criteria for an AI Incident due to violations of rights and harm to communities. The White House's response and calls for legislation further confirm the recognition of actual harm caused by the AI system's outputs.

AI horror: Outrage over deepfake images of singer Taylor Swift

2024-01-27
HT Tech
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake images that are non-consensual and sexually explicit, which constitutes a violation of human rights and causes harm to the individual and communities. The harm is direct and realized, as the images were widely viewed and caused outrage. The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The event is not merely a potential risk or complementary information but a clear case of harm caused by AI-generated content.

Deepfake explicit images of Taylor Swift spread on social media; Sparks Outrage

2024-01-27
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift, created using diffusion models (a type of generative AI), have been widely circulated on social media, causing harm through non-consensual sexualized and violent depictions. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the images have spread to millions of users and have prompted public outrage and legislative responses. The AI system's use in generating these images directly led to the harm described.

Why We Need Action To 'Protect Taylor Swift' (And Other Women) Onl...

2024-01-27
SheThePeople
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (diffusion models) to create deepfake images that have been widely shared, causing harm to Taylor Swift and highlighting broader risks to women online. This constitutes an AI Incident because the AI system's use directly led to violations of rights and harm to the individual and community. The article focuses on the realized harm and the societal and governance responses, but the primary event is the AI-driven harm itself, qualifying it as an AI Incident rather than merely complementary information or a hazard.

Fake online images of Taylor Swift alarm White House

2024-01-27
ARN News Centre
Why's our monitor labelling this an incident or hazard?
The fake images were likely generated or manipulated using AI technologies, such as generative AI models, which fit the definition of an AI system. The spread of these images causes harm by violating personal rights and spreading misinformation, which affects communities. Since the harm is occurring and the AI system's use is a contributing factor, this qualifies as an AI Incident.

Taylor Swift's fake nude pictures have been clicked on so many times that there could be a law against deepfakes

2024-01-27
newsbeezer.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake images, which are manipulated content created by AI. The harm is realized as these images have been widely shared, causing emotional and reputational damage to the individuals depicted, which fits the definition of harm to persons and communities. The article also discusses legal and policy responses, but the primary focus is on the ongoing harm caused by the AI-generated deepfakes. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Taylor Swift's AI 'deepfakes' shock fans! What happened and where did the images come from?

2024-01-27
News9live
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are sexually explicit and non-consensual, which have been widely disseminated and caused harm to Taylor Swift. This constitutes a violation of rights and abuse facilitated by AI technology. The harm is realized and ongoing, meeting the criteria for an AI Incident. The involvement of AI in creating the deepfakes and the resulting harm to the individual and community is clear and direct.

U.S. lawmakers propose quick legislation in response to Taylor Swift deepfake

2024-01-27
TradingView
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread sharing of AI-generated deepfake images that are explicit and non-consensual, which is a clear violation of rights and causes harm to the individual depicted. The AI system's role in producing these manipulated images is central to the harm. The lawmakers' call for legislation and the platform's removal efforts indicate the harm is occurring and recognized. Therefore, this qualifies as an AI Incident due to realized harm linked to AI system use.

White House Is 'Alarmed' by Graphic Taylor Swift AI Photos

2024-01-27
see.news
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create fake graphic images that have been disseminated, causing harm related to misinformation and violation of personal rights (non-consensual intimate imagery). This constitutes a violation of rights and harm to individuals and communities. Since the harm is occurring due to the AI-generated content, this qualifies as an AI Incident. The White House's response and call for action further confirm the recognition of harm caused by AI misuse.

Taylor Swift becomes latest victim to AI fake porn, US calls for rules

2024-01-27
Techlusive
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated deepfake pornographic images of a public figure, Taylor Swift, which is a direct violation of her rights and causes significant harm. The AI system's use in generating these images is explicit, and the harm (violation of rights and reputational damage) has already occurred. The article also mentions calls for legislative action and platform responsibility, but the primary focus is on the realized harm caused by the AI-generated content. Therefore, this event meets the criteria for an AI Incident.

White House, Microsoft respond to crude AI images of Taylor Swift

2024-01-27
Consequence
Why's our monitor labelling this an incident or hazard?
The event describes actual harm caused by AI-generated deepfake images that are sexually explicit and nonconsensual, directly impacting Taylor Swift's rights and privacy. The AI system's use in generating these images is explicit, and the harm is realized, not hypothetical. Therefore, this qualifies as an AI Incident. Although the article also discusses responses and potential legislative measures, the primary focus is on the incident of harm caused by AI-generated content, which takes precedence over the complementary information about responses.

X Bans Any Taylor Swift Search Results From App After AI-Generated, Pornographic Images Of The Singer Surfaced

2024-01-28
OutKick
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (deepfake pornographic images) that has been disseminated without consent, causing harm to the individual's privacy and autonomy, which is a violation of fundamental rights. The AI system's use in generating these images directly led to this harm. The platform's response to ban search results is a mitigation measure but does not negate the incident. Therefore, this is an AI Incident due to realized harm from AI-generated non-consensual intimate images.

White House responds to AI-faked Taylor Swift nudes, calls for regulation

2024-01-28
THE DECODER
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake nude images (deepfakes) of real people, which have been distributed widely, causing psychological and reputational harm. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident. The involvement of AI in generating the images is clear, and the harm is realized, not just potential. The White House's call for regulation is a response to this incident, but the primary event is the harm caused by the AI-generated content.

Taylor Swift Is Not Searchable on X After Disturbing Deepfakes

2024-01-28
The Messenger
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are sexually explicit and non-consensual, which is a direct violation of privacy and personal rights, causing harm to the individual depicted and potentially to broader communities by enabling harassment and abuse. The AI system's development and use in creating these images is central to the harm described. The article describes realized harm rather than just potential harm, qualifying this as an AI Incident under the framework definitions.

Outrage over deepfake porn images of Taylor Swift

2024-01-28
The New Paper
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic images, which are non-consensual and sexually explicit, targeting a public figure and others, causing harm to individuals' rights and communities. The harm is realized as the images went viral and continue to circulate, leading to public outrage and calls for legislative action. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through harassment and dissemination of harmful content.

X implements restrictions on Taylor Swift searches amidst deep-fake image controversy

2024-01-28
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated explicit images (deepfakes) of a person causing significant harm through the spread of non-consensual explicit content, which violates rights and breaches community standards. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The platform's blocking of searches and hiring of trust and safety staff are responses to this incident, not the primary event. Hence, the classification is AI Incident.

Deepfakes of Taylor Swift have gone viral. How does this keep happening?

2024-01-26
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake creation) that have directly led to harm (psychological, personal, and reputational) to individuals through nonconsensual sexual imagery. This fits the definition of an AI Incident because the AI system's use has directly caused violations of rights and harm to communities. The article details actual harm occurring, not just potential harm, and discusses the societal and legal context, but the primary focus is on the incident of harm itself rather than just responses or updates. Therefore, the classification is AI Incident.

Deepfaked nudes of Taylor Swift demonstrate that we need regulation of AI now: experts

2024-01-28
CTV News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that have been widely circulated, causing harm through non-consensual intimate imagery, which is a violation of rights and harassment. This meets the criteria for an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual and community). Although the article discusses regulatory responses and potential legislation, these are complementary to the main incident of harm. Therefore, the classification is AI Incident.

Taylor Swift AI Nudes Provoke Fandom Uproar on X: "Disgusting as Hell"

2024-01-25
The Hollywood Reporter
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems capable of generating realistic fake content. The harm is realized as the images are non-consensual, sexualized, and widely disseminated, causing distress and violating rights. The involvement of AI in the creation and spread of these images directly led to the harm. The article also references legal and governance responses, but the primary focus is on the incident of harm caused by the AI-generated content. Hence, the classification is AI Incident.

US lawmakers weigh-in on deepfakes after explicit Taylor Swift images are shared online

2024-01-28
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating non-consensual explicit deepfake images, which have been widely disseminated, causing harm to the individual (Taylor Swift) and potentially to communities through reputational and emotional damage. This fits the definition of an AI Incident as the AI system's use has directly led to harm. Although legislative responses are discussed, the main subject is the incident of harm itself, not just the response, so it is not merely Complementary Information. Therefore, the classification is AI Incident.

Outrage over deepfake explicit images of Taylor Swift

2024-01-27
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create explicit deepfake images without consent, which have been widely disseminated, causing direct harm to the individual (Taylor Swift) and potentially to other victims (non-celebrities). This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in generating the harmful content. Therefore, the classification is AI Incident.

New Law Would Illegalize AI Taylor Swift Porn Flooding Internet

2024-01-26
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate nonconsensual deepfake pornography, which has directly led to harm to individuals' rights and dignity, including minors. The article documents actual harm occurring due to the AI-generated content being spread online. The legislative response is a reaction to this harm but does not negate the fact that the AI system's use has caused an AI Incident. The harm includes violation of personal rights and emotional distress, fitting the definition of an AI Incident under violations of human rights and harm to communities. Thus, the classification is AI Incident.

Taylor Swift Considering Legal Action Against Deepfake Porn Site Circulating Explicit AI Images

2024-01-26
Tech Times
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating explicit deepfake images without consent, which is a direct violation of personal rights and constitutes harm to the individual (Taylor Swift). The AI system's use in creating and disseminating these images has directly led to harm, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The article details the harm already occurring and the legal considerations arising from it, rather than just potential future harm or general AI news, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.

Taylor Swift Searches Blocked by X Amid AI-generated Images Controversy

2024-01-28
LatestLY
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images of Taylor Swift that are non-consensual and sexually explicit, which constitutes a violation of privacy and rights (a breach of fundamental rights). The AI system's use in generating these images has directly led to harm, including emotional distress and potential reputational damage. The dissemination of these images on social media platforms further compounds the harm. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual). The legislative and social responses confirm the recognition of this harm. Hence, the classification is AI Incident.

Taylor Swift's Deepfake Photos Sparks New Call for AI Regulation

2024-01-26
Coingape
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images and videos that have been widely disseminated, causing harm to individuals' privacy and enabling fraudulent schemes. The involvement of AI systems in creating these manipulated media is clear, and the harms have already occurred, fulfilling the criteria for an AI Incident. While the article also discusses regulatory efforts and platform actions, these serve as context and responses to the incident rather than the main event. Hence, the classification is AI Incident.

Protect Taylor Swift: Social media users defend Swift amid AI pictures trend

2024-01-26
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (deepfake technology) used maliciously to create and disseminate harmful content without consent, which is a violation of rights and causes harm to the individual targeted. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational harm). Although there is mention of legal and societal responses, the main subject is the harm caused by the AI-generated images, not just the responses, so it is not merely Complementary Information. Therefore, the classification is AI Incident.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
nwi.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake creation) that have directly led to harm by producing and disseminating non-consensual explicit images and videos of real individuals, including minors. This constitutes violations of personal rights and harm to individuals and communities. The article documents actual incidents and ongoing harm, not just potential risks, thus qualifying as an AI Incident. The involvement of AI in generating the harmful content is explicit and central to the harm described. Therefore, the classification is AI Incident.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-26
nwi.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative diffusion models) used to create harmful deepfake images. The harm is realized and ongoing, including violations of rights and harm to the community through non-consensual explicit content. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm. The article's focus is on the incident itself rather than just responses or broader context, so it is not Complementary Information. Therefore, the classification is AI Incident.

It's not just Taylor Swift: AI-generated porn is targeting women and kids all over the world

2024-01-27
Winston-Salem Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate manipulated pornographic images and videos without consent, which have been widely circulated, causing harm to individuals' privacy, dignity, and reputations. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized and ongoing, with examples including Taylor Swift and other women and girls worldwide. The AI system's use in creating and spreading these deepfakes is central to the harm described. Hence, the classification as AI Incident is appropriate.

Deepfake explicit images of Taylor Swift spread on social media. Her fans are fighting back

2024-01-27
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate explicit deepfake images without consent, which have been widely shared, causing harm to the individual and potentially to communities by spreading abusive content. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in creating the harmful content. Therefore, the classification is AI Incident.

U.S. Lawmakers Advocate Swift Legislation against Deepfakes Following Taylor Swift Incident

2024-01-27
Binance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images, which have been widely disseminated causing harm to the individual (Taylor Swift) and potentially to communities by spreading non-consensual explicit content. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities). The legislative and platform responses are complementary information but the core event is the realized harm from AI-generated deepfakes. Therefore, the classification is AI Incident.

Pornographic deepfakes: why artificial intelligence makes the fight impossible

2024-01-29
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative systems to create and disseminate non-consensual deepfake pornography, which constitutes a violation of human rights and causes harm to individuals and communities. The harm is realized and ongoing, as the content is actively spreading and affecting victims. The AI system's role is pivotal in generating these realistic fake images and videos, making this an AI Incident under the framework. The article also discusses legal and societal responses, but the primary focus is on the harm caused by the AI-generated content.

The pornographic deepfakes targeting Taylor Swift raise awareness of the dangers of AI

2024-01-29
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread sharing of AI-generated deepfake pornographic images targeting a specific individual, Taylor Swift. The use of AI tools to produce these images constitutes an AI system's involvement. The harm includes violation of personal rights and reputational damage, which falls under violations of human rights and harm to communities. The event has already occurred with significant impact, qualifying it as an AI Incident rather than a hazard or complementary information.

Widespread outrage after the spread of fake pornographic images of Taylor Swift

2024-01-26
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create false pornographic images of Taylor Swift, which were widely viewed and shared, causing harm to the individual and potentially to communities by spreading non-consensual, degrading content. This meets the criteria for an AI Incident as the AI system's use directly led to harm (violation of rights and harm to communities).

Taylor Swift targeted by fake pornographic images, the "Swifties" community to the rescue

2024-01-26
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated pornographic images of a real person without consent, which is a violation of rights and personal dignity, fitting the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the images have been widely viewed and shared, causing reputational and psychological harm. The AI system's use in generating these images is central to the incident. Therefore, this is classified as an AI Incident.

Social media: fake pornographic images of Taylor Swift spark outrage

2024-01-26
Ouest France
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and distribute non-consensual deepfake pornographic images, which directly violates the rights of the individual depicted and causes social harm. The AI system's use has directly led to harm (violation of rights and harm to communities). Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and the AI system's role is pivotal.

Fake pornographic images of Taylor Swift spark widespread outrage

2024-01-26
France 24
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create non-consensual deepfake pornographic images, which have been widely disseminated and viewed, causing harm to the individual targeted (Taylor Swift) and potentially to other women as well. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article also discusses societal and regulatory responses, but the primary focus is on the harm caused by the AI-generated content.

X blocks "Taylor Swift" searches because of deepfakes

2024-01-29
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images that have been widely disseminated, causing harm to Taylor Swift's rights and reputation, which fits the definition of harm to individuals and communities. The AI system's use (deepfake generation) directly led to this harm. The platform's response to block searches is a mitigation measure but does not negate the incident. Hence, this is an AI Incident due to realized harm caused by AI-generated content.

Taylor Swift: fake pornographic images set X ablaze

2024-01-26
20minutes
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated deepfake video causing reputational harm and non-consensual explicit content dissemination, which is a violation of rights and harm to the community. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The widespread sharing and delayed moderation exacerbated the harm. The involvement of AI in generating the content and the resulting harm to the individual and community is clear and direct, justifying classification as an AI Incident rather than a hazard or complementary information.

"You are sick": fake porn images of Taylor Swift spark widespread outrage

2024-01-26
Le Parisien
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the images were created using generative AI and were widely shared, causing harm to the individual depicted and provoking public indignation. The AI system's use directly led to the creation and spread of harmful content, fitting the definition of an AI Incident due to violation of rights and harm to communities.

In the United States, will the Taylor Swift deepfakes trigger a change in the law?

2024-01-29
Franceinfo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images and audio that have been widely shared and have caused harm to the individuals depicted and to the broader community through misinformation and political interference. The harms include reputational damage, spread of false information, and potential influence on democratic processes. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident. The involvement of AI in generating and disseminating these deepfakes is clear and central to the event.

Faced with the massive spread of fake pornographic images of Taylor Swift, the social network X removes "all identified images"

2024-01-26
Franceinfo
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and widespread sharing of fake pornographic images generated by AI, which directly harms the individual depicted and contributes to online harassment and violation of rights. The AI system's use in generating these images is central to the harm. The harm is realized, not just potential, as millions have viewed the images and the platform had to intervene. This fits the definition of an AI Incident involving violations of rights and harm to communities caused by AI-generated content.

Fake pornographic images of Taylor Swift spark outrage

2024-01-26
La Presse.ca
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI to create fake pornographic images, which have been widely shared and caused public outrage. This constitutes a violation of rights and harm to the individual depicted, as well as harm to communities by spreading degrading content. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. The article describes realized harm, not just potential harm, so it is not an AI Hazard or Complementary Information.

Taylor Swift: searches on X disabled after the spread of fake pornographic images

2024-01-28
BFMTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI systems creating fake pornographic images without consent, which constitutes a violation of personal rights and causes harm to the individual and communities. The harm is realized as the images were widely viewed and caused public indignation. The platform's response to remove content and disable searches confirms the harm's materialization. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.

AI: fake images of Taylor Swift force X to block searches for the singer

2024-01-28
LesEchos.fr
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images that have been widely shared, causing harm to Taylor Swift's reputation and privacy, which constitutes a violation of rights and harm to the community. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but a realized harm, and the social media platform's intervention and government response further confirm the incident's significance.

Victim of a porn deepfake, Taylor Swift could count on a massive reaction from her fans

2024-01-26
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a deepfake pornographic video without consent, which is a violation of rights and causes harm to the individual and community. The video was widely disseminated, leading to reputational and emotional harm. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The platform's inadequate moderation further contributed to the harm. Hence, this is not merely a potential hazard or complementary information but a realized incident involving AI-generated content causing harm.

Fake porn images of Taylor Swift: US lawmakers want to criminalize deepfakes

2024-01-29
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and spread deepfake content that directly causes harm to individuals (emotional, reputational) and disproportionately affects women, which fits the definition of an AI Incident. The article reports on actual harm caused by the AI-generated content and the societal response, but the primary focus is on the harm caused by the AI system's outputs, not just the response, so it is classified as an AI Incident rather than Complementary Information.

Taylor Swift targeted by "disgusting" AI-generated deepfakes

2024-01-26
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated deepfake images that are pornographic and non-consensual, causing harm to the individual depicted and distress to communities. The AI system's use in generating these images is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized, not just potential, as the images have already spread on the platform.

Deepfake: will Taylor Swift's fans put an end to the scourge of fake pornographic images?

2024-01-28
Courrier international
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are sexually explicit and non-consensual, which have been widely disseminated and caused real harm to the individual targeted and to communities, particularly women. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The involvement of AI in creating these images is clear, and the harm is realized, not just potential. Although there is mention of legal and societal responses, the main subject is the harm caused by the AI-generated deepfakes, making this an AI Incident rather than Complementary Information or AI Hazard.

Publication of fake explicit photos | X suspends searches for Taylor Swift

2024-01-29
La Presse.ca
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake explicit images (deepfakes) of Taylor Swift being widely circulated, which is a direct use of AI systems to create harmful content. This has led to harm in terms of violation of personal rights and reputational damage, fulfilling the criteria for an AI Incident. The platform's blocking of searches is a response to this harm but does not negate the incident itself. Therefore, this is classified as an AI Incident due to realized harm caused by AI-generated content.

Taylor Swift: pornographic photos of the singer circulated without her knowledge

2024-01-26
DH.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images (ultrarealistic, pornographic) of a real person being spread without consent, which is a clear violation of personal rights and privacy. The harm is realized as the images have gone viral, causing reputational and emotional harm to Taylor Swift. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community).

AI-generated porn images of Taylor Swift on X

2024-01-25
Radio Canada
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating non-consensual pornographic images, which is a clear violation of human rights and personal dignity. The harm is actual and ongoing, as the images have been widely viewed and shared, causing reputational and emotional harm to Taylor Swift. The AI system's use in creating these images and their dissemination on a social media platform directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and breach of obligations intended to protect fundamental rights.

X temporarily restricts searches for Taylor Swift

2024-01-29
Radio Canada
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate explicit, non-consensual images of Taylor Swift, which were widely disseminated on X, causing harm to her privacy and dignity, constituting a violation of rights. The platform's temporary blocking of searches is a response to this AI-driven harm. Since the AI-generated content directly led to harm (violation of rights and harm to community standards), this qualifies as an AI Incident under the definitions provided.

After the spread of deepfakes, results for "Taylor Swift" are restricted on X

2024-01-29
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images, which have been widely disseminated, causing harm to Taylor Swift's personal rights and potentially to the community by spreading harmful misinformation and non-consensual explicit content. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm. The platform's mitigation measures and legal actions are responses to this incident, but the core event is the realized harm from AI-generated deepfakes.

Taylor Swift's fans mobilize against AI-generated porn photos of the star

2024-01-26
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit deepfake content without consent, which constitutes a violation of personal rights and privacy, a recognized form of harm under the AI Incident definition (c). The harm is realized as the content has been widely shared and viewed, causing reputational and personal harm to Taylor Swift. The involvement of AI in creating the content and the resulting harm meets the criteria for an AI Incident rather than a hazard or complementary information. The platform's moderation and legal responses are reactions to the incident, not the main focus of the article.

AI-created pornographic montages of Taylor Swift cause a scandal

2024-01-27
Le Point.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a clear use of AI systems to create manipulated media. The harm is realized as the non-consensual sexualized images violate Taylor Swift's rights and cause reputational and emotional harm, fitting the definition of harm to individuals and communities. The widespread circulation and platform policy violations confirm the direct link between AI use and harm. Hence, this is an AI Incident rather than a hazard or complementary information.

The Taylor Swift affair: the proliferation of "deepfakes" is deemed "alarming and terrible". How can they be fought?

2024-01-29
Le Temps
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that can cause significant harm by spreading misinformation, violating privacy, and potentially damaging reputations. The article highlights the massive spread of such AI-generated content involving a public figure and minors, indicating realized harm through misinformation and potential rights violations. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm to communities and violations of rights.

Wave of outrage following the publication of pornographic deepfakes of Taylor Swift

2024-01-26
rts.ch
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake pornographic images, which are then widely shared, causing harm to the individuals depicted (violation of rights and harassment). The harm is realized and ongoing, as the images have been viewed millions of times and have caused public indignation and concern from authorities. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities (harassment and reputational damage).

Taylor Swift victim of AI: "you are sick", fake porn images spark widespread outrage

2024-01-27
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and distribute fake pornographic images of a public figure, Taylor Swift, without her consent. This constitutes a violation of rights and causes harm to the individual and communities targeted by such harassment. The harm is direct and realized, as the images have been widely viewed and shared, leading to public and political outcry. The AI system's role is pivotal as it enabled the creation of these realistic fake images. Hence, this is classified as an AI Incident.

Taylor Swift targeted by fake AI-generated porn images, the United States outraged

2024-01-27
Libération
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI generative models to create fake pornographic images (deepfakes) of Taylor Swift, which were widely shared on social media platforms. This constitutes a violation of rights and harm to the individual and community, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the images were viewed millions of times and caused public outrage and calls for legal action. Therefore, this event is classified as an AI Incident.

X (Twitter): you cannot search for Taylor Swift, and here is why

2024-01-29
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake explicit images (deepfakes) of a person, which have been widely disseminated, causing harm to the individual's privacy and potentially reputational harm. The platform's blocking of search functionality is a response to this harm. The AI system's use in creating and spreading false content that harms a person's rights and safety fits the definition of an AI Incident, as the harm has materialized and is directly linked to the AI-generated content.

Fake sexual photos of Taylor Swift circulate on the web

2024-01-26
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake sexual images of Taylor Swift being circulated widely, which is a direct violation of her rights and dignity, thus constituting harm under the definition of an AI Incident. The AI system's use in generating these images directly led to the harm. The harm is realized, not just potential, as the images have been viewed millions of times and shared extensively. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Fake sexual photos: impossible to search for Taylor Swift on X

2024-01-28
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated pornographic images of Taylor Swift being circulated on X, causing harm through misinformation and violation of personal rights. The AI system's outputs directly led to harm by spreading manipulated media that can confuse and harm the individual and community. The platform's disabling of search is a response to this harm. Therefore, this is an AI Incident due to realized harm from AI-generated content violating rights and causing reputational and emotional harm.

Taylor Swift: AI-created porn images of the singer, viewed more than 47 million times, should push lawmakers toward legislation

2024-01-27
CNEWS
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating realistic but fake content. The harm is direct and realized: non-consensual pornography violates personal rights and causes reputational and emotional harm. The large scale of circulation (over 47 million views) and the political reaction underline the significance of the harm. Therefore, this qualifies as an AI Incident under the definition of violations of human rights and harm to communities caused by AI systems.

Here is why it is now impossible to search for "Taylor Swift" on X

2024-01-28
Soirmag
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) of a person, which have been widely disseminated causing harm to the individual's privacy and reputation. This constitutes a violation of rights and harm to the individual, fitting the AI Incident category. The AI system's use in generating these images directly led to the harm. The platform's actions are responses to the incident but do not change the classification of the event as an AI Incident.

Taylor Swift targeted by pornographic deepfakes, her fans react en masse: "Horrible and inexcusable"

2024-01-26
Soirmag
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake images depicting Taylor Swift in pornographic contexts without her consent. The dissemination of these images on social media platforms has caused significant harm to her personal rights and emotional well-being. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to an individual. The presence of AI is clear (Microsoft Designer used to generate images), and the harm is realized, not just potential. Therefore, this is classified as an AI Incident.

Fake pornographic images of Taylor Swift spark widespread outrage

2024-01-26
La Croix
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create non-consensual pornographic deepfake images of a public figure, Taylor Swift. These images have been widely shared and viewed, causing harm through violation of privacy, harassment, and reputational damage. The harm is direct and realized, as the images have been publicly disseminated and have led to public indignation and calls for legal action. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article also references the broader societal impact and regulatory concerns, but the primary focus is on the realized harm caused by the AI-generated content.

"Protect Taylor Swift": AI-generated photos show the singer naked on X, her fans rush to her aid

2024-01-26
midilibre.fr
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated explicit images of Taylor Swift being disseminated on a social media platform, causing harm to her personal rights and privacy. The AI system's use in generating these images and their distribution directly led to this harm. The incident involves violation of rights and potential legal action, fitting the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights.

Behind the Taylor Swift porn deepfakes, the impossible regulation of social media

2024-01-29
Journal du Geek
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems capable of generating deepfake images, which are explicitly mentioned as the cause of the non-consensual pornographic content. This content has been widely disseminated, causing harm to the individual's rights and dignity, which qualifies as a violation of human rights under the framework. The harm is realized and ongoing, not merely potential, making this an AI Incident. The article also touches on regulatory and platform moderation challenges, but the primary focus is on the harm caused by the AI-generated content.

Artificial intelligence: fake pornographic images of Taylor Swift spark outrage

2024-01-27
RTL.fr
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated fake pornographic images, which is a direct violation of individual rights and causes harm to the targeted person and communities. The AI system's use in generating these images and their dissemination on social media platforms directly led to harm, fulfilling the criteria for an AI Incident. The harm includes violation of privacy, reputational damage, and online harassment, which align with violations of human rights and harm to communities as defined. Therefore, this event qualifies as an AI Incident.

Can Taylor Swift defeat pornographic deepfakes?

2024-01-29
RTL.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and distribution of AI-generated deepfake pornographic content targeting Taylor Swift, which constitutes a violation of rights and causes harm to the individual and communities. The AI system's use in generating these realistic fake images is central to the harm. The widespread dissemination and platform moderation issues further confirm the direct involvement of AI in causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Microsoft's CEO calls on the tech industry to "act" after AI-generated porn photos of Taylor Swift, some of which were generated by a Microsoft tool

2024-01-29
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Microsoft's Designer) to generate harmful deepfake images, which are non-consensual and pornographic, thus violating human rights and causing harm to the individual and communities. The harm is realized and ongoing, as the images have been widely disseminated and caused alarm at the highest levels, including the White House. The misuse of the AI tool and the failure of safeguards to prevent such generation constitute direct involvement of AI in causing harm. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm to rights and communities due to AI-generated content.

Fake porn images of Taylor Swift spark widespread outrage

2024-01-27
Le Matin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems creating fake pornographic images (deepfakes) of a person without consent. The widespread sharing of these images constitutes a violation of personal rights and causes harm to the individual and communities (harassment, reputational damage). The harm is realized and ongoing, as the images were viewed millions of times before removal. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities directly linked to the use of AI systems.

AI-generated porn images of Taylor Swift anger fans

2024-01-27
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated pornographic images of Taylor Swift were created and shared without her consent, causing harm to her rights and distress to her fan community. The AI system (Microsoft Designer) was used in the development and misuse phase to generate these images. The harm is direct and realized, including violation of privacy and intellectual property rights, and harm to the community through the spread of harmful content. The involvement of AI in generating deepfake images that cause reputational and emotional harm fits the definition of an AI Incident under violations of human rights and harm to communities.

Where do all these nude photos of Taylor Swift circulating on X come from?

2024-01-26
La Libre.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake pornographic images of a public figure, which have been widely shared and viewed, causing harm to the individual's rights and reputation. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the community. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. It is not Complementary Information since the main focus is on the incident itself, nor is it Unrelated as AI is central to the event.
Thumbnail Image

How fake pornographic images made Taylor Swift disappear from Twitter

2024-01-28
La Libre.be
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation of deepfake images using AI, which were spread widely and caused harm by falsely depicting a public figure in a pornographic context. The platform's response to remove the images and penalize accounts confirms the harm occurred. The AI system's use in generating these images directly led to the harm, fitting the definition of an AI Incident due to violation of rights and harm to communities.
Thumbnail Image

Why did X (formerly Twitter) block searches for Taylor Swift?

2024-01-29
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake pornographic images, which is a direct misuse of AI technology causing harm to an individual (Taylor Swift) and the community. The harm includes violation of rights and dissemination of abusive content. The platform's blocking of searches and account removals are responses to this AI-driven harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm.
Thumbnail Image

X blocks Taylor Swift searches over fake pornographic images

2024-01-29
Numerama.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake pornographic images (deepfakes) of Taylor Swift, which are non-consensual and harmful, thus constituting a violation of rights. The social media platform's blocking of searches and removal of accounts sharing such content is a direct response to this harm. The AI system's role in creating these images is central to the incident, and the harm is realized (not just potential). Therefore, this qualifies as an AI Incident due to violation of rights caused by AI-generated harmful content.
Thumbnail Image

The White House wants a law to stop the fake pornographic images of Taylor Swift

2024-01-27
Numerama.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated pornographic images of a real person being spread on social media, causing harm through harassment and abuse, which fits the definition of an AI Incident under violations of human rights and harm to communities. The AI system's use in generating these images is central to the harm caused. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Taylor Swift falls victim to a pornographic deepfake; her fans rush to her aid

2024-01-27
Paris Match
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake pornographic images without consent, which is a direct violation of individual rights and constitutes harm. The spread of these images on social media platforms has caused significant distress and is recognized as exploitation. The involvement of AI in creating these images and the resulting harm to the individual and community clearly fits the definition of an AI Incident. Although there is mention of legislative and regulatory responses, the main focus is on the harm already caused by the AI-generated content, not just potential future harm or complementary information.
Thumbnail Image

Taylor Swift searches blocked on X - CNET France

2024-01-29
CNET France
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are outputs of an AI system. The dissemination of these images caused harm by violating the rights and dignity of Taylor Swift, a clear violation of human rights and harm to communities. The harm is realized as the images were viewed millions of times and caused public alarm, including official concern from the White House. The AI system's use in generating these images and their spread on the platform directly led to the harm described. Hence, this is an AI Incident.
Thumbnail Image

Fake pornographic images of Taylor Swift spark widespread outrage

2024-01-26
RTL Info
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create false pornographic images, which were widely shared on social media platforms. This constitutes a violation of personal rights and can be considered harm to the individual and communities. Since the harm (distribution of fake explicit images) has already occurred, this qualifies as an AI Incident under the category of violations of human rights or breach of obligations protecting fundamental rights.
Thumbnail Image

Fake pornographic images of Taylor Swift spark widespread outrage

2024-01-26
RTL Info
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated fake pornographic images of Taylor Swift is a direct use of AI systems (deepfake technology) that has caused harm by violating personal rights and contributing to online harassment. The article explicitly mentions the role of AI in making such images easier and cheaper to produce, and the harm is ongoing as the images circulate on social media platforms. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities).
Thumbnail Image

Fake pornographic images of Taylor Swift spark widespread outrage in the United States

2024-01-26
Le Telegramme
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are non-consensual and pornographic, causing harm to the individual (Taylor Swift) and raising broader societal concerns about harassment and rights violations. The harm is realized as the images are circulating and causing indignation and calls for legal action. The AI system's use in generating these images directly leads to violations of rights and harm to the individual and communities, fitting the definition of an AI Incident. The article also discusses the role of social media platforms and the need for regulation, but the primary focus is on the harm caused by the AI-generated content, not just on responses or policy discussions, so it is not merely Complementary Information.
Thumbnail Image

Fake pornographic images of Taylor Swift spark outrage on social media

2024-01-27
LaProvence.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI was used to create fake pornographic images (deepfakes) of Taylor Swift, which were widely shared on social media, causing harm to the individual and distress to communities. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images were viewed millions of times and caused public indignation. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Fake pornographic images of Taylor Swift spark widespread outrage

2024-01-26
Le Soleil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake pornographic images, which have been widely disseminated and caused harm to the individual depicted and to societal norms. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The harm is realized, not just potential, as the images have been viewed millions of times and caused public indignation. Therefore, this is classified as an AI Incident.
Thumbnail Image

X caught off guard by the proliferation of pornographic Taylor Swift deepfakes

2024-01-29
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake images, which are AI-generated synthetic media. The harm is realized as the non-consensual use of Taylor Swift's likeness in pornographic deepfakes constitutes a violation of rights and causes reputational and psychological harm. The platform's struggle to control the spread and the political alarm further confirm the significance of the harm. Therefore, this is an AI Incident as the AI system's use has directly led to harm.
Thumbnail Image

X flooded with fake AI-generated explicit images of Taylor Swift!

2024-01-27
Fredzone
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems. The harm is realized as the images are abusive, exploitative, and non-consensual, violating the rights of Taylor Swift and causing harm to communities by spreading offensive content. The dissemination on a major social media platform with millions of views and reposts confirms the direct role of AI in causing harm. The event fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The ongoing investigation into the platform's role further supports the significance of the harm caused.
Thumbnail Image

Taylor Swift, victim of "abusive and offensive" pornographic deepfakes: her fans mobilize as moderation lags on the social network X

2024-01-26
Lavenir.net
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating deepfake images, which are non-consensual and offensive, causing harm to the individual’s rights and dignity. The dissemination of such content on a major social media platform and the delayed moderation response have directly led to harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community. The fans' response is complementary but does not change the classification.
Thumbnail Image

X suspends some Taylor Swift searches as fake explicit images spread | Entertainment

2024-01-29
News 24
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI image generation systems to create and disseminate non-consensual explicit deepfake images, which constitutes a violation of fundamental rights and harms the individual and community. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The platform's temporary suspension of certain searches is a response to this harm but does not negate the incident itself. Therefore, this event is classified as an AI Incident.
Thumbnail Image

2024-01-27
News 24
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (text-to-image generative AI) to create fake explicit images without consent, which have been widely shared, causing harm to the individual depicted and potentially to others. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article also discusses responses and legislative efforts, but the primary focus is on the realized harm caused by the AI-generated content, not just on the responses, so it is not merely Complementary Information.
Thumbnail Image

Sexualized without her knowledge through AI, Taylor Swift is once again rescued by her fans

2024-01-26
parismatch.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI was used to generate images of Taylor Swift in a sexualized manner without her consent. This constitutes a violation of her rights and causes harm to her reputation, which falls under harm to a person or group. The AI system's use in generating these images directly led to this harm, qualifying the event as an AI Incident.
Thumbnail Image

X (Twitter) blocks Taylor Swift searches after AI-made pornographic deepfakes

2024-01-29
KultureGeek
Why's our monitor labelling this an incident or hazard?
Deepfake pornography created by AI constitutes a violation of personal rights and can cause significant harm to the individual depicted and the community. The AI system's use here directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The platform's response is a mitigation effort but does not negate the fact that harm has occurred due to AI-generated content.
Thumbnail Image

The pornographic Taylor Swift deepfakes revive the debate over regulating the phenomenon - Next

2024-01-29
Next
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI models to create deepfake pornographic images, which have been widely viewed and shared, causing harm to the individual (Taylor Swift) and the broader community. The harm includes violation of rights and reputational damage, fitting the definition of harm to communities and violations of rights under the AI Incident framework. The AI system's use is central to the harm, as the deepfakes would not exist without the generative AI technology. Hence, this is an AI Incident.
Thumbnail Image

Fake pornographic images of Taylor Swift spark widespread outrage

2024-01-27
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative systems creating non-consensual deepfake pornographic images, which have been widely disseminated, causing harm to the individual's rights and reputational harm, as well as harm to communities through harassment and online abuse. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article also mentions the failure of content moderation on social media platforms, which contributed to the harm. Therefore, this is classified as an AI Incident.
Thumbnail Image

Fake explicit images of Taylor Swift: laws on artificial intelligence are needed, experts warn

2024-01-29
Noovo Info
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic deepfake images of a real person without consent, which is a direct violation of rights and causes harm through harassment and misinformation. The harm is realized as the images have been widely viewed and circulated before removal. The article also discusses the societal and legal responses, but the primary focus is on the harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Thumbnail Image

0

2024-01-29
developpez.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate harmful deepfake images, which have been widely disseminated, causing harm to the individual and potentially to communities through harassment and abuse. The misuse of Microsoft's AI tool to bypass safeguards and create non-consensual explicit images constitutes a violation of rights and harms communities. The involvement of AI in the creation and spread of these images is direct and central to the harm. The article also discusses regulatory responses and industry calls to action, but the primary focus is on the realized harm caused by AI misuse, fitting the definition of an AI Incident.
Thumbnail Image

Taylor Swift furious after falling victim to AI-modified pornographic photos, considers legal action

2024-01-27
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated pornographic images of Taylor Swift that were created and distributed without her consent, causing harm to her personal rights and reputation. The AI system's use in generating these images directly led to a violation of rights and harm to the individual, fitting the definition of an AI Incident. The dissemination on social media and the distress caused confirm the harm has occurred, not just a potential risk.
Thumbnail Image

X and Instagram block searches for Taylor Swift's name following the spread of deepfake pornographic photos

2024-01-28
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake pornographic images, which is a clear violation of human rights and legal protections. The AI system's use in producing these images has directly caused harm to Taylor Swift and potentially to the broader community by spreading harmful manipulated content. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI misuse.
Thumbnail Image

Lawmakers quickly push for a new law in the wake of Taylor Swift's AI porn photos

2024-01-28
suara.com
Why's our monitor labelling this an incident or hazard?
The article describes an AI system used to create deepfake pornographic images, which have been widely disseminated, causing harm to the individual depicted and potentially to the community by spreading misinformation and violating rights. The AI system's use directly led to harm (violation of rights and reputational harm). The political and legal responses are reactions to this incident, not the primary event. Therefore, this qualifies as an AI Incident under the framework because the AI-generated content has directly led to harm.
Thumbnail Image

Taylor Swift disappears from Twitter X after viral AI porn photos

2024-01-28
suara.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake image that combines real and fake photos to create a sexually explicit image of Taylor Swift. This AI-generated content has been widely viewed and liked, causing reputational and privacy harm to the individual, which falls under harm to communities and violation of rights. The spread of such deepfake content is a direct consequence of AI use, and the platform's response confirms the harm occurred. Hence, it meets the criteria for an AI Incident.
Thumbnail Image

Experts explain how to avoid AI pornography like the Taylor Swift case

2024-01-29
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate deepfake pornographic content, which has directly led to harms including violations of privacy, reputational harm, and extortion. These harms affect individuals' rights and well-being, fulfilling the criteria for an AI Incident. The discussion of real cases, such as the Taylor Swift deepfake and Twitch streamer incidents, confirms that harm has occurred, not just potential harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Twitter X admits blocking Taylor Swift over AI deepfake pornographic photos

2024-01-29
suara.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is an AI system producing harmful outputs. The harm includes violation of privacy, reputational damage, and exposure to non-consensual pornography, which are clear harms to individuals and communities. The platform's blocking of search terms and removal of content is a response to an ongoing AI Incident. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs.
Thumbnail Image

Microsoft chief responds to Taylor Swift AI porn images: "alarming and terrible"

2024-01-29
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated non-consensual pornographic images, which is a clear violation of individual rights and privacy, thus constituting harm. The AI system (Microsoft Designer) is implicated as the tool used to produce these images, either directly or through its vulnerabilities being exploited. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual). The CEO's comments and the reported technical weaknesses further confirm the AI system's role in the incident.
Thumbnail Image

X takes action after explicit Taylor Swift deepfake photos go viral; talk of AI regulation emerges in the US - Banjarmasinpost.co.id

2024-01-29
Banjarmasin Post
Why's our monitor labelling this an incident or hazard?
The event describes the circulation of AI-generated deepfake images that are inappropriate and non-consensual, which constitutes a violation of personal rights and harms the individual depicted. The AI system's use in generating and disseminating these images directly led to harm. The platform's response and legislative discussions are complementary information but the core event is an AI Incident due to realized harm from AI misuse. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm caused by AI-generated content violating rights and causing reputational and emotional harm.
Thumbnail Image

Over the Taylor Swift deepfake porn photos, US Congress calls for a new law : Okezone techno

2024-01-28
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images, which have been viewed millions of times and have caused harm to the person depicted (Taylor Swift) and potentially others. The harm includes violations of privacy and reputational damage, which fall under violations of human rights and harm to communities. The political calls for new laws and the active removal of such content by platforms further confirm the recognition of harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.
Thumbnail Image

Uproar over explicit photos: X and Meta block Taylor Swift searches

2024-01-29
detikInet
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread sharing of AI-generated deepfake explicit images, which have caused reputational harm to Taylor Swift. The involvement of AI systems in generating these images is explicit, and the harm (reputational damage and potential rights violations) is realized and ongoing. The platforms' responses to block and remove content confirm the harm's seriousness. Hence, this is an AI Incident due to direct harm caused by AI-generated content.
Thumbnail Image

In the wake of AI-fabricated explicit photos of Taylor Swift, politicians push for new rules

2024-01-28
detikInet
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread sharing of AI-generated deepfake images that caused reputational and emotional harm to Taylor Swift, a clear violation of rights and harm to an individual. The AI system's use in generating these images is central to the incident. The harm is realized, not just potential, as the images were viewed millions of times and caused public outcry. Therefore, this qualifies as an AI Incident. The political response and calls for new laws are complementary information but do not change the primary classification.
Thumbnail Image

Uproar as AI-fabricated explicit photos of Taylor Swift circulate

2024-01-26
detikInet
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create realistic but fake explicit images of Taylor Swift without her consent, which have been widely disseminated, causing reputational and emotional harm (harm to person and community). The AI's role is pivotal as the images are AI-generated, and the harm is direct and realized. The article also discusses the failure of AI-based content moderation systems to prevent the spread, reinforcing the AI system's involvement in the harm. Hence, this is an AI Incident.
Thumbnail Image

Taylor Swift AI porn images viewed millions of times; White House shocked

2024-01-29
detikInet
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate deepfake images, which are non-consensual and pornographic, causing harm to the individual's reputation and emotional well-being. The widespread sharing of these images on social media platforms constitutes a violation of rights and harm to the community. The involvement of the White House and calls for legislation further underscore the recognition of harm caused. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.
Thumbnail Image

A victim of deepfake abuse? Kominfo: just report it!

2024-01-29
detikInet
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that manipulates images and videos to create realistic but fake content. The article reports actual incidents where deepfake AI-generated content has been used to harass and defame individuals, causing harm to their rights and reputations. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The discussion of legal recourse and the call for victims to report further confirms that the harm has materialized and is recognized.
Thumbnail Image

Causing a stir in the US: where did the Taylor Swift AI porn images come from?

2024-01-28
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI text-to-image generators to produce and spread non-consensual pornographic deepfake images of Taylor Swift, which is a direct violation of personal rights and privacy. The AI system's misuse has directly led to harm by disseminating harmful content that targets an individual, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The involvement of AI in generating and spreading the images is clear, and the harm is realized, not just potential. The discussion of legislative responses further confirms the recognition of harm caused by this AI misuse.
Thumbnail Image

Taylor Swift's name disappears from X search; here's why

2024-01-29
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images (an AI system generating manipulated content) that have been widely disseminated, causing harm to the individual depicted and potentially to the community by spreading misinformation and explicit content. The platform's action to block search terms is a response to this harm. Since the AI system's use directly led to the spread of harmful content, this is an AI Incident under the framework, specifically under harm to communities and violation of rights. The harm is realized, not just potential, so it is not merely a hazard or complementary information.
Thumbnail Image

Angry, Taylor Swift weighs legal action after falling victim to AI porn

2024-01-26
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate explicit fake images (deepfake pornography) without consent, which constitutes a violation of personal rights and exploitation, fitting the definition of harm to individuals. The harm has already occurred as the images were circulated online, causing distress and reputational damage. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content. The discussion of potential legal and regulatory responses is complementary but does not change the primary classification.
Thumbnail Image

Taylor Swift falls victim to AI; even the White House steps in

2024-01-27
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images causing harm to Taylor Swift through non-consensual pornography, which is a violation of rights and harms the individual and community. The widespread sharing of these images on social media platforms and the governmental response confirm that harm has occurred. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm.
Thumbnail Image

AI-made explicit photos of Taylor Swift go viral: here's the chronology

2024-01-29
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create explicit images without consent, which have been widely disseminated, causing harm to the individual and the community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The AI system's use directly led to the harm through the generation and spread of these images. The event is not merely a potential risk or complementary information but a realized harm caused by AI misuse.
Thumbnail Image

Uproar as Taylor Swift's name disappears from Twitter X

2024-01-29
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is a product of AI systems creating realistic but fake images. The deepfake content is sexually explicit and falsely depicts Taylor Swift, causing reputational and emotional harm, which falls under harm to communities and violation of rights. The widespread dissemination (27 million views) confirms the harm is realized, not just potential. The platform's slow response to removing the content further contributes to the harm. Hence, this is an AI Incident as the AI system's use directly led to harm.
Thumbnail Image

X blocks searches for "Taylor Swift" and "Taylor Swift AI"

2024-01-29
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems. The harm is realized and ongoing, as the images were widely viewed and shared, causing reputational and privacy harm to Taylor Swift. The platform's response to block searches and remove content confirms the recognition of harm. The use of AI to create non-consensual explicit images is a clear violation of rights and thus fits the definition of an AI Incident. The article also mentions potential legal actions and calls for regulation, but the primary event is the harm caused by the AI-generated content already disseminated.
Thumbnail Image

Sensual AI photos of Taylor Swift cause a stir; she threatens to take legal action

2024-01-28
intipseleb.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to create manipulated images of Taylor Swift of a sensual nature, which were then spread online. This use of AI directly led to harm in the form of a violation of personal rights and reputational damage. The incident involves the misuse of AI-generated content to harm an individual, fitting the definition of an AI Incident due to a violation of rights and harm to the person.
Thumbnail Image

X restricts "Taylor Swift" searches to slow the spread of fake images

2024-01-29
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The description indicates that X is using automated or AI-driven filtering to block certain search terms to limit the spread of fake images. This is a use of AI systems in content moderation. However, there is no indication of harm caused by the AI system's malfunction or misuse, nor is there a credible risk of harm arising from this action. The event is primarily about a platform's response to misinformation risks, which fits the definition of Complementary Information as it provides context on governance or societal responses to AI-related issues.

Indecent AI Deepfakes of Taylor Swift Flood Social Media Platform X, Drawing Condemnation from All Sides - Jawa Pos

2024-01-27
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deepfake content, which is a form of AI-generated synthetic media. The content is sexually explicit and unauthorized, constituting harm to the individual’s rights and reputational harm, which falls under violations of human rights and harm to communities. The widespread dissemination and millions of views confirm the harm is realized, not just potential. The involvement of AI in generating the content and its role in causing harm meets the criteria for an AI Incident.

After Explicit Deepfake Images Go Viral, Taylor Swift's Name No Longer Appears in Search on Social Media Platform X - Jawa Pos

2024-01-28
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated explicit deepfake images, which directly violate Taylor Swift's rights and cause harm to her reputation and privacy. The AI system's role in generating these images is central to the incident. The harm is realized, not just potential, as the images have been widely viewed and shared. Therefore, this qualifies as an AI Incident due to violation of rights and harm to the individual and community.

The Different Reasons the Names Mahfud and Taylor Swift Disappeared from Search on X - Teknologi Katadata.co.id

2024-01-29
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images, which are harmful and violate personal rights. The spread of these images on the platform caused real harm, and the platform's action to block searches is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (violation of rights and harm to community reputation).

Taylor Swift Falls Victim to AI "Deepfakes"; Legal Team to File Suit

2024-01-29
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create deepfake images that are pornographic and falsely represent Taylor Swift, causing harm to her privacy and dignity. This is a direct harm caused by the AI system's outputs. The spread of such content on social media platforms and the response by the affected party indicate realized harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the individual and community.

AI-Generated Porn Photos Circulate; Taylor Swift Searches Blocked on Platform X (Twitter)

2024-01-29
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) that have been widely viewed and caused concern, prompting platform intervention. The harm is realized as the fake images spread misinformation and cause reputational and personal harm to the individual depicted. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violation of rights. Blocking the search is a response to this harm, but the incident itself is the circulation of harmful AI-generated content.

Taylor Swift's Account on Platform X Unsearchable in the Wake of the AI Photo Scandal

2024-01-28
SINDOnews.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread sharing of AI-generated deepfake images that are sexually explicit, directly harming Taylor Swift's reputation and privacy. The platforms' responses to block search terms and suspend accounts indicate recognition of the harm caused. The harm is realized, not just potential, and the AI system's use in generating the deepfakes is central to the incident. This fits the definition of an AI Incident due to violation of rights and harm to the individual and community through dissemination of harmful AI-generated content.

AI Images Resembling Taylor Swift Go Viral on Porn Sites; Swifties Furious

2024-01-26
SINDOnews.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are pornographic and non-consensual, targeting a real person, Taylor Swift. The dissemination of such content causes harm to the individual's rights and dignity, and the distress and outrage among fans indicate social harm. The AI system's use in generating and spreading these images directly leads to these harms, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The mention of legal frameworks and takedown actions further supports the recognition of actual harm rather than potential harm.

X Blocks "Taylor Swift" Keyword Searches After AI Deepfake Photos Circulate | Republika Online

2024-01-29
Republika Online
Why's our monitor labelling this an incident or hazard?
The event describes the circulation of AI-generated deepfake pornographic images, which are non-consensual and have caused harm to the individual depicted (Taylor Swift) and potentially to the community by spreading harmful content. The AI system's use in generating these images directly led to this harm. The platform's response to block searches and remove content is a mitigation effort. Since the harm has already occurred due to the AI-generated content, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and harm to communities.

Victim of AI "Deepfakes", Taylor Swift Prepares a Lawsuit

2024-01-29
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake AI) used to create and distribute harmful pornographic images without consent. This has directly led to harm in terms of violation of privacy and rights, as well as reputational damage to Taylor Swift. The circulation of such content on social media platforms constitutes a breach of fundamental rights and causes harm to the individual and communities. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to significant harm.

X (Twitter) Blocks "Taylor Swift" Searches in the Wake of the AI Deepfake Case

2024-01-29
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are manipulated media generated by AI. The dissemination of these images constitutes a violation of privacy and potentially other rights, fulfilling the criteria for harm under violations of human rights or breach of obligations protecting fundamental rights. The social media platform's blocking of search terms is a response to this harm. The preparation of legal action further confirms the recognition of harm caused by AI-generated content. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated deepfake content.

Taylor Swift Deepfakes Prompt US Lawmakers to Draft New Legislation

2024-01-28
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated manipulated images or videos that can cause significant harm, including reputational damage and emotional distress. The article reports that deepfake images of Taylor Swift have been widely viewed and spread, causing harm. This is a direct harm caused by the use of an AI system (deepfake generation). Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content.

Taylor Swift Disappears from X's Search Feature After Deepfake Photo Uproar

2024-01-29
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake images of a public figure, which is a direct misuse of AI technology causing harm to the individual's reputation and potentially violating rights. The platform's temporary blocking of search results and active removal of such content show harm is occurring and being addressed. The involvement of lawmakers proposing legislation further supports the recognition of harm. Therefore, this qualifies as an AI Incident due to realized harm from AI misuse.

Amid the Taylor Swift Deepfake Case, Experts Reveal How to Avoid AI Pornography

2024-01-29
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic content without consent, which has directly led to harm to individuals' rights and reputations, including harassment and potential psychological harm. The article mentions specific harms such as non-consensual sharing and the difficulty in distinguishing real from fake content, which impacts victims significantly. The platform's response to remove such content after a delay further confirms the harm has occurred. Therefore, this is an AI Incident due to realized harm caused by AI-generated deepfake pornography.

Social Media Platform X Restricts Users Searching for Taylor Swift; Here's Why - Warta Tidore

2024-01-29
Warta Tidore
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) of Taylor Swift, which have spread widely on the platform, causing harm to the individual and potentially to the community by spreading misinformation and harmful content. The AI system's use in generating these images directly led to the harm, prompting the platform to restrict searches to mitigate further damage. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community).

Uproar over Indecent AI Photos of Taylor Swift Angers Her Fans - Zona Priangan

2024-01-25
Zona Priangan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images that are pornographic and non-consensual, which constitutes a violation of personal rights and harms the community of fans. The harm is realized as the images have already spread and caused distress. Therefore, this qualifies as an AI Incident due to violations of human rights and harm to communities caused by the AI-generated content.

Victim of AI Porn Photos, Taylor Swift Considers Legal Action

2024-01-28
Yoursay.id
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated images (deepfakes) that have been distributed online, causing harm to Taylor Swift's reputation and privacy. The use of AI to create non-consensual pornographic images is a clear violation of personal rights and can be classified as harm to the individual (a form of harm to persons and violation of rights). The harm has already materialized as the images were widely shared before removal, and the discussion of legal action confirms the recognition of harm. Hence, this is an AI Incident as per the definitions provided.

X Blocks Taylor Swift Searches After X-Rated Deepfake Photos of the Singer Go Viral - Impresi

2024-01-27
Impresi
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images of Taylor Swift, which are non-consensual and illegal, thus constituting a violation of rights (human rights and intellectual property). The platform's response to block searches and remove content indicates that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident because the AI-generated content has directly led to harm to the individual and community (reputational and privacy harm).

VIRAL! Her Photos Edited with AI and Deepfakes, Taylor Swift Reports the Perpetrators - Portal Pati

2024-01-27
Portal Pati
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and deepfake technology being used to create inappropriate images of Taylor Swift, which have spread online. This constitutes a violation of rights and harm to the individual involved. Since the AI system's use has directly led to this harm, it qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

Taylor Swift Falls Victim to Deepfake Porn; Fans Rally Behind 'Protect Taylor Swift' | Republika Online

2024-01-26
Republika Online
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake images that are pornographic and non-consensual, which is a clear violation of personal rights and privacy. The harm is realized as the images have been circulated widely, causing distress to Taylor Swift and her family. The AI system's role is pivotal as it was used to create the fake images. Therefore, this qualifies as an AI Incident under the category of violations of human rights and breach of obligations intended to protect fundamental rights.

Taylor Swift Scandal: Indecent AI Images Go Viral on the Internet - Zona Priangan

2024-01-26
Zona Priangan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images that are sexually explicit and offensive, which have been widely shared and caused harm to the subject's reputation and distress to the community. The AI system's use directly led to this harm, fitting the definition of an AI Incident under violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident.

The Scandal over the Spread of Fake Taylor Swift Images Shocks the World; Here Are the Steps the White House Is Taking - Flores Terkini

2024-01-29
Flores Terkini
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and disseminate fake explicit images of a real person, which constitutes a violation of rights and harm to the individual and community. The harm is realized as the images have been widely shared, causing reputational and emotional damage. The AI system's use directly led to this harm. The article also discusses platform responses and ongoing challenges, but the primary focus is on the incident of harm caused by AI-generated content. Therefore, this is classified as an AI Incident.

Reflecting on the Fake Taylor Swift Photo Case: The Impact of AI-Generated Images - Flores Terkini

2024-01-29
Flores Terkini
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images that have been actively spread, causing real harm to the individual’s privacy and reputation. The AI system's role in generating these images is explicit and central to the harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to the community (Taylor Swift and potentially others).

The Social Pulse: Fake AI Photos of Taylor Swift Cause Trouble

2024-01-26
Yahoo!
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are created by AI systems specialized in generating fake nude images of celebrities. The widespread sharing and sexualized nature of these images constitute harm to the individual and the community, including violations of rights and emotional harm. The AI system's use directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to a person and the community.

AI-Generated Nude Photos of Taylor Swift Spark Outrage

2024-01-26
Yahoo!
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems that manipulate and generate realistic but fake content. The harm is realized as the images were publicly shared and viewed millions of times, constituting a violation of rights and causing reputational and emotional harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual and community). The political and social reactions further underscore the significance of the harm caused.

Musk's Platform X Allows Taylor Swift Searches Again

2024-01-30
GMX
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI image generators (Microsoft's 'Designer') being used to create fake pornographic images of Taylor Swift, which were then disseminated on the platform X. This use of AI has directly caused harm by producing and spreading non-consensual intimate images, violating personal rights and legal protections. The harm is realized, not just potential, and the AI system's role is pivotal in generating the harmful content. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Nude Photo Scandal Surrounding Taylor Swift - Deepfakes Spark Outrage | "You Are Sick"

2024-01-26
T-online.de
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic synthetic images or videos. The creation and spread of non-consensual deepfake nude images directly violates the individual's rights and causes harm to the person targeted. The article highlights ongoing harm caused by AI-generated deepfakes, which is a clear AI Incident as it involves realized harm through the use of AI systems.

Deepfakes: X Takes Action Against Fake Nude Images of Taylor Swift

2024-01-29
Focus
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been used maliciously to create fake pornographic content of a real person, Taylor Swift. The dissemination of these images on a major social media platform caused harm to the individual and potentially to the community by spreading false and harmful content. The platform's intervention to remove the images and restrict search results confirms the recognition of harm. This meets the criteria for an AI Incident as the AI system's use directly led to violations of rights and harm to the community.

Fake Sex Photos of Taylor Swift Enrage Fans Worldwide

2024-01-26
Focus
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the images are AI-generated deepfakes, which are synthetic media created by AI systems. The spread of these images has caused harm to Taylor Swift's reputation and distress to her fans, which falls under harm to communities and violation of rights. The AI system's use directly led to this harm by generating and enabling the distribution of manipulated explicit content. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated deepfake images.

Deepfake Alarm: "X" Halts Searches for Taylor Swift

2024-01-29
Focus
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated using AI systems capable of creating realistic manipulated content. The spread of such images constitutes harm to the individual (Taylor Swift) and the community by disseminating false and pornographic content, which is a violation of rights and reputational harm. The platform's blocking of searches is a response to this harm. Since the AI-generated deepfakes have already circulated and caused harm, this qualifies as an AI Incident under the definition of harm to communities and violation of rights due to AI misuse.

Because of Deepfakes: X Blocks Searches for Taylor Swift

2024-01-29
Focus
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images causing harm by spreading non-consensual explicit content of a real person, Taylor Swift. This is a clear violation of personal rights and causes harm to the individual and the community. The AI system's use directly led to this harm. The platform's response to block search terms is a mitigation measure but does not negate the fact that harm has occurred. Hence, this is an AI Incident involving the use of AI systems to generate harmful content that has been disseminated widely, causing real harm.

Der Tag: AI Nude Images of Taylor Swift Outrage US Politicians

2024-01-26
N-tv
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of a celebrity, which were widely viewed before removal. The creation and spread of such non-consensual deepfake content is a direct violation of rights and causes harm to the individual and community. The involvement of AI in generating these images and the resulting harm meets the criteria for an AI Incident. The political response further underscores the recognition of harm caused by AI misuse.

Social Media: Musk's Platform X Allows Taylor Swift Searches Again

2024-01-30
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images, indicating the presence of AI systems generating harmful content. The platform's temporary blocking of search was a response to this misuse. However, the article does not describe any realized harm such as injury, rights violations, or significant community harm caused by the AI system's development, use, or malfunction. Nor does it indicate a credible risk of future harm beyond the current mitigation. The main focus is on the platform's operational response to AI-generated harmful content, which fits the definition of Complementary Information as it provides context and updates on managing AI-related challenges rather than reporting a new AI Incident or Hazard.

Deepfake Pornography: Don't fuck with Swifties

2024-01-26
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate deepfake images, which are non-consensual and harmful, constituting a violation of rights and harm to the community. The widespread dissemination of these images before removal indicates realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.

After the Block: X Allows Taylor Swift Searches Again - oe3.ORF.at

2024-01-30
oe3.ORF.at
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred here: the fake pornographic images are AI-generated deepfakes, and the platform's automated content moderation tools to block and monitor such content imply further AI involvement. The event combines the harmful spread of AI-generated fake images with the inadequacy of AI-assisted moderation, causing harm to the community and to individuals' rights (reputational harm and violation of privacy). Because the fake images have already been disseminated and caused harm, this qualifies as an AI Incident; the platform's response and ongoing monitoring are part of managing it.

X Takes Action Against Fake Nude Images of Taylor Swift - WELT

2024-01-29
DIE WELT
Why's our monitor labelling this an incident or hazard?
The article describes the creation and spread of AI-generated deepfake images that falsely depict Taylor Swift in pornographic poses. This is a clear case of AI-generated content causing harm through violation of personal rights and reputational damage. The platform's response to remove the content and restrict searches confirms the recognition of harm. Since the AI system's use directly led to this harm, it meets the criteria for an AI Incident under violations of rights and harm to communities or individuals.

Porn Deepfakes of Taylor Swift Go Viral on X

2024-01-27
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate deepfake images, which are non-consensual and pornographic, thus violating the individual's rights and causing harm to the community. The images were widely spread and viewed millions of times, indicating realized harm. The AI system's use directly led to this harm. Hence, this event meets the criteria for an AI Incident under violations of human rights and harm to communities.

X Blocks Search Queries for This Artist

2024-01-29
computerbild.de
Why's our monitor labelling this an incident or hazard?
AI-generated sexually explicit images of Taylor Swift were created and spread on the platform, constituting a violation of rights and harm to the individual. The AI system's use in generating such content directly led to this harm. The platform's blocking of search terms and deletion of images are responses to this AI Incident. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content and its dissemination.

Twitter/X Allows Taylor Swift Searches Again

2024-01-30
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The article mentions the platform's response to unauthorized images, likely AI-generated, but does not describe any realized harm such as injury, rights violations, or disruption. The focus is on monitoring and controlling content, which is a governance or operational update rather than an incident or hazard. Therefore, this is Complementary Information providing context on platform moderation and response to AI-related content issues.

AI Nude Photos of Taylor Swift Spark Outrage

2024-01-27
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to create nude images of a public figure without consent, which is a violation of rights and causes harm to the individual. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and reputational harm). The public and political reaction further supports the recognition of harm. Therefore, this event qualifies as an AI Incident.

Taylor Swift: AI-Generated Pornographic Images Trigger Political Debate

2024-01-28
heise online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexually explicit deepfake images of a real person without consent. The spread of these images has caused harm to the individual's rights and dignity, which fits the definition of an AI Incident under violations of human rights and harm to communities. The involvement of AI in generating and disseminating these images directly led to the harm described. Therefore, this is classified as an AI Incident.

Taylor Swift: Have You Found Fake Images of Yourself Online Yet?

2024-01-29
20 Minuten
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images causing harm by spreading manipulated content of a person without consent, which is a violation of personal rights and can cause significant reputational and emotional harm. This fits the definition of an AI Incident as the AI system's use directly led to harm. The platform's intervention is a response but does not negate the incident itself.

Taylor Swift's Fake AI Porn Images Could Change the World

2024-01-28
20 Minuten
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of AI-generated fake nude images of a person without consent is a direct violation of personal rights and can cause significant harm to the individual and communities. The AI system's use in generating these images directly led to this harm. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

X Blocks Searches for Pop Star Taylor Swift

2024-01-29
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images (deepfake pornographic content) of Taylor Swift circulating on the platform X. These deepfakes are a form of AI-generated manipulated content causing harm to the individual, including privacy violations and reputational damage, which falls under harm to persons or groups. The platform's blocking of search is a response to this realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use (generation of deepfake images) has directly led to harm.

Platform X Allows Taylor Swift Searches Again

2024-01-30
Westdeutscher Rundfunk
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through content moderation and detection of prohibited content, which is typically AI-driven on large platforms. The harm (fake pornographic images) is a recognized issue, but the article focuses on the platform's response (blocking and unblocking searches) and the impact of staff reductions on safety teams. There is no direct or indirect AI Incident described (no specific harm caused by AI malfunction or misuse), nor is there a plausible future harm scenario presented as a hazard. Instead, the article provides supporting information about the platform's handling of harmful content and the challenges faced, fitting the definition of Complementary Information.

X Blocks Taylor Swift Search Queries over Fake Porn

2024-01-29
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
An AI system generated the deepfake pornographic images, which are harmful and violate the rights of the person depicted (Taylor Swift). The platform's inability to control this AI-generated content caused direct harm to the individual and the community, fulfilling the criteria for an AI Incident. Blocking the search terms is a mitigation, not a negation of the harm already done by the AI system's outputs.

Fans Fight Fake Porn Images of Taylor Swift

2024-01-26
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are pornographic and involve a real person without consent, which is a violation of rights and causes harm to the individual and community. The AI system's use in generating and spreading these images directly leads to harm, fulfilling the criteria for an AI Incident. The fans' efforts to remove the images and the legal challenges highlight the realized harm rather than a potential one, confirming the classification as an AI Incident rather than a hazard or complementary information.

Online Platform X Allows Taylor Swift Searches Again

2024-01-30
der Standard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for content moderation and search functionalities on the platform X. The circulation of AI-generated fake pornographic images caused harm to the individual (Taylor Swift) and the platform responded by blocking search queries as a mitigation. The AI system's failure to prevent the spread of harmful content and the subsequent intervention directly relate to harm caused by AI-generated content. Hence, this is an AI Incident due to realized harm linked to AI system use and malfunction.

Deepfake nude images of Taylor Swift highlight the dangers of AI tools

2024-01-26
der Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI tools to create deepfake images that have been widely disseminated, causing harm to the privacy and dignity of Taylor Swift. The harm is direct and realized, as the images were viewed millions of times and caused public outcry and distress. The AI system's use in generating manipulated content that violates personal rights and spreads harmful material fits the definition of an AI Incident involving violations of human rights and harm to communities. The ongoing spread despite platform efforts further confirms the incident's significance.

The name Taylor Swift can no longer be found on X

2024-01-28
der Standard
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that have been widely disseminated on a major social media platform, causing harm to the reputation and privacy of the individual depicted (Taylor Swift). The platform's inability to effectively moderate and prevent the spread of such harmful AI-generated content constitutes an AI Incident, as it has directly led to violations of personal rights and harm to the community through misinformation and manipulated media. The article also mentions platform and governmental responses, but the primary focus is on the realized harm caused by AI-generated deepfakes, meeting the criteria for an AI Incident rather than a hazard or complementary information.

AI fake porn of Taylor Swift: US administration wants stricter rules against deepfakes

2024-01-28
ComputerBase
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and disseminate harmful deepfake content that has directly led to significant harm to individuals (Taylor Swift) and communities (women and girls targeted by online abuse). The AI system's use in generating non-consensual explicit images and the platform's failure to promptly moderate these images constitute an AI Incident under the framework, as the harm is realized and directly linked to the AI system's use and the platform's moderation failures. The discussion of regulatory responses is complementary but does not overshadow the primary incident of harm caused by AI-generated deepfakes.

How Taylor Swift is being defamed with sexualized deepfakes

2024-01-26
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI (Microsoft Designer) was used to create sexualized deepfake images of Taylor Swift, which have been widely distributed and caused harm. This is a clear case of AI-generated content leading to violations of rights (privacy, dignity) and harm to the individual and community. The harm is realized, not just potential, and the AI system's use is central to the incident. Hence, it meets the criteria for an AI Incident.

Deepfakes online: X takes action against fake nude images of Taylor Swift

2024-01-29
RP Online
Why's our monitor labelling this an incident or hazard?
Deepfake technology is a form of AI system that generates manipulated digital media. The creation and dissemination of fake pornographic images constitute a violation of personal rights and can cause harm to the individual and communities. Since the AI-generated content has already been distributed and the platform is responding to this harm, this qualifies as an AI Incident involving violations of rights and harm to communities.

Over 45 million clicks: AI nude photos of Taylor Swift spark outrage

2024-01-26
RP Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate non-consensual explicit images of a public figure, which is a clear violation of rights and causes harm to the individual and potentially to communities by spreading harmful content. The AI system's development and use have directly led to this harm, fitting the definition of an AI Incident under violations of human rights and harm to communities.

Taylor Swift: nude photos go viral - what's behind them

2024-01-26
Express.de
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating manipulated content. The harm is realized as these images violate Taylor Swift's rights and cause reputational and emotional harm. The widespread dissemination on social media platforms, despite content policies, shows the AI system's outputs have directly led to harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through misinformation and manipulated content.

X blocks searches for Taylor Swift - here's why

2024-01-29
Express.de
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been widely disseminated, causing harm to Taylor Swift's privacy and reputation. The platform's inability to fully control the spread of these images and the subsequent search term blocking demonstrate the direct impact of AI misuse. The harm is realized, not just potential, as the deepfakes have been viewed millions of times and have caused significant concern among fans and authorities. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to a person and communities through the spread of harmful content.

Fake nude images of Taylor Swift go viral on X

2024-01-26
watson.ch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are synthetic media created by AI systems. The images are pornographic and fake, violating the depicted person's rights and causing reputational and emotional harm. The content was widely spread and viewed millions of times, indicating significant harm to the individual and communities. The platforms' failure to promptly remove the content contributed to the harm. This fits the definition of an AI Incident as the AI system's use directly led to violations of rights and harm to the individual and community.

Search for Taylor Swift on X blocked, then quickly restored

2024-01-30
TAG24
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred to be involved: platform X likely relies on AI-based content moderation and search algorithms to detect and block harmful or inappropriate content, and the temporary restriction of search functionality reflects the use and limitations of those systems. Although the AI's role in the harm is indirect, the event describes realized harm — the circulation of fake pornographic images — together with an AI-mediated moderation response. It therefore qualifies as an AI Incident involving violations of rights (reputation and privacy) and harm to the community through the dissemination of harmful content.

Uproar over AI nude photos of Taylor Swift

2024-01-26
oe24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake images — AI-generated manipulations that insert real people's faces into fabricated photos. The harm is realized, as the deepfakes were widely viewed and distributed, violating the rights of and harming the individual. This fits the definition of an AI Incident because the AI system's use directly led to that harm. The article also discusses societal and regulatory concerns, but its primary focus is the realized harm from the AI-generated deepfake content.

AI nude images of Taylor Swift: devoted fans and a block on X

2024-01-30
Die Presse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of Taylor Swift, which are non-consensual and pornographic, constituting a clear violation of rights and harm to the individual. The AI system's use directly led to the creation and spread of harmful content. The platform's response and community actions are reactions to this realized harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to community).

AI-generated nude photos of Taylor Swift spark outrage

2024-01-26
Die Presse
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated deepfake images without consent, which constitutes a violation of individual rights and causes harm to the person depicted. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The widespread dissemination of the content and the public and political reaction further confirm the realized harm. Therefore, this event is classified as an AI Incident.

After the Taylor Swift deepfakes - time to really quit X? | 30.01.2024

2024-01-30
swr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfakes created using AI that depict Taylor Swift in fake pornographic images. The use of AI to generate such harmful content directly leads to reputational and privacy harm, which falls under harm to communities and violation of rights. The platform's response to restrict searches indicates the harm is ongoing. Hence, this is an AI Incident as the AI system's use has directly led to harm.

Pop star in an AI storm - X blocks all searches for Taylor Swift over fake images

2024-01-28
Tages Anzeiger
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images created by a text-to-image AI system (Microsoft Designer). The creation and spread of these sexualized fake images constitute a violation of rights and harm to the individual depicted, fulfilling the criteria for harm under (c) violations of human rights or breach of obligations protecting fundamental rights. The platform's response to block searches indicates a direct impact caused by the AI-generated content. Therefore, this is an AI Incident due to realized harm caused by the AI system's outputs and their dissemination.

AI-generated nude photos of Taylor Swift are circulating

2024-01-26
Kurier
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated nude images — realistic but fabricated content produced by AI systems. The circulation of such content harms the individual's reputation and privacy, constituting a violation of rights. Since the AI-generated images are already circulating, the harm is realized. This therefore qualifies as an AI Incident: a violation of rights caused by the use of AI systems to create and distribute harmful content.

After the nude-image block: Taylor Swift can be searched for on X again

2024-01-30
Berliner Zeitung
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through content moderation on a social media platform, which is a typical AI application. The search block was a response to harmful content (fake nude images), but the AI system's role is in monitoring and controlling content rather than causing harm. No direct or indirect harm caused by AI malfunction or misuse is described. The event focuses on the platform's management decisions and ongoing monitoring efforts, which aligns with societal and governance responses to AI challenges. Hence, it fits the definition of Complementary Information rather than an Incident or Hazard.

Outrage over AI-generated nude photos of Taylor Swift

2024-01-26
stol.it
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated deepfake nude images of a real person without consent, which constitutes a violation of personal rights and causes harm to the individual and potentially to communities by spreading harmful content. The AI system's role in generating these images is central and directly linked to the harm. The event is not merely a potential risk but an actual incident where harm has occurred, meeting the criteria for an AI Incident under violations of rights and harm to communities.

Musk's platform X allows searches for Taylor Swift again

2024-01-30
inFranken.de
Why's our monitor labelling this an incident or hazard?
The platform X uses AI systems for content moderation and search functionalities. The spread of fake pornographic images and the resulting temporary search block indicate a failure or limitation in the AI system's ability to effectively moderate content, leading to harm (dissemination of harmful fake images) and disruption (search functionality disabled). The reduction in safety teams further exacerbated this issue. Since harm has occurred and is directly linked to the AI system's use and malfunction, this event is classified as an AI Incident.

Platform X halts searches for Taylor Swift after deepfakes

2024-01-29
Deutschlandfunk Kultur
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that can cause significant harm by spreading false and damaging content about individuals. The article describes actual harm occurring due to AI-generated deepfake images, which have led to platform intervention to mitigate further harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (privacy violation, reputational damage) and disruption requiring action. The platform's response and the community's counteraction are complementary but do not negate the incident classification.

Taylor Swift fights AI photos - what helps against fakes?

2024-01-30
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that have been widely shared, causing harm to the reputation and privacy of Taylor Swift and potentially misleading the public. This constitutes harm to communities and individuals through misinformation and violation of personal rights. The AI system's use in creating and spreading these images is directly linked to the harm, fulfilling the criteria for an AI Incident. The article does not merely discuss potential future harm or general AI developments but reports on actual harm caused by AI-generated content.

Taylor Swift fights AI photos - what helps against fakes?

2024-01-29
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images (deepfakes) of Taylor Swift being widely shared, including harmful and misleading depictions. The AI system's use in creating and spreading these images has directly led to harm, including reputational damage and misinformation, which affect the individual and potentially the community. The platform's response to remove content and restrict searches further confirms the harm's materialization. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Taylor Swift fights AI photos - what helps against fakes?

2024-01-29
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images that have been widely shared, causing harm to the reputation and privacy of Taylor Swift, which constitutes harm to a person and communities. The AI-generated deepfakes have directly led to this harm, fulfilling the criteria for an AI Incident. The platform's intervention confirms the harm is materialized and significant. The article also highlights the societal impact and the need for responses, but the primary focus is on the realized harm from AI misuse.

Searches for Taylor Swift restricted over deepfake nude images

2024-01-28
futurezone.at
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create non-consensual explicit images of a person, which is a clear violation of rights and causes harm. The dissemination of these images on social media platforms has led to direct harm to the individual and potentially to communities. The blocking of search terms and content removal are responses to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through the creation and spread of manipulated images violating personal rights.

Taylor Swift seething with rage: intimate photos of the US pop singer surface online

2024-01-29
News.de
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the intimate photos are AI-generated deepfakes, which have been widely disseminated online, causing harm to Taylor Swift's privacy and reputation. This constitutes a violation of rights and harm to the individual and community. The AI system's use directly led to this harm. The event is not merely a potential risk but an actual occurrence with significant impact, thus classifying it as an AI Incident rather than a hazard or complementary information.

Social media: Musk's platform X allows searches for Taylor Swift again

2024-01-30
Rhein-Neckar-Zeitung
Why's our monitor labelling this an incident or hazard?
The circulation of fake pornographic images of Taylor Swift likely involved AI-generated or AI-manipulated content, a well-known application of generative systems. The platform's decision to block search queries indicates that the AI-generated content caused reputational and privacy harm to the individual. This harm falls under violations of rights (privacy and possibly intellectual property). Since the AI system's outputs (fake images) directly led to harm and to platform action, this is an AI Incident — a realized harm scenario involving AI-generated content, not merely a potential hazard or complementary information.

X lifts block on search queries for Taylor Swift

2024-01-30
Salzburger Nachrichten
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake pornographic images of a public figure, which is a direct harm to the individual's rights and reputation. The platform's response to block search queries was due to the AI-generated harmful content circulating on the platform. The AI system's role in generating the fake images is central to the harm, and the event describes realized harm and platform actions to mitigate it. Therefore, this is an AI Incident as the AI system's use has directly led to harm (violation of rights) and platform disruption (blocking search).

Outrage over AI-generated nude photos of Taylor Swift

2024-01-26
Salzburger Nachrichten
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images without consent, which is a direct violation of rights and causes harm to the individual. The harm is realized, not just potential, as the images have been created and disseminated. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual).

Platform X takes action against fake nude images of Taylor Swift | Tiroler Tageszeitung Online

2024-01-29
Tiroler Tageszeitung Online
Why's our monitor labelling this an incident or hazard?
The incident involves AI-generated fake images (deepfakes) that have been disseminated on a social media platform, causing harm to the individual depicted (Taylor Swift). The platform's response to remove such content and restrict search indicates recognition of the harm caused. Since the AI system's use (generative AI for fake images) has directly led to harm (reputational and psychological harm, violation of privacy and rights), this qualifies as an AI Incident under the framework, specifically under harm to communities and violations of rights.

Sexualized image forgeries: the deepfake flood - netzpolitik.org

2024-01-29
netzpolitik.org
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models like DALL-E 3) to create sexualized deepfake images without consent, which constitutes a violation of human rights and causes harm to individuals and communities. The harm is realized and ongoing, as the images have gone viral and caused public outrage and distress. The AI system's use directly led to the creation and dissemination of harmful content. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Musk's platform X allows searches for Taylor Swift again

2024-01-30
Westfälische Nachrichten
Why's our monitor labelling this an incident or hazard?
The AI system was used to create fake pornographic images without consent, a violation of personal rights that can cause significant harm to the individual and community. The spread of these images forced the platform to take emergency measures, indicating direct harm caused by the AI-generated content. The involvement of AI in generating illegal content and the resulting platform disruption fit the definition of an AI Incident due to violation of rights and harm to communities.

X takes action against fake nude images of Taylor Swift

2024-01-29
Freie Presse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images — manipulated digital media created by AI. The spread of these images has harmed Taylor Swift's reputation and privacy, a violation of rights and harm to communities. The article confirms the images were widely disseminated and viewed, indicating realized rather than merely potential harm. The involvement of AI in creating the deepfakes and the resulting harm meet the criteria for an AI Incident under the OECD framework.

Outrage over AI-generated nude photos of Taylor Swift

2024-01-26
NEWS Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deepfake images, which are AI-generated synthetic media. The use of these images without consent constitutes a violation of rights and causes harm to the individual targeted and potentially to the broader community by spreading harmful misinformation and non-consensual content. The harm has already occurred as the images were viewed millions of times before removal. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Taylor Swift and pornographic deepfake images: when will we finally get AI rules that protect women?

2024-01-29
GLAMOUR
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are explicitly stated to be created by AI systems. The distribution of these images has caused direct harm to Taylor Swift and, by extension, to other women who face similar abuses, including reputational damage, privacy violations, and mental health impacts. These harms fall under violations of human rights and harm to communities. The incident is ongoing and has already caused significant harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

AI-generated nude photos of Taylor Swift outrage fans and US politicians

2024-01-27
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems that manipulate and generate realistic fake content. The harm is realized as the images were widely viewed and caused distress to the individual and the community, constituting a violation of rights and harm to communities. The article describes the actual occurrence of harm, not just potential harm, making this an AI Incident. The involvement of AI in creating the harmful content and the resulting violation of rights and harm to communities justifies classification as an AI Incident.

Taylor Swift: AI-generated pornographic images spark political debate

2024-01-28
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated pornographic images created using AI text-to-image tools, which have been widely disseminated, causing harm to the individual depicted and raising concerns about the broader impact on women and girls. The harm includes violations of rights (non-consensual use of likeness, potential defamation, and psychological harm) and harm to communities through the spread of harmful content. The AI system's use directly led to these harms, qualifying this as an AI Incident rather than a hazard or complementary information.

Taylor Swift's fake sex photos anger fans around the world

2024-01-26
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral spread of AI-generated deepfake explicit images of a public figure, Taylor Swift. The AI system's use directly leads to harm by violating her rights and causing reputational and emotional harm. The harm is realized as the images have been widely viewed and shared, and the incident involves the misuse of AI-generated content. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the community.

Explicit fake images of Taylor Swift prove the law hasn't kept pace with technology, experts say

2024-01-26
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that depict a person in sexually explicit scenarios without consent, which constitutes a violation of rights and harm to the individual and communities. The harm is realized as the images were widely viewed and shared, causing reputational and emotional damage. The AI system's role is pivotal as it enabled the creation of these fake images. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

X blocks searches for "Taylor Swift" after explicit deepfakes go viral

2024-01-30
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generative AI) to create explicit non-consensual images, which have been widely spread on the platform, causing harm to the individual depicted and the community. The platform's response to block searches and remove content confirms the recognition of harm. The AI system's use has directly led to violations of rights and harm to communities, meeting the criteria for an AI Incident rather than a hazard or complementary information.

X-rated AI images of Taylor Swift spread on X, sparking calls for a crackdown - National

2024-01-26
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate deepfake images that are sexually explicit and non-consensual, directly harming the individual depicted (Taylor Swift) and potentially others. The harm includes violation of rights and harm to communities through the spread of deceptive and harmful content. The AI system's role is pivotal in creating and enabling the rapid dissemination of these images. The article reports that the harm has already occurred, with millions of views and widespread circulation before removal, fulfilling the criteria for an AI Incident. The discussion of platform moderation challenges and societal responses further supports the classification as an incident rather than a mere hazard or complementary information.

X suspends searches for Taylor Swift as explicit deepfake images spread

2024-01-30
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are sexually explicit and non-consensual, which constitute a violation of privacy and human rights. The AI system's use directly led to the creation and spread of harmful content. The platform's response to block searches is a mitigation measure but does not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

AI-generated nude photos of Taylor Swift spark outrage

2024-01-26
Baden online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images that directly harm the subject's privacy and dignity, constituting a violation of rights. The widespread dissemination of these images on a major platform caused harm to the individual and communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the images were viewed millions of times before removal. The political and social reactions further confirm the significance of the harm caused.

After the deepfake-porn block: searches for "Taylor Swift" possible again on X

2024-01-30
unternehmen-heute.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to produce a deepfake pornographic video, which is a direct use of an AI system to create harmful content. The harm includes violation of privacy and rights of the individual depicted (Taylor Swift), as well as broader harm to communities through online harassment and abuse. The blocking of search terms was a response to this harm. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to harm (violation of rights and harm to communities).

Social media: Musk's platform X allows searches for Taylor Swift again - Verlagshaus Jaumann

2024-01-30
Die Oberbadische - Markgräfler Tagblatt - Weiler Zeitung
Why's our monitor labelling this an incident or hazard?
The platform X likely uses AI systems for content moderation and search functionalities. The circulation of fake pornographic images constitutes harm to the individual's reputation and privacy, which can be considered harm to communities or violation of rights. However, the article describes a temporary measure (search block) and ongoing monitoring rather than a direct or realized harm caused by the AI system itself. The AI system's role is in content moderation and search, but the harm stems from the malicious use of manipulated content by users. Since the article focuses on the platform's response and monitoring efforts rather than a new incident of harm caused by AI, this is best classified as Complementary Information.

Social media: Musk's platform X allows searches for Taylor Swift again

2024-01-30
General-Anzeiger Bonn
Why's our monitor labelling this an incident or hazard?
The platform X uses AI-driven algorithms for content moderation and search functionalities. The temporary block and subsequent monitoring relate to the use of AI systems to detect and manage harmful content. However, the event does not describe any direct or indirect harm caused by the AI system itself, nor does it indicate a plausible future harm stemming from AI malfunction or misuse. Instead, it reports on the platform's response to harmful content and its moderation strategy, which is a governance or operational update. Therefore, this is Complementary Information about AI system use in content moderation rather than an AI Incident or Hazard.

المركزية - The White House is worried, and the reason is... Taylor Swift!

2024-01-27
المركزية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake pornographic images, which constitutes a violation of rights and causes harm to the individual depicted. The harm is realized as the images have been widely viewed and spread, impacting the person and potentially communities. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content. The article focuses on the harm and the response to it, not just general AI news or potential future harm.

X suspends searches for Taylor Swift's name... what are the reasons?

2024-01-29
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake explicit images, which directly leads to harm by violating the rights and dignity of Taylor Swift, a real person. The platform's action to block searches is a response to this harm. The spread of such AI-generated non-consensual intimate images constitutes a violation of rights and harm to the individual and community. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational harm).

Fabricated pornographic images of the pop star spark outrage in US political circles

2024-01-28
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to create fake pornographic images, which is a direct misuse of AI systems leading to harm. This harm includes violation of the individual's rights and potential psychological and social damage, as well as political and social disruption. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm as defined in the framework.

Taylor Swift images... indecent poses and an unknown perpetrator

2024-01-28
صدى البلد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to create fake indecent images, which have been widely disseminated, causing harm to the individual's reputation and potentially to communities by spreading misinformation and violating rights. This constitutes an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community trust). The involvement of AI in generating harmful content and the resulting social impact fits the definition of an AI Incident rather than a hazard or complementary information.

After the White House warning... "X" deletes fabricated pornographic images of Taylor Swift - كود: جريدة إلكترونية مغربية شاملة.

2024-01-29
كود: جريدة إلكترونية مغربية شاملة.
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create fabricated explicit images (deepfakes) of a public figure, which constitutes a violation of rights and harm to the individual and community. The harm is realized as the images have already spread widely, prompting official concern and calls for regulation. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

The White House comments and platform X blocks... the latest developments in the Taylor Swift pornographic images affair - اليوم السابع

2024-01-28
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The AI system was used to create explicit fake images without consent, directly causing harm to the individual's rights and reputation, which fits the definition of an AI Incident under violations of human rights and harm to communities. The platform's measures and governmental responses are reactions to this realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI-generated content and the societal and legal implications arising from it.

The Taylor Swift pornographic images that shook the White House... what's the story behind the millions of views?

2024-01-28
الشروق أونلاين
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake pornographic images (deepfakes) of Taylor Swift, which have been widely disseminated online, causing harm to her and raising concerns at the White House. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The involvement of AI in generating the harmful content and the resulting real harm justifies classification as an AI Incident rather than a hazard or complementary information.

Taylor Swift... searches blocked over fake nude images

2024-01-29
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event describes the circulation of AI-generated fake nude images of a public figure, which is a direct harm involving AI misuse. The platform's response to block searches is a mitigation measure but does not negate the fact that harm has occurred. The AI system's role in generating the fake images is pivotal to the incident. The harm includes violation of privacy, reputational damage, and societal harm from misinformation and non-consensual content. Therefore, this is classified as an AI Incident.

Fabricated | Amr Adib warns of the spread of explicit images of a famous star... video

2024-01-29
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake explicit images, which constitutes a violation of personal rights and can cause harm to the individual's reputation and dignity. The harm is realized as the images have already spread and caused controversy, prompting official and platform responses. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the individual and community through misinformation and defamation.

Indecent images... Elon Musk blocks the star "Taylor Swift" | What happened?

2024-01-29
صدى البلد
Why's our monitor labelling this an incident or hazard?
The incident involves AI systems generating deepfake images that are inappropriate and potentially harmful, constituting a violation of personal rights and harm to the individual and community. The AI system's misuse has directly led to reputational and informational harm, which fits the definition of an AI Incident. The platform's action to restrict searches is a response to this realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI-generated content.

Spread of fake pornographic images of Taylor Swift angers her fans and US politicians

2024-01-27
القدس العربي
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to create fake pornographic images (deepfakes) of Taylor Swift, which were widely circulated and caused harm. This constitutes a violation of rights (privacy, dignity) and harm to communities (harassment, misinformation). The AI system's use directly led to these harms. Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and directly linked to the AI system's use.

A "sexual" image of American star Taylor Swift worries the White House

2024-01-27
أخبار الآن
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images that are sexualized and harmful, which have been widely viewed and then removed. The White House's concern highlights the harm caused, including disproportionate effects on women and girls, which aligns with violations of rights and harm to communities. The AI system's role in generating and spreading manipulated media that causes harm is direct and realized. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

AI-generated explicit images of Taylor Swift ignite outrage... and the White House is "concerned"

2024-01-27
صحيفة الشرق الأوسط
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated explicit images (deepfake-like content) of Taylor Swift being widely shared, which is a direct use of AI systems to create harmful content. This has led to harm in terms of violation of personal rights, reputational damage, and community harm through misinformation and non-consensual pornography. The White House's concern and calls for legislative action further confirm the recognition of harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm as defined by violations of rights and harm to communities.

Pornographic images of Taylor Swift anger her fans... what's the story?

2024-01-27
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to fabricate pornographic images of a public figure without consent, which constitutes a violation of rights and causes harm to the individual and communities. The harm is realized as the images have been widely viewed and caused public outrage and distress. The AI system's role is pivotal as it enabled the creation of realistic fake content that would not be possible otherwise. Therefore, this meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.

Taylor Swift's pornographic images rattle "X"... and the White House steps in

2024-01-29
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic images, which are non-consensual and harmful, constituting a violation of rights and harm to the individual and community. The harm is realized and ongoing, as the images have spread widely and caused disruption, prompting platform and governmental responses. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content violating rights and causing societal harm.

X disables searches for pornographic images of Taylor Swift

2024-01-28
صحيفة السوسنة الأردنية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake pornographic images, which have been widely disseminated, causing harm to the individual (Taylor Swift) and raising concerns about misinformation and privacy violations. The platform's response to remove such images and the White House's involvement indicate that harm has occurred and is ongoing. Therefore, this qualifies as an AI Incident because the AI-generated content has directly led to harm (violation of rights and harm to the individual and community).

Made with "deepfake"... the White House is up in arms after pornographic images of pop singer Taylor Swift spread - كود: جريدة إلكترونية مغربية شاملة.

2024-01-28
كود: جريدة إلكترونية مغربية شاملة.
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images, which have been widely distributed, causing harm to the individual depicted and raising broader societal concerns. The harm includes violation of privacy and potential psychological and reputational damage, fitting the definition of an AI Incident due to realized harm. The article also discusses responses from the White House, social media companies, and lawmakers, but the primary focus is on the harm caused by the AI-generated content, not just the responses, so it is not merely Complementary Information.

The White House is "deeply concerned" about fabricated pornographic images of Taylor Swift

2024-01-27
الحرة
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the pornographic images were AI-generated deepfakes, indicating the involvement of an AI system in fabricating and spreading harmful content. The harm is realized as it involves non-consensual sexual imagery, which violates personal rights and causes reputational and psychological harm. The event also highlights societal and governance responses, but the primary focus is on the harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to the individual and community.

After the White House warning... X deletes fabricated pornographic images of Taylor Swift

2024-01-27
الحرة
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create fabricated explicit images without consent, which is a violation of personal rights and can cause significant harm to the individual and community. The White House's warning and the platform's active removal of such content confirm the harm is occurring. The AI system's role in generating and spreading these images is direct and pivotal to the harm. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and harm to communities.

The White House is "concerned" about the explicit Taylor Swift images

2024-01-27
الإمارات اليوم
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated images (deepfakes) that are sexually explicit and falsely depict a person, which is a clear violation of privacy and potentially other rights. The widespread sharing and viewing of these images indicate realized harm to the individual and communities affected. The involvement of generative AI in creating these images and the resulting harm fits the definition of an AI Incident. The article also notes active removal efforts and calls for legislative responses, but the primary event is the harm caused by the AI-generated content.

The White House is concerned about fabricated pornographic images of Taylor Swift

2024-01-27
almodon
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake pornographic images (deepfakes) of a real person without consent, which is a direct violation of personal rights and causes harm to the individual and communities. The harm is realized as the images have been widely viewed and spread, causing reputational and emotional damage. The involvement of AI in creating these images is explicit and central to the harm described. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.

The White House expresses concern over fabricated pornographic images of Taylor Swift

2024-01-27
@Elaph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the pornographic images are AI-generated deepfakes, indicating the involvement of an AI system in creating manipulated content. The widespread distribution of these images has caused harm to Taylor Swift, violating her rights and causing reputational and emotional damage. The harm is realized and ongoing, not merely potential. The involvement of AI in the creation and dissemination of harmful content fits the definition of an AI Incident, as it directly leads to violations of rights and harm to the individual and community. The article also discusses responses and calls for legislative action, but the primary focus is on the harm caused by the AI-generated images.

Taylor Swift's fans: artificial intelligence is disgusting | صحيفة المواطن الالكترونية للأخبار السعودية والخليجية والدولية

2024-01-27
صحيفة المواطن الإلكترونية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake pornographic images, which are a form of manipulated content created by AI systems. The widespread sharing of such images constitutes a violation of rights and harms the individual and community by spreading misinformation and defamatory content. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational damage). The mention of the White House's concern and calls for social media companies to act further supports the recognition of harm already occurring.

Platform X blocks searches for Taylor Swift | صحيفة المواطن الالكترونية للأخبار السعودية والخليجية والدولية

2024-01-28
صحيفة المواطن الإلكترونية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images causing harm by spreading misinformation and inappropriate content about Taylor Swift. The social media platform's blocking of searches is a direct response to this harm. The AI system's involvement in generating the fake images is central to the incident, and the harm includes reputational damage and violation of rights. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm to communities and individuals.

Why is the White House worried about the publication of pornographic images of a singer?

2024-01-28
نجوم مصرية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and spread fake pornographic images of a real person, which constitutes a violation of rights and harm to the individual and communities. The White House's concern and call for legislative action indicate recognition of the harm caused. The AI-generated content's role is pivotal in causing misinformation and reputational harm, fitting the definition of an AI Incident due to realized harm from AI misuse.

Artificial intelligence angers Taylor Swift's fans | صحيفة العرب

2024-01-27
صحيفة العرب
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and distribute non-consensual explicit deepfake images, which directly violates the rights of the individual depicted (intellectual property and personal rights) and causes harm to communities by spreading harmful content. The harm is realized and ongoing, as evidenced by the widespread sharing and political concern. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Pornographic images of Taylor Swift... the White House is worried!

2024-01-28
tayyar.org
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems. The distribution of these images constitutes a violation of privacy and can be considered harassment, which falls under violations of human rights and harm to individuals. The harm is realized as the images were widely circulated for a significant period before removal, and the White House has expressed concern about the impact on women and girls. The platform's response to remove the images and take action against accounts is a mitigation step but does not negate the occurrence of harm. Hence, this is an AI Incident.

Taylor Swift's pornographic images worry the White House

2024-01-27
صحيفة صدى الالكترونية
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that have been disseminated widely, causing harm to the celebrity's rights and reputation. The harm is realized and significant, as indicated by the White House's concern and calls for legislative action. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community).

The White House and Congress intervene over Taylor Swift's pornographic images

2024-01-27
جريدة المدى
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create and spread fake pornographic images of a real person, which is a clear violation of rights and causes harm. The involvement of AI in generating these images and their widespread dissemination on social media platforms directly leads to harm as defined under violations of human rights and harm to communities. The governmental response underscores the seriousness of the incident. Therefore, this qualifies as an AI Incident.

Taylor Swift's fake images highlight the risks of artificial intelligence - صدى التقنية

2024-01-30
صدى التقنية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (text-to-image generation tools) to create and spread fake explicit images of a real person without consent, which constitutes a violation of personal rights and intellectual property. The harm is realized as the images were widely viewed and shared, causing reputational and privacy harm. The platform's response and the company's efforts to fix vulnerabilities confirm the AI system's role in causing harm. Hence, this is an AI Incident as the AI system's use directly led to harm to the individual and communities.

Worrying... the White House comments on the spread of fake images of pop star Taylor Swift - أخبار ليبيا 24

2024-01-27
وكالة أخبار ليبيا 24
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread dissemination of AI-generated deepfake images causing harm to an individual and potentially to broader communities by spreading misinformation and violating rights. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm caused by AI-generated content.

The danger of artificial intelligence... pornographic images of Taylor Swift anger her fans and the White House - تيل كيل عربي

2024-01-27
تيل كيل عربي
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and spread fake pornographic images (deepfakes) of a public figure without consent. This constitutes a violation of rights and causes harm to communities and individuals, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the images were widely viewed and caused public outrage and political concern. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

After her nude-images scandal... Taylor Swift's name disappears from "X"

2024-01-28
albiladpress.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been widely disseminated, causing harm to the privacy and dignity of the individual depicted. The AI system's use directly led to a violation of rights and harm to the community by spreading misleading and harmful content. The platform's response and governmental concern further confirm the seriousness of the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

The White House is extremely concerned about Taylor Swift's pornographic images

2024-01-28
Arabstoday
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the pornographic images are AI-generated fakes, indicating the involvement of an AI system in creating manipulated content. The harm includes violation of rights (privacy and possibly intellectual property) and harm to the community through misinformation and the spread of non-consensual explicit content. This constitutes a violation of human rights and harm to communities, fitting the definition of an AI Incident. The White House's concern and call for legislative action further underscore the seriousness and realized nature of the harm.

Fake pornographic images of Taylor Swift anger Americans

2024-01-27
Azzaman
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to create fake pornographic images, which have been widely circulated, causing harm to the individual depicted and distress to the public. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI in generating the content and the resulting harm is direct and clear. The political and social responses further confirm the recognition of harm caused by the AI system's misuse.

After the Taylor Swift crisis... everything you need to know about deepfakes | صحيفة تواصل نيوز

2024-01-28
تواصل
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images and videos that have been widely distributed, causing harm to individuals' privacy and dignity, which falls under violations of human rights and harm to communities. The article describes realized harm from the use of AI-generated content, including non-consensual explicit images, misinformation, and social disruption. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Taylor Swift's fake images spark "panic" in Hollywood and the White House - الجزائر الجديدة

2024-01-28
الجزائر الجديدة
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake explicit images without consent, directly leading to harm in the form of privacy violations and potential emotional and reputational damage to the individuals targeted. This fits the definition of an AI Incident because the AI system's use has directly led to violations of fundamental rights and harm to individuals. The public and governmental responses further confirm the recognition of harm caused by the AI-generated content.

The White House and Congress intervene over Taylor Swift's "pornographic" images - شفق نيوز

2024-01-27
Shafaq News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) that have been widely disseminated, causing harm to the individual depicted and potentially others. The harm includes violations of rights and reputational damage, fitting the definition of harm to communities and violations of rights under the AI Incident criteria. The involvement of AI in generating the images is clear, and the harm is realized, not just potential. The governmental and platform responses further confirm the seriousness and materialization of harm. Therefore, this is classified as an AI Incident.

Why did Taylor Swift's pornographic images worry the White House? | البوابة

2024-01-27
البوابة
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread sharing of AI-generated fake pornographic images of a real person, which constitutes a violation of rights and harm to the individual and community. The AI system's role in generating these images is direct and pivotal to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and linked to the AI system's use.

"Fabricated" pornographic images of Taylor Swift draw curses upon artificial intelligence

2024-01-27
اندبندنت عربية
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of AI systems (generative AI for deepfake image creation) and has directly led to harm, including violation of rights and reputational damage to Taylor Swift, as well as broader societal harm through harassment and misuse of AI-generated content. The widespread dissemination of these images and the political and social backlash confirm that harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities.

After their spread... the White House is extremely concerned about Taylor Swift's pornographic images

2024-01-28
مانكيش نت
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic images, which are being widely spread and causing harm to the individual depicted and potentially to broader societal norms and rights. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The direct harm is realized as the images have already been disseminated and caused concern, not just a potential future risk. Therefore, the classification is AI Incident.

Fake AI "indecent photos" run rampant; X (Twitter) officially bans searches for Taylor Swift

2024-01-29
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake explicit images (deepfakes) of a person, which constitutes a violation of rights and harm to communities. The dissemination of such AI-generated content is an active harm caused by AI misuse. The platform's response to block searches is a mitigation measure but does not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to realized harm from AI misuse.

Taylor Swift AI "indecent photos" go viral, alarming the White House

2024-01-27
搜狐新闻
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create and spread non-consensual explicit images of a real person, which constitutes a violation of privacy and potentially other rights. The harm is realized as the images are actively circulating, prompting legal considerations and official responses. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and their dissemination.

Fake AI "indecent photos" run rampant; X (Twitter) steps in officially for the first time to ban searches for Taylor Swift

2024-01-29
中关村在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake explicit images (deepfakes) of Taylor Swift spreading on a social media platform, which is a direct misuse of AI systems causing harm to the individual and community by spreading false and harmful content. This fits the definition of an AI Incident as the AI system's use has directly led to harm (reputational and potential rights violations). The platform's intervention is a response but does not negate the incident classification. Therefore, this event is an AI Incident.

The White House alarmed! A famous star's AI "indecent photos" go viral! A rare armed standoff between Texas and the US federal government! And big news from 中泰......

2024-01-28
东方财富网
Why's our monitor labelling this an incident or hazard?
The AI-generated fake images constitute a direct harm to the individual involved (violation of rights and harm to reputation and privacy), caused by the use of AI systems to create and spread false and harmful content. This meets the definition of an AI Incident as the AI system's use has directly led to harm. The article's focus on the spread and impact of these AI-generated images, along with official concern and calls for regulation, confirms this classification. The other news items do not involve AI systems or related harms and are therefore unrelated.

After the AI indecent photos spread, X temporarily blocked keyword searches for Taylor Swift - 大纪元

2024-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create non-consensual explicit images, which constitutes a violation of personal rights and privacy, a form of harm under the AI Incident definition (violation of human rights). The rapid spread of these images on a social media platform and the platform's response to mitigate harm confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its dissemination.
Thumbnail Image

AI explicit photos of Taylor Swift circulate online; White House, Microsoft, and actors' union condemn them - The Epoch Times

2024-01-28
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated non-consensual pornographic images of Taylor Swift being widely circulated online, which is a direct violation of personal rights and causes harm to the individual and community. The AI system's use in generating these images and their dissemination is central to the harm. The public and institutional responses, including from the White House, Microsoft, and the actors' union, acknowledge the harm and call for stronger regulation and protective measures. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content violating rights and causing distress.
Thumbnail Image

US pop superstar's "Taylor AI explicit photos" leaked; White House shocked (photos)

2024-01-27
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely circulated on social media, causing significant concern and harm. The AI system's use in generating these images directly leads to violations of privacy and potential harassment, which are harms to individuals and communities. The White House's response and calls for legislation further confirm the recognition of harm caused by AI misuse. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm.
Thumbnail Image

US pop superstar's "Taylor AI explicit photos" leaked; White House shocked (photos)

2024-01-27
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake images of Taylor Swift have been widely circulated, causing harm to her reputation and privacy, which constitutes a violation of rights. The harm is direct and ongoing, as millions have viewed these images and the spread has not been fully stopped. The AI system's role in generating these images is central to the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in generating and disseminating harmful content.
Thumbnail Image

Uproar over viral AI-generated "explicit" photos of Swift! Musk's X platform restores Taylor Swift-related search

2024-01-30
和讯网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating realistic but false content. The widespread sharing of these images on a major social media platform has led to tangible harm, including reputational damage and the spread of harmful misinformation. The platform's response to temporarily block and then restore search functionality indicates recognition of the harm caused. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities and individuals' rights.
Thumbnail Image

Jimu Commentary | Taylor Swift face-swapped again, angering fans worldwide; no one is a bystander in guarding against AI misuse

2024-01-29
和讯网
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake AI) to generate harmful content that infringes on individuals' rights and causes reputational and emotional harm, which fits the definition of an AI Incident. The harms include violations of legal rights, deception, and social harm to communities. The article describes realized harm through the spread of AI-generated fake images and videos, not just potential harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

AI "explicit photos" of Taylor Swift go viral! White House alarmed

2024-01-28
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology (AI-generated synthetic images and videos). The harms are realized and ongoing: reputational damage to Taylor Swift, dissemination of non-consensual explicit images (a violation of rights and privacy), and financial fraud through fake promotional scams. These constitute direct harms caused by the AI system's outputs. Therefore, this qualifies as an AI Incident. The article also mentions platform and government responses, but the primary focus is on the harm caused by the AI-generated content, not just the responses, so it is not merely Complementary Information.
Thumbnail Image

AI "explicit photos" of Taylor Swift spread virally! White House alarmed

2024-01-28
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread dissemination of AI-generated deepfake explicit images and videos of Taylor Swift, which have caused reputational harm and distress. The AI system's use (deepfake generation) directly led to violations of personal rights and harm to the community through misinformation and harassment. The harm is realized and ongoing, with social media platforms and the White House acknowledging the issue. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community).
Thumbnail Image

Culture Talk | Singer Taylor Swift deepfaked by AI again, angering fans worldwide; reining in AI is urgent

2024-01-28
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that have been widely spread, causing significant harm to Taylor Swift's privacy, reputation, and emotional well-being, as well as distress among her fans. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The AI system's use in creating and disseminating these images is central to the harm. Although mitigation efforts and calls for regulation are discussed, the main event is the actual harm caused by the AI deepfakes, not just potential or complementary information.
Thumbnail Image

Taylor Swift targeted by AI deepfake porn; views surge past 47 million, alarming the White House

2024-01-28
早报
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems capable of generating realistic but fake content. The harm is realized as the images are non-consensual, pornographic, and widely viewed, constituting a violation of personal rights and harm to the community. The involvement of AI in creating and spreading these images directly leads to the harm described. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
Thumbnail Image

US media: "Taylor Swift AI explicit photos" alarm the White House; social media platform X blocks searches for her name

2024-01-29
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images (deepfakes) of a public figure being widely spread on social media, causing reputational harm and privacy violations. The AI system's use in generating these images is central to the harm. The platform's blocking of search terms is a response to this harm. The White House's concern and call for regulation further underscore the seriousness of the incident. Since the harm (violation of rights and harm to community trust) is realized and directly linked to AI-generated content, this event meets the criteria for an AI Incident.
Thumbnail Image

Taylor Swift "explicit photos" incident alarms the White House! Nearly 5 billion internet users worldwide are becoming victims of AI content

2024-01-29
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake content that has been widely distributed, causing direct harm to individuals' reputations and privacy, which constitutes violations of human rights and harm to communities. The harm is realized, not just potential, as the fake images have been viewed millions of times and have prompted legal and governmental responses. Therefore, this qualifies as an AI Incident. The additional discussion of regulatory responses and societal concerns is complementary but does not overshadow the primary incident of harm caused by AI-generated deepfakes.
Thumbnail Image

Microsoft patches loophole; its AI can no longer generate fake nude photos of celebrities

2024-01-30
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Microsoft's Designer) used to generate harmful content—non-consensual fake nude images of celebrities. This misuse has caused harm by violating individuals' rights to privacy and dignity, which falls under violations of human rights as defined. Microsoft's response to fix the vulnerability and prevent further misuse confirms the AI system's role in causing the harm. The harm is realized and ongoing, not merely potential, so this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Foreign media: X has lifted the temporary measure blocking searches for Taylor Swift's name

2024-01-30
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images of a public figure, which have been widely disseminated, causing reputational harm and potential violation of privacy and rights. The social media platform's temporary blocking of search results related to the name was a direct response to this harm. The AI system's use (generative AI creating fake images) directly led to harm to the community and violation of rights. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

AI-generated explicit photos of Taylor Swift go viral: how can victims seek redress?

2024-01-31
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake explicit images of a real person without consent, which constitutes a violation of rights and harm to the individual. The harm is realized as the images have been widely disseminated, causing reputational damage and distress. The involvement of AI in creating and spreading these images is explicit. The article also mentions legal actions and platform responses, but the primary focus is on the incident of harm caused by AI deepfake misuse. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

The barrier to "deepfakes" is falling; "Taylor Swift AI photos" put the US on edge!

2024-01-29
163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create and distribute non-consensual, sexually explicit images, which directly violates personal rights and causes harm to individuals and communities. The harm is realized and widespread, with millions exposed to the content. The article details the societal and legal implications, government and platform responses, and ongoing harm to victims. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities.
Thumbnail Image

"Taylor Swift AI photos" put the US on edge

2024-01-28
163.com
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread dissemination of AI-generated deepfake images of Taylor Swift, which are non-consensual and sexually explicit. This constitutes a violation of individual rights and causes harm to the affected person and potentially others. The involvement of AI in generating these images and their viral spread on social media platforms directly leads to harm as defined by violations of rights and harm to communities. The event is not merely a potential risk but an actual incident with realized harm, thus qualifying as an AI Incident.
Thumbnail Image

Taylor Swift AI explicit photos go viral; reining in AI is urgent

2024-01-28
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are non-consensual and harmful, which have been widely disseminated causing harm to the individual (Taylor Swift) and distress to her fans and the public. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI in generating and spreading these images is clear, and the harm is realized, not just potential. The discussion of legal actions and platform responses further supports the classification as an incident rather than a hazard or complementary information.
Thumbnail Image

Millions saw Taylor Swift's "explicit photos"? Fans are furious, and even the White House took notice

2024-01-28
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit fake images (deepfakes) of a person without consent, which is a direct violation of personal rights and can cause reputational and emotional harm. The harm is realized as millions have seen these images, and the affected individual and community have reacted strongly. The involvement of AI in creating and spreading these images is explicit, and the harm to rights and communities is clear. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities.
Thumbnail Image

AI "explicit photos" of famous female star go viral; White House shocked!

2024-01-28
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake AI generating fake images and videos). The use of AI directly led to harm: privacy violations through non-consensual intimate images, reputational damage to the celebrity, and financial harm to victims of the fraudulent product promotions. The article describes actual harm occurring, not just potential harm. The involvement of AI in creating and spreading the harmful content is central. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Involving Taylor Swift: White House shocked

2024-01-27
163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and spread false and non-consensual intimate images, which constitutes a violation of individual rights and causes harm to the person and potentially to communities by enabling harassment and misinformation. The harm is realized as the images are actively circulating and causing distress, meeting the criteria for an AI Incident due to violations of rights and harm to individuals. The White House's response and call for legislation further confirm the seriousness of the harm caused by the AI system's misuse.
Thumbnail Image

Taylor Swift AI porn images go viral online, alarming the White House and prompting calls for legislation against fabricated sexual content

2024-01-27
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images, which are AI-generated manipulated content. The harm includes violations of personal rights (privacy, reputation), emotional harm, and social disruption, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The AI system's use in creating and spreading these images directly led to the harm described. The involvement of the platform and government responses further confirms the incident's significance. Therefore, this is classified as an AI Incident.
Thumbnail Image

Taylor Swift targeted by AI-fabricated sexual smears that spread widely; fans fight back

2024-01-27
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create deepfake explicit images of Taylor Swift, which were widely spread on social media platforms, causing harm to her privacy and dignity. The spread of these images led to harassment and stalking, which are direct harms to the person. This fits the definition of an AI Incident because the AI system's use directly caused violations of rights and harm to the individual and community. The involvement of AI in generating harmful content and the resulting real-world consequences confirm this classification.
Thumbnail Image

"Taylor Swift AI explicit photos" go viral; White House shocked

2024-01-27
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit fake images of a real person without consent, which is a direct violation of personal rights and can cause significant harm to the individual and communities targeted by such harassment. The harm is realized as the images have already been widely circulated, and the White House's response underscores the seriousness of the incident. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its societal impact.
Thumbnail Image

AI-generated explicit photos of Taylor Swift go viral: how can victims seek redress?

2024-01-31
华龙网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake explicit images of a public figure, which have been widely disseminated on social media, causing reputational harm and distress. The AI-generated content is the direct cause of harm (violation of rights, harassment, and misinformation). The article also mentions the social media platform's efforts to remove such content and legal advice on how victims can seek redress, confirming the harm has occurred. Hence, this is an AI Incident as the AI system's use has directly led to harm to a person and communities.
Thumbnail Image

"Taylor Swift AI explicit photos" alarm the White House; X blocks searches for her name - Wen Wei Po

2024-01-29
香港文匯網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake, non-consensual images of a public figure, which have been widely disseminated, causing reputational harm and misinformation. The platform's intervention to block searches indicates recognition of harm caused by AI misuse. The White House's concern and call for regulation further underscore the seriousness of the harm. The AI system's use directly led to harm to the individual and communities through misinformation and violation of rights, fitting the definition of an AI Incident.
Thumbnail Image

AI "explicit photos" of famous female star go viral; White House shocked!

2024-01-28
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology to create and disseminate unauthorized and harmful synthetic content of a real person, causing direct harm through privacy violations, reputational damage, and consumer fraud. The AI system's outputs have been widely viewed and have led to actual harm, fulfilling the criteria for an AI Incident. The presence of AI is explicit (deepfake technology), and the harms are clearly articulated and realized. Although there are governance responses mentioned, the primary focus is on the incident of harm caused by AI misuse.
Thumbnail Image

White House responds to viral Taylor Swift AI photos: false images, deeply concerning

2024-01-27
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake intimate images, which were then disseminated online, causing harm through violation of privacy and enabling harassment. This fits the definition of an AI Incident because the AI's use directly led to harm to an individual (violation of rights) and communities (harassment and misinformation). The White House's concern and call for regulation further confirm the recognition of harm caused by AI misuse in this context.
Thumbnail Image

Fake explicit images of Taylor Swift shared online draw lawmakers' attention - cnBeta.COM

2024-01-28
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images causing harm by spreading non-consensual explicit content of a public figure, Taylor Swift. This constitutes a violation of rights and harm to the individual's reputation and emotional well-being, fitting the definition of an AI Incident. The involvement of AI in generating the harmful content is clear, and the harm is occurring as the images have been widely viewed and shared. The legislative and platform responses further confirm the recognition of harm caused by AI misuse. Thus, the event is classified as an AI Incident.
Thumbnail Image

Behind the spread of AI-generated explicit photos: how can we protect ourselves? | Tech Roundtable

2024-01-29
163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology to create and distribute non-consensual explicit images of a real person, which is a direct violation of privacy and personal rights, thus constituting an AI Incident under the framework. The harm is realized as the images have been widely disseminated, causing reputational and emotional harm. The article also mentions ongoing legal and societal responses, but these are complementary to the main incident of harm. Therefore, the classification is AI Incident.
Thumbnail Image

Taylor Swift deepfake explicit photos go viral online; White House urges legislation | Entertainment | CNA

2024-01-27
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of Taylor Swift being widely circulated, causing harm through non-consensual explicit content and online harassment. The AI system's use in generating these images directly leads to violations of rights and harm to the individual and community. The removal of content by platforms and calls for legislation further confirm the recognition of harm. Therefore, this qualifies as an AI Incident due to realized harm linked to AI system use.
Thumbnail Image

Taylor Swift "deepfake" explicit photos go viral; X moves to block search results - International - Liberty Times Net

2024-01-29
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems synthesizing realistic but fake content. The spread of these images causes harm to the individual's privacy and reputation, which falls under violations of human rights and harm to communities. The platform's intervention to block search results is a response to an ongoing AI Incident. The harm is realized, not just potential, as the deepfake images are already circulating. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Taking down the fake nude photos! Taylor Swift's die-hard fans form an online army to protect her | UDN

2024-01-27
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake non-consensual nude images, which directly harms the individuals depicted by violating their rights and privacy. The harm is realized as the fake images are widely viewed and circulated. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content violating human rights and causing reputational and emotional harm. The article also discusses societal and governance responses, but the primary focus is on the realized harm from AI misuse.
Thumbnail Image

AI deepfake porn runs rampant... 9 states ban synthetic images; still no federal legislation | UDN

2024-01-27
UDN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images and audio that have directly led to harm to individuals, including privacy violations, emotional distress, and potential breaches of rights. The AI-generated content is non-consensual and sexually explicit, affecting women and children, which constitutes harm to persons and communities. The article reports actual occurrences of these harms, qualifying it as an AI Incident. The discussion of legal and platform responses is complementary but does not overshadow the primary focus on realized harm from AI misuse.
Thumbnail Image

Fake Taylor Swift explicit images go viral; X temporarily blocks user searches | UDN Star News

2024-01-29
聯合新聞網 udn.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) of Taylor Swift, which have been widely shared on a social media platform, causing reputational and privacy harm. This constitutes a violation of rights and harm to communities as defined in the framework. The platform's measures to block searches and suspend accounts are responses to an ongoing AI Incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs and their dissemination.
Thumbnail Image

Sensational photos spread across the internet; Taylor Swift has had enough of AI and wants to sue! | udnSTYLE

2024-01-27
聯合新聞網 udn.com
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated explicit images of Taylor Swift being spread online, which directly harms her reputation and privacy rights. The AI system's use in creating these manipulated images is central to the harm. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the individual. The ongoing circulation of such images and the legal considerations further confirm the realized harm caused by AI misuse.
Thumbnail Image

Taylor Swift isn't the only victim! Deepfake porn runs rampant; only 9 US states have banned synthetic images | International | NOWnews

2024-01-29
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake creation) that have directly caused harm by producing and distributing non-consensual synthetic sexual images, violating individuals' rights and causing social harm. The article details actual incidents of harm (e.g., Taylor Swift and other victims) and ongoing legislative efforts to address these harms. Therefore, this qualifies as an AI Incident due to realized harm linked to AI-generated content violating rights and causing social harm.
Thumbnail Image

White House alarmed! Taylor Swift targeted with deepfake explicit photos; X blocks search results to stem the damage | International | NOWnews

2024-01-29
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically generative AI used to create deepfake images. The use of these AI-generated images has directly led to harm in the form of reputational damage, emotional distress, and violation of privacy rights of Taylor Swift. The widespread dissemination of these images on social media platforms constitutes an AI Incident as it has caused realized harm. The platform's response and the White House's call for legislation are complementary information but do not negate the fact that harm has occurred. Therefore, this event is classified as an AI Incident.
Thumbnail Image

Taylor Swift "AI face-swapped" into explicit photos; White House angered: Congress should amend the law | Entertainment | NOWnews

2024-01-28
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (deepfake technology) being used to create and disseminate non-consensual explicit images of Taylor Swift, which constitutes a violation of personal rights and causes harm to the individual. The harm is realized as the images have been widely viewed and caused distress. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harassment). The involvement of the White House and calls for legal reform further confirm the significance of the harm caused.
Thumbnail Image

After AI explicit photos spread, X temporarily blocked keyword searches for Taylor Swift - The Epoch Times

2024-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated explicit images (deepfake or synthetic media) of a real person, which constitutes a violation of personal rights and dignity, a breach of applicable laws protecting individual rights. The AI system's use in creating and disseminating these images directly led to harm. The platform's temporary blocking of keyword searches and content removal efforts are responses to this harm. The involvement of AI in generating the harmful content and the resulting violation of rights and harm to the community meet the criteria for an AI Incident.
Thumbnail Image

AI explicit photos of Taylor Swift circulate online; White House, Microsoft, and actors' union condemn them - The Epoch Times

2024-01-28
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake explicit images (deepfakes) of Taylor Swift, which have been widely circulated online, causing harm to her and raising social concerns. The AI system's use in generating these images directly led to violations of personal rights and harm to communities through misinformation and harassment. The involvement of AI in creating harmful content and the resulting public and institutional responses confirm this as an AI Incident under the framework, as the harm is realized and directly linked to AI misuse.
Thumbnail Image

Fake nude photos of Taylor Swift go viral; fans furious as White House urges legislation

2024-01-27
RFI
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems. The spread of these non-consensual explicit images constitutes a violation of rights and causes harm to the individual and potentially to communities through harassment and toxic content proliferation. The harm is realized as the images were widely viewed before removal, and the incident has led to calls for legal and policy responses. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
Thumbnail Image

Furious Taylor Swift targeted by AI deepfake face-swap; explicit images go viral as she considers suing porn site

2024-01-26
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI deepfake technology used to create and distribute non-consensual explicit images of a person, causing harm to her rights and dignity. The harm is realized as the images are widely spread, and the victim is taking legal steps to address the issue. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual.
Thumbnail Image

Fake Taylor Swift explicit images go viral; X temporarily blocks user searches | Entertainment | CNA

2024-01-29
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) of Taylor Swift, which have been widely spread on a social media platform, causing reputational and privacy harm. This constitutes a violation of rights and harm to the individual and community by spreading false and harmful content. The platform's temporary blocking of searches and account suspensions are responses to this harm. Therefore, this qualifies as an AI Incident because the AI system's use (creation and dissemination of fake images) has directly led to harm.
Thumbnail Image

Taylor Swift "deepfake explicit photos" go viral; White House urges legislation (photo)

2024-01-27
看中国
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake images, which qualifies as an AI system's use. The harm includes violations of personal rights (non-consensual explicit images), emotional and reputational harm to the individual, and broader societal harm through harassment and toxic content spread. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The article also notes the platform's response and calls for legislation, but the primary focus is on the realized harm caused by the AI-generated content.
Thumbnail Image

Taylor Swift "deepfake explicit photos" go viral; White House urges legislation (photo)

2024-01-27
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of Taylor Swift being widely disseminated, causing harm through harassment and violation of rights. The AI system's use in generating these images directly led to this harm. The harm includes violation of privacy and potential psychological harm to the individual, as well as societal harm from the spread of harmful content. The involvement of AI in generating the deepfake images and the resulting harm meets the criteria for an AI Incident under the OECD framework.
Thumbnail Image

Fake AI nude photos of Taylor Swift go viral online; alarmed White House calls for legislation | yam News

2024-01-27
蕃新聞
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI deepfake technology to create and disseminate fake explicit images of Taylor Swift, which is a direct violation of her rights and causes harm to her and the community. The AI system's role in generating these images is pivotal to the harm. The harm is realized, not just potential, as the images are actively spreading and causing distress. The White House's response further confirms the seriousness of the incident. Hence, this event meets the criteria for an AI Incident due to direct harm caused by AI-generated content.
Thumbnail Image

Taylor Swift becomes a victim of AI deepfake images; White House urges social platforms to police content strictly

2024-01-27
公共電視
Why's our monitor labelling this an incident or hazard?
The event involves AI deepfake technology explicitly used to generate non-consensual explicit images of a public figure, which have been widely disseminated online. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The harm is realized, not just potential, as the images have been viewed millions of times and caused public outcry. Therefore, this is classified as an AI Incident.
Thumbnail Image

Taylor Swift deepfake explicit photos alarm the White House; US AI legislation back in the spotlight

2024-01-30
公共電視
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative AI/deepfake technology) used to create harmful synthetic explicit images without consent, which constitutes a violation of rights and harm to the individual and community. The harm is realized as the images have been viewed millions of times and caused significant distress and reputational damage. The slow platform response and ongoing legislative discussions are complementary but do not negate the fact that an AI Incident has occurred. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

Fake indecent images of Taylor Swift go viral; X temporarily blocks user searches

2024-01-29
三立新聞
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create fake explicit images of a public figure, which have been widely disseminated on a social media platform, causing harm to the individual's reputation and privacy. The platform's intervention to block searches indicates recognition of the harm caused. The AI-generated fake images directly lead to harm (violation of rights and harm to communities). Therefore, this event qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Taylor Swift searches banned on Twitter/X

2024-01-29
Gamereactor China
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems to generate and spread fake explicit images of a person without consent, which constitutes a violation of rights and harm to the individual and community. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The platform's action to ban searches is a mitigation response but does not negate the incident itself.

AI face-swapped indecent photos of Taylor Swift go viral; X blocks user searches amid criticism of its slow response

2024-01-29
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images (AI System involvement). The use of these AI-generated images has directly led to harm in the form of violations of privacy and non-consensual explicit content distribution, which breaches fundamental rights (human rights violations). The harm is realized as the images have been widely disseminated and viewed millions of times, causing reputational and emotional harm to the individual and distress to the community. The platform's slow response further contributed to the harm. Hence, this is an AI Incident as per the definitions provided.

Taylor Swift couldn't, but you can! Experts show how to protect yourself from AI face-swapped indecent photos

2024-01-27
Nextapple
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI face-swapping technology used to create non-consensual explicit images and videos, which constitutes a violation of personal rights and privacy, a form of harm to individuals. The harm is realized as victims have already been affected, including a specific case of a high school student whose face was used without consent. The AI system's use directly leads to these harms, fitting the definition of an AI Incident. The article also discusses the ongoing spread of such content and the difficulty in removing it, reinforcing the presence of actual harm rather than just potential risk.

Chiefs charge into the Super Bowl! Taylor Swift rushes onto the field to kiss her star boyfriend

2024-01-30
台視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of generative AI (deepfake technology) to create and spread non-consensual explicit images of a public figure, Taylor Swift. This constitutes a violation of personal rights and causes harm to the individual and the community by spreading harmful misinformation and damaging reputations. The platform's response to block searches indicates recognition of the harm caused. Since the harm is occurring and the AI system's role is pivotal in generating the harmful content, this qualifies as an AI Incident under the framework.

Taylor Swift hit by AI face-swapping! Fake nude photos go viral online; White House outraged

2024-01-27
台視新聞網
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to create manipulated explicit images without consent, which constitutes a violation of personal rights and causes harm to the individual and community. The widespread dissemination of such content on social media platforms has led to reputational and emotional harm, qualifying as harm to communities and violations of rights under the AI Incident definition. The involvement of the AI system in generating and spreading harmful content directly led to these harms, making this an AI Incident rather than a hazard or complementary information.

Taylor Swift "deepfake" indecent photos go viral online; White House outraged: Congress should push for legal reform

2024-01-27
三立新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems used maliciously to create non-consensual explicit content. The harm is realized as the images have been widely viewed and caused distress, constituting violations of personal rights and harassment. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities). The article's focus is on the harm caused and the societal response, not merely on the existence or potential of such technology, so it is not a hazard or complementary information. Therefore, the classification is AI Incident.

Pornographic deepfakes of Taylor Swift flood social media

2024-01-27
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems (Stable Diffusion) are used to create deepfake pornographic images of a real person without consent, which are then widely disseminated on social media. This constitutes a violation of rights and harm to the individual and communities. The harm is realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

White House speaks out on pornographic deepfakes of Taylor Swift

2024-01-27
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images created using Stable Diffusion, an AI system. These images are non-consensual and sexually explicit, constituting a violation of rights and causing harm to the individual and community. The harm is realized as the images have been widely viewed and spread, despite platform efforts to remove them. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and their dissemination.

Pornographic deepfakes: women's nightmare amid the AI boom

2024-01-29
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake pornographic images, which have been disseminated and caused harm to individuals, including emotional distress and violation of privacy rights. This constitutes a violation of human rights and harm to communities, fitting the definition of an AI Incident. The harms are realized and ongoing, not merely potential. The article also references multiple real cases and the societal impact, confirming the direct link between AI use and harm.

White House concerned over pornographic deepfakes of Taylor Swift

2024-01-27
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images created by the Stable Diffusion model, which have been widely spread and caused harm by violating Taylor Swift's rights and causing social harm. The White House's concern and legislative efforts further confirm the recognition of harm caused by AI misuse. The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident.

Social network X blocks searches for Taylor Swift

2024-01-28
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread sharing of AI-generated deepfake pornographic images of Taylor Swift, which is a direct violation of her rights and causes reputational and personal harm. The AI system (Stable Diffusion) was used to generate these images, and the social media platform's actions to block search terms and suspend accounts confirm the harm has occurred. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and communities.

After Taylor Swift's pornographic deepfake images, the White House calls for changes to U.S. law

2024-01-27
laodong.vn
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread sharing of AI-generated deepfake pornographic images of a public figure without consent, which constitutes a violation of personal rights and causes reputational and psychological harm. The AI system (Stable Diffusion) is explicitly mentioned as the tool used to generate these images. The harm is realized and ongoing, as the images have been viewed millions of times and continue to spread despite platform efforts to remove them. The involvement of the White House and calls for legislative action further confirm the significance of the harm. Thus, this meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to the individual and community.

Social network X blocks searches for Taylor Swift

2024-01-29
laodong.vn
Why's our monitor labelling this an incident or hazard?
The presence of AI is reasonably inferred from the mention of deepfake videos, which are AI-generated synthetic media. The harm involved includes violations of rights (non-consensual explicit content) and harm to communities (spread of harmful misinformation and content). However, the article does not describe a specific AI Incident causing direct or indirect harm but rather the platform's mitigation efforts and policy responses. This aligns with the definition of Complementary Information, as it updates on responses to an AI-related harm issue rather than reporting a new incident or hazard.

Pornographic images of Taylor Swift spread online

2024-01-26
Kenh14.vn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create non-consensual pornographic images of a real person, Taylor Swift. This use of AI has directly led to harm in terms of violation of personal rights and dignity, which falls under violations of human rights and breach of obligations protecting fundamental rights. The harm is realized, not just potential, as the images have been shared and caused distress. Therefore, this qualifies as an AI Incident. The article also discusses legal and policy responses, but these are complementary to the main incident of harm caused by the AI-generated images.

What did Microsoft's CEO say about Taylor Swift's pornographic deepfake images?

2024-01-29
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread dissemination of AI-generated deepfake pornographic images of a public figure without consent, which constitutes a violation of rights and harm to the individual and community. The AI system's role in generating these images is explicit and central to the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to communities).

X locks searches for Taylor Swift after doctored pornographic images of the singer spread

2024-01-29
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to generate fake pornographic images (deepfakes) of Taylor Swift, which have been widely disseminated, causing harm to her dignity and reputation. This is a direct violation of rights and a clear harm caused by the AI system's outputs. The platform's response and government calls for regulation further confirm the seriousness and realized nature of the harm. Hence, this is an AI Incident as the AI system's use has directly led to harm.

Explicit deepfake images of Taylor Swift spread at dizzying speed; White House concerned

2024-01-27
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The event describes the creation and widespread sharing of deepfake images generated by AI, which directly harms Taylor Swift's reputation and privacy, constituting a violation of rights. The use of AI to produce non-consensual explicit content is a clear example of harm caused by AI systems. The article also mentions societal and governance responses, but the primary focus is on the realized harm from the AI-generated deepfake images, qualifying this as an AI Incident rather than a hazard or complementary information.

Fake suggestive images of Taylor Swift spread on social media

2024-01-27
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate manipulated images and videos (deepfakes) of Taylor Swift and others, which are shared widely on social media. The harms are realized and ongoing, including violations of personal rights, psychological harm, and reputational damage. The AI's role is pivotal as the content would not exist without AI generation. The article also discusses the societal and legal challenges in addressing these harms, but the primary focus is on the harm caused by the AI-generated content. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

X blocks Taylor Swift searches over a flood of AI deepfake images

2024-01-29
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Microsoft Designer) to create high-quality deepfake images of a public figure, which have been widely disseminated on X, causing reputational and privacy harm. The platform's decision to block searches is a direct response to this harm. The involvement of AI in generating harmful content that affects individuals' rights and community trust fits the definition of an AI Incident. The ongoing investigation and mitigation efforts do not change the fact that harm has occurred.

Why social network X blocked users from searching the keyword 'Taylor Swift'

2024-01-29
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images (deepfakes) of Taylor Swift being widely disseminated on X, causing harm through misinformation and reputational damage. The platform's response to block searches indicates recognition of the harm caused. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and individuals by spreading false and harmful content. The harm is realized, not just potential, and involves violation of rights and harm to community trust and safety.

White House speaks out on the pornographic images of Taylor Swift

2024-01-27
Báo điện tử Tiền Phong
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake pornographic images without consent, which constitutes a violation of individual rights and causes harm to the person depicted (Taylor Swift). This harm is realized and ongoing, as the images are being spread and causing reputational and emotional damage. The involvement of AI in creating these images and the resulting harm fits the definition of an AI Incident, as the AI system's use has directly led to violations of rights and harm to the community (fans and public perception). The article also discusses responses and proposed legislation, but the primary focus is on the harm caused by the AI-generated deepfakes.

Pornographic images of Taylor Swift spread online

2024-01-26
Báo điện tử Tiền Phong
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create non-consensual pornographic images of a real person, Taylor Swift. This use of AI has directly led to harm, including violation of personal rights, reputational damage, and emotional distress, which fall under violations of human rights and harm to communities. The article describes the harm as occurring, not just potential, and mentions ongoing legal considerations and societal reactions. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

U.S. Congress urged to speak up to protect Taylor Swift from deepfake abuse

2024-01-27
danviet.vn
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic fake images and videos. The article describes the actual occurrence of deepfake nude images of Taylor Swift being spread online, which constitutes harm to her privacy and reputation, a violation of rights. This is a direct harm caused by the use of an AI system. Therefore, this event meets the criteria for an AI Incident.

Fact or fiction: was a "hot" clip of singer Taylor Swift leaked?

2024-01-26
danviet.vn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate deepfake videos that impersonate a real person in a harmful and non-consensual manner. This constitutes a violation of rights and harm to the individual and community, fitting the definition of an AI Incident. The harm is realized as the videos have been widely viewed and shared, causing reputational and psychological damage. The article also discusses the lack of effective platform responses and legal frameworks, reinforcing the ongoing nature of the harm.

Taylor Swift becomes AI's next victim

2024-01-26
Đọc báo tin tức, tin mới Ngày nay Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images of a public figure being widely spread on social media, causing misleading impressions and potential reputational harm. The AI system's outputs (manipulated images) have directly led to harm by spreading misinformation and causing confusion among the public. The context of an election year and concerns about misinformation campaigns further underline the societal harm dimension. The reliance on automated content moderation systems that failed to prevent the spread also indicates malfunction or insufficient use of AI in content control. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm to communities and individuals.

The menace of pornography made with AI technology

2024-01-30
Đọc báo tin tức, tin mới Ngày nay Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic pornographic deepfake images, which have been actively shared and caused harm to individuals' rights and dignity. This constitutes a violation of human rights and harm to communities, fulfilling the criteria for an AI Incident. The AI system's use directly leads to harm through non-consensual, harmful content creation and distribution. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Explicit photos of Taylor Swift: X blocks searches

2024-01-28
Tgcom24
Why's our monitor labelling this an incident or hazard?
AI systems were used to create false nude images of a public figure, which constitutes a violation of personal rights and can cause harm to the individual and communities by spreading misinformation. The platform's response to block searches indicates recognition of the harm caused. Since the AI-generated images have already appeared and caused harm, this qualifies as an AI Incident due to violation of rights and harm to communities through misinformation dissemination.

Taylor Swift: X blocks searches after fake nude images created with artificial intelligence

2024-01-28
informazione interno
Why's our monitor labelling this an incident or hazard?
The creation and viral spread of AI-generated deepfake images constitute a violation of personal rights and can cause significant harm to the individual depicted. Since the AI system's use directly led to the dissemination of harmful content, this qualifies as an AI Incident under the definition of violations of human rights or breach of obligations intended to protect fundamental rights. The platform's blocking of searches is a response to the incident but does not negate the occurrence of harm.

Deepfake? Fake you! Explicit photos of Taylor Swift, created with artificial intelligence, stir...

2024-01-29
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create fake explicit images of a public figure, which have been widely viewed and shared, causing harm to the individual's rights and reputation. This constitutes a violation of personal rights and harm to communities through misinformation and non-consensual content dissemination. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The legislative and social responses described are complementary information but do not change the primary classification.

Taylor Swift: X blocks searches after fake nude images created with artificial intelligence

2024-01-28
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate deepfake images of Taylor Swift, which have been widely disseminated, causing harm to her personal rights and reputation. The AI system's role is pivotal in creating these false images, constituting a violation of rights and harm to the community. The harm is realized, not just potential, as the images have gone viral and caused distress. The platform's moderation efforts are responses to the incident, not the incident itself. Hence, this is classified as an AI Incident.

X suspends searches for the term "Taylor Swift": censorship after the spread of deepfakes

2024-01-29
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The creation and viral spread of AI-generated deep fake pornographic images constitute a direct violation of personal rights and cause harm to the individual and the community. The AI system's role in generating these images is central to the incident, and the widespread dissemination on a major social media platform led to realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and their impact on human rights and community harm.

The orange festival at Veneziano-Novelli (PHOTOS)

2024-01-27
informazione interno
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that are sexually explicit and offensive, which have been widely disseminated on social media. This constitutes a violation of human rights and personal dignity, fulfilling the criteria for harm under the AI Incident definition (specifically, violations of human rights or breach of obligations protecting fundamental rights). The harm is realized and ongoing, as the images have circulated widely and caused indignation and legal considerations. Therefore, this is classified as an AI Incident.

Fake nude photos of Taylor Swift: X blocks searches (Cybersecurity)

2024-01-29
ANSA.it
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated false nude images of Taylor Swift that have been widely viewed and spread, causing reputational and privacy harm. The AI system's role in generating these deepfakes is explicit, and the harm (violation of rights, reputational damage, and disinformation) is occurring. The platform's censorship response confirms the harm's materialization. Hence, this is an AI Incident involving direct harm caused by AI-generated content.

Fake nude photos of Taylor Swift: X blocks searches (People)

2024-01-28
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that have been widely shared, causing harm to Taylor Swift's privacy and reputation, which is a violation of rights. The harm is realized as the images have been viewed millions of times and have caused distress. The platform's response to block searches is a mitigation measure but does not negate the occurrence of harm. The involvement of AI in generating the false images and the resulting harm to the individual and community fits the definition of an AI Incident.

Taylor Swift's fake explicit photos go viral on X: "X-rated shots created with artificial intelligence"

2024-01-27
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake explicit images of Taylor Swift being circulated, which constitutes a violation of rights and harm to the individual's reputation and privacy. The AI system's use in creating these images is central to the harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community reputation). The event is not merely a potential risk or a complementary update but an actual incident of harm caused by AI-generated content.

AI-generated X-rated photos of Taylor Swift on social media: the "deepfake" drift

2024-01-26
Gazzetta del Sud
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the false pornographic images were almost certainly generated using AI technology (deepfake generation). The images have been widely shared, damaging Taylor Swift's image and causing her distress, which qualifies as a violation of rights and harm to the individual. The AI system's use in creating and enabling the spread of these images is directly linked to the harm. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI-generated content violating rights and causing reputational and emotional damage.

Porn photos of Taylor Swift land on social media and multiply, but they aren't of her: "90% of the blame lies with artificial intelligence apps"

2024-01-26
lastampa.it
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated images (deepfakes) of Taylor Swift have been widely circulated on social media, causing harm to her reputation and violating her rights. The AI system's use directly led to the harm (violation of rights and reputational damage). The involvement of AI in generating these images is clear, and the harm is actual and ongoing, not merely potential. Hence, this is an AI Incident under the framework, as it involves realized harm caused by AI-generated content violating rights and causing community harm.

Fake X-rated photos of Taylor Swift online spark fans' fury

2024-01-26
Gazzetta di Mantova
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images that have been widely shared, causing harm to Taylor Swift's reputation and emotional state, which constitutes a violation of rights and harm to communities. The AI system's use in generating and spreading these images is directly linked to the harm. Therefore, this qualifies as an AI Incident. The mention of legal and societal responses is complementary but does not change the primary classification.

X has blocked "Taylor Swift" from searches after the spread of fake sexually explicit photos of the singer

2024-01-29
Il Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images that are sexually explicit and non-consensual, which constitutes a violation of rights and harm to the individual (Taylor Swift) and potentially to the community by spreading harmful misinformation. The AI system's use in creating and disseminating these images directly caused this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to a violation of rights and harm.

Taylor Swift X-rated: fans in revolt over the fake photos, here's what happened

2024-01-27
il Giornale.it
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images that have been widely disseminated, causing harm to Taylor Swift and potentially to the community by spreading non-consensual explicit content. The harm includes violation of privacy and reputational damage, which fall under violations of rights and harm to communities. The AI system's use in creating and spreading these images is a direct cause of the harm. The involvement of AI in the creation of these images and their viral spread meets the criteria for an AI Incident rather than a hazard or complementary information.

Here's why you can no longer search for Taylor Swift on X

2024-01-28
il Giornale.it
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate explicit deepfake images of a real person, which have been widely disseminated causing reputational and privacy harm. This constitutes a violation of rights and harm to the individual and community through misinformation and non-consensual explicit content. The AI system's use directly led to this harm, qualifying this as an AI Incident under the definitions provided. The platform's blocking of search terms is a mitigation response but does not negate the incident itself.

Taylor Swift an AI victim (and not the only one): the deepfake nude photos

2024-01-26
DiLei
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that have been widely disseminated, causing reputational and privacy harm to Taylor Swift. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The AI system's use directly led to the harm through the creation and spread of manipulated explicit content. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Taylor Swift nude via artificial intelligence: photos go viral on the web, X blocks searches

2024-01-29
QuotidianoNet
Why's our monitor labelling this an incident or hazard?
AI systems were used to create and spread false nude images of a real person without consent, directly harming the individual's rights and privacy. This misleading and harmful AI-generated content meets the criteria for an AI Incident due to the violation of rights and harm to the individual. The censorship and monitoring actions are responses to this incident, not the primary event.

Deepfakes on X: Taylor Swift images spark dismay and controversy

2024-01-25
Adnkronos
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake images, which are explicit and non-consensual, thus violating human rights and platform rules. The widespread dissemination of these images caused harm to the individual depicted and to the community by spreading manipulated and harmful content. The platform's delayed response and ongoing presence of such content further underline the harm caused. Therefore, this qualifies as an AI Incident due to realized harm linked directly to the use of AI-generated content.

You can no longer search for Taylor Swift on X

2024-01-29
Wired
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating deepfake images, which are false and harmful content targeting a specific individual. This constitutes a violation of rights and harm to the individual and community by spreading misleading and explicit content. Since the harm is occurring and the AI system's use directly leads to this harm, this qualifies as an AI Incident under the definitions provided.

Taylor Swift, fake nude images go viral: X blocks searches / Singer falls victim to artificial intelligence

2024-01-29
IlSussidiario.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are false and sexually explicit, causing harm to Taylor Swift's reputation and privacy. The spread of these images on social media platforms constitutes a violation of rights and harm to the individual and community. The platform's response to block searches indicates recognition of the harm caused. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework (violation of rights and harm to communities).

Deepfake: explicit images of Taylor Swift spread via X

2024-01-26
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images, which have been widely distributed, causing harm to the individual depicted (Taylor Swift) through privacy violations and reputational damage. The harm is realized, not just potential, as the images have been viewed millions of times and are still circulating. The AI system's use in creating and spreading harmful content fits the definition of an AI Incident, as it leads to violations of rights and harm to communities. The article also mentions the challenges in content moderation and the likelihood of legal actions, reinforcing the seriousness of the incident.

X blocks searches for Taylor Swift after fake porn images go viral. But the ban can be circumvented

2024-01-29
Open
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake pornographic images of Taylor Swift, which have been widely disseminated, causing harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to community through disinformation and harmful content). The platform's blocking of searches is a response to this harm but does not negate the incident itself. Therefore, this event is classified as an AI Incident.

AI-generated Taylor Swift porn flooded X. Here's how her fans defended her

2024-01-27
Open
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative systems to create deepfake pornographic images without consent, which is a clear violation of personal rights and can be classified as harm to individuals and communities. The AI system's role is pivotal in producing and spreading these harmful images. Since the harm is realized and ongoing, this qualifies as an AI Incident rather than a hazard or complementary information. The article also mentions legislative responses, but the main focus is on the incident of harm caused by AI-generated content.

Taylor Swift fans are furious with Musk over the (fake) porn photos of their star

2024-01-27
Il Foglio
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute deepfake images, which are false and harmful representations of a real person. The harm includes reputational damage and violation of personal rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Since the AI-generated content is actively spreading and causing harm, this qualifies as an AI Incident rather than a mere hazard or complementary information.

X blocks searches for the Taylor Swift deepfakes

2024-01-28
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (DALL-E 3 for image generation and ElevenLabs for voice synthesis) used to create non-consensual explicit deepfake images and fake political robocalls. These AI-generated contents have directly led to harm: violation of privacy and rights of Taylor Swift (non-consensual nudity), and manipulation of public opinion and electoral processes (harm to communities). The platform's response and calls for legislation further confirm the recognition of harm caused by AI misuse. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI-generated content.

Taylor Swift and the fake porn with 27 million views: AI's new business

2024-01-29
Affari Italiani
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create realistic fake pornographic images of a public figure, which have been widely disseminated and monetized. The harms include violation of privacy and personal rights, reputational damage, and exploitation through a business model leveraging AI-generated fake content. These harms are realized and ongoing, not merely potential. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the use of AI systems in generating and distributing false and harmful content.

Taylor Swift: AI-generated porn photos alarm Microsoft

2024-01-29
informazione interno
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and spread non-consensual pornographic images of a public figure, which constitutes a violation of rights and harm to the individual's reputation and privacy. This harm is realized and ongoing, as the images have been widely disseminated and caused significant concern. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and their impact on human rights and communities.

Taylor Swift, victim of artificial intelligence: fake explicit photos on the web. The case reaches the White House - Secolo d'Italia

2024-01-27
Secolo d'Italia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to create deepfake images that falsely depict Taylor Swift in sexually explicit scenes. These images have been widely shared, causing harm to her reputation and violating her rights, which fits the definition of an AI Incident involving violations of human rights and harm to communities. The AI system's use in generating and spreading these images is directly linked to the harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Deepfakes and AI: dozens of fake Taylor Swift photos flood the web. And now it's a White House problem

2024-01-28
Rolling Stone Italia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate false, pornographic deepfake images of Taylor Swift and others without consent, which is a direct violation of privacy and autonomy rights. This harm is realized and ongoing, as the images have circulated widely on social media, causing reputational and emotional damage. The involvement of AI in creating these images is clear, and the harm caused fits within the definition of violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

The AI-generated Taylor Swift deepfakes are "alarming and terrible," says Microsoft's CEO: "We must act"

2024-01-29
Cinefilos.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images causing harm by spreading explicit and false content about a person, which is a violation of rights and harms communities. The AI system's use directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities and violations of rights.

Deepfakes on X: Taylor Swift images spark dismay and controversy - Periodico Daily

2024-01-25
Periodico Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems (image generators like Microsoft Designer). The use of these AI systems has directly led to harm in the form of violations of personal rights and the spread of non-consensual explicit content, which is a breach of fundamental rights and platform policies. The widespread dissemination and delayed removal of such content caused harm to the individual and the community, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident.

Taylor Swift and the explicit photos created with AI: the White House also steps in | Rumors.it

2024-01-27
Rumors.it
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake explicit images of a public figure being widely circulated, causing harm to her privacy and autonomy. The AI system's use in creating these images is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of the White House and the union's response further confirm the recognition of harm caused by AI misuse in this context.

After the AI nude photo scandal, X blocks the search "Taylor Swift" | Rumors.it

2024-01-29
Rumors.it
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake images that have been widely disseminated, causing harm to Taylor Swift's privacy and rights. The harm is realized, not just potential, as millions viewed the images and the platform took action to block related searches. This fits the definition of an AI Incident because the AI system's use directly led to a violation of human rights (privacy and consent). The event is not merely a hazard or complementary information, but a clear incident of harm caused by AI misuse.

Taylor Swift's fake nudes shake the White House: "Protect stars from AI" - Notizie italiane in tempo reale!

2024-01-29
Notizie italiane in tempo reale!
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce and spread non-consensual, fake explicit images of a celebrity, which constitutes a violation of personal rights and causes harm to the individual and community by spreading offensive and misleading content. The harm is realized and ongoing, as millions have viewed these images. Therefore, this qualifies as an AI Incident due to violation of rights and harm to community reputation and dignity. The involvement of AI is explicit and central to the harm described.

Fake explicit photos of Taylor Swift flood the internet: fans' fury

2024-01-26
Tgcom24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake images and fake robocalls, which have directly caused harm by spreading misinformation and non-consensual explicit content. This fits the definition of an AI Incident as it leads to harm to communities and violations of rights. The harm is realized, not just potential, as the article states these deepfakes are actively circulating and causing concern. Therefore, the classification is AI Incident.

US: X blocks "Taylor Swift" searches after spread of nude images

2024-01-28
Askanews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images (deepfakes) of a public figure, which have circulated online causing reputational and privacy harm. The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident. The platform's blocking of searches is a response to the realized harm. The harm is not just potential but ongoing, and the AI system's role is pivotal in creating the false content. Hence, the classification is AI Incident.

Searches for Taylor Swift disabled on "X" (formerly Twitter) following spread of fake images (January 29, 2024) | BIGLOBEニュース

2024-01-29
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to create fake images that have been widely disseminated, causing harm to the individual depicted and potentially to the community through misinformation and reputational damage. The platform's response to restrict search functionality indicates recognition of the harm caused. This fits the definition of an AI Incident because the AI system's use directly led to harm (reputational and possibly psychological harm) and violation of rights (privacy and dignity).

Fake images of Swift spread | Saitama Shimbun | Saitama's latest news, sports, and local topics

2024-01-27
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create fake images that are spreading widely on social media. This dissemination of harmful, sexually explicit AI-generated content constitutes harm to the individual and communities by violating privacy and potentially causing reputational damage. The involvement of AI in generating and spreading this content directly leads to harm, qualifying this as an AI Incident. The mention of calls for legal measures supports the recognition of realized harm rather than just potential risk.

#TaylorSwift returns to X search: Are the "Taylor Swift fake images" a generative AI problem? (Toshiaki Kanda) - Expert - Yahoo!ニュース

2024-01-30
Yahoo!ニュース
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating fake images) and their misuse leading to reputational and rights-related harm to a public figure, which fits the definition of an AI Incident. However, the article does not report a specific new incident with direct harm but rather discusses the broader issue, platform responses, and governance challenges. Since the article mainly provides analysis and commentary on an ongoing issue and responses rather than reporting a concrete new incident or hazard, it is best classified as Complementary Information. It enhances understanding of AI misuse and societal responses without describing a distinct AI Incident or AI Hazard event.

X lifts "Taylor Swift" search suspension, will monitor fake images - WSJ; by Reuters

2024-01-30
Investing.com 日本
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated fake images (deepfakes) which caused harm by spreading sexualized false content about an individual, leading to a temporary search ban. The platform's monitoring and removal efforts relate to the use and mitigation of AI-generated harmful content. Since the harm (spread of fake sexual images) has already occurred and the AI system's outputs directly led to reputational and personal harm, this qualifies as an AI Incident. The article reports on the lifting of the search ban and ongoing monitoring, but the core issue is the prior harm caused by AI-generated fake images.

X temporarily suspends "Taylor Swift" searches over spread of fake images; by Reuters

2024-01-29
Investing.com 日本
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the fake images were possibly created by AI and were widely disseminated on the social media platform X (formerly Twitter). The spread of such AI-generated fake content has caused harm to the individual (Taylor Swift) and the community by spreading misinformation and violating personal rights. The platform's response to suspend search indicates recognition of the harm caused. Therefore, this event meets the criteria for an AI Incident as the AI system's use directly led to harm.

AI-made fake images of Swift, including sexual images, spread on X

2024-01-26
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images, including sexual content, being spread on a social media platform, causing harm to the individual depicted (Taylor Swift) and potentially to communities by spreading non-consensual and harmful content. The AI system's use in generating these images directly leads to violations of rights and harm, fitting the definition of an AI Incident. The platform's mitigation actions are responses to the incident, not the main focus, so this is not merely Complementary Information.

US: Fake images of Taylor Swift spread as social media companies rush to delete them | NHK

2024-01-29
NHKオンライン
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI creating deepfake images and videos). The use of these AI systems has directly led to the dissemination of harmful fake content, including non-consensual sexual images, which constitutes a violation of rights and harm to the individual and community. The harm is realized, not just potential, as the images were viewed millions of times and caused public concern. The social media platforms' removal efforts and government statements are responses to this incident. Therefore, this qualifies as an AI Incident under the framework definitions.

US federal government voices concern over spread of fake AI images of Taylor Swift on X; Microsoft strengthens "Designer" filters

2024-01-30
ITmedia
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Microsoft Designer) used to generate harmful fake images, which were widely disseminated causing harm to the individual depicted and potentially to the community by spreading misinformation and violating rights. The harm has already occurred, making this an AI Incident. The government's concern and Microsoft's response are complementary information but do not change the classification. Therefore, this is an AI Incident due to the direct harm caused by the AI-generated non-consensual images.

Fake images of Taylor Swift spread, likely generative AI; US administration concerned

2024-01-27
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create fake images that are sexually explicit and widely spread on social media, causing harm to the individual depicted and raising societal concerns. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI in generating harmful content that is actively disseminated and causing harm is clear and direct.

X suspends "Swift" searches as a temporary measure after rapid spread of fake porn

2024-01-30
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake pornographic images without consent, which have been widely disseminated causing harm to the individual and potentially to communities. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The AI system's use directly led to the harm through the creation and spread of non-consensual deepfake content. The platform's mitigation measures and calls for regulation further confirm the seriousness of the incident.

Fake AI images of Taylor Swift spread, stirring controversy; viewed more than 45 million times in 17 hours - Hollywood: Nikkan Sports

2024-01-28
nikkansports.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images that are sexually explicit and falsely depict a real person, causing harm through violation of rights and reputational damage. The AI system's use in creating and spreading these images directly led to harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The large scale of dissemination and public concern further supports this classification.

"Taylor Swift" becomes unsearchable on X (formerly Twitter) to prevent the spread of deepfake porn

2024-01-28
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake pornographic images, which have been widely disseminated on X, causing harm to the individual (Taylor Swift) and potentially to other victims. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The platform's inability to fully contain the spread and the official responses further confirm the materialized harm. Therefore, this is not merely a potential hazard or complementary information but a realized AI Incident involving direct harm caused by AI-generated content.

Claims that the "Taylor Swift deepfake porn" spreading online was generated with Microsoft's generative AI tool "Microsoft Designer"

2024-01-30
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Microsoft Designer) used to generate deepfake pornographic images, which constitutes a violation of human rights, specifically the right to privacy and protection from non-consensual intimate imagery. The harm is realized as the images have been widely disseminated online, causing reputational and emotional damage to Taylor Swift and potentially others. The misuse of the AI system to create such content directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm to a person through the use of an AI system.

Fake images of Taylor Swift, including nudes, spread en masse... White House press secretary: "Women are being targeted"

2024-01-30
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of generative AI to create fake nude images that have been massively spread, causing reputational and privacy harm to Taylor Swift. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the community. The harm is realized, not just potential, and the AI system's role is pivotal in producing the harmful content. Therefore, this is classified as an AI Incident.

Fake images of Swift spread on X, likely AI-generated; US government also concerned: Jiji.com

2024-01-27
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the fake images were generated by AI and spread widely on social media, causing harm to the individual (Taylor Swift) through harassment and reputational damage. The involvement of AI in creating harmful content that is actively disseminated fits the definition of an AI Incident, as it directly leads to harm to a person and communities. The mention of government concern and calls for regulation further supports the recognition of realized harm rather than just potential risk.

Fake images of Swift spread, viewed by millions; generative-AI "deepfakes" worry the US administration

2024-01-27
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event describes the creation and spread of AI-generated deepfake images that have caused harm by violating the rights and privacy of the individual depicted (Taylor Swift) and by spreading misleading content to a large audience. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to community through misinformation and non-consensual sexual content). The article also notes the response from authorities and social media companies, but the primary focus is on the harm caused by the AI-generated content itself.

AI-generated explicit images of Taylor Swift flood out all at once; Taylor fans rally to fight back

2024-01-26
GIZMODO JAPAN(ギズモード・ジャパン)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images created using an AI image generator, which were non-consensual and abusive, causing harm to Taylor Swift's rights and dignity. The widespread sharing and viewing of these images on a major platform led to realized harm, including violation of rights and abuse. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The fans' response and platform moderation are complementary but do not negate the incident classification.

X temporarily suspends searches for Taylor Swift as fake porn videos spread

2024-01-30
afpbb.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake pornographic videos (deepfakes) of Taylor Swift, which were widely disseminated, causing harm to her personal rights and dignity. The AI system's development and use directly led to this harm. The platform's response to limit search functionality indicates recognition of the harm caused. This fits the definition of an AI Incident as it involves violations of rights and harm to an individual caused by AI-generated content.

Sexually explicit AI-generated images of T. Swift spread rapidly on social media

2024-01-26
CNN.co.jp
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate synthetic, sexually explicit images that are harmful to the individual depicted, constituting harm to the person and communities. The AI-generated content was widely disseminated, causing reputational and psychological harm, which fits the definition of an AI Incident due to realized harm. The discussion of content moderation failures and ongoing investigations further supports the classification as an incident rather than a hazard or complementary information. Therefore, this is an AI Incident involving the use and misuse of AI-generated synthetic media causing harm.

Fake images of Swift spread; generative AI worries the US administration as well

2024-01-27
神戸新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake images that have been widely spread on social media, causing harm to the individual depicted and raising societal concerns. The harm is realized, not just potential, as millions have viewed the fake images, some of which are sexually explicit, constituting a violation of rights and harm to the community. The involvement of AI in generating the harmful content and its dissemination meets the criteria for an AI Incident under the OECD framework.

Fake images of Swift spread; generative AI worries the US administration as well | National news | Web東奥

2024-01-27
Web東奥
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake images that are misleading and harmful, constituting a violation of rights and harm to communities. The involvement of generative AI in producing these images and their widespread distribution on social media platforms directly caused harm. The US government's concern and call for accelerated legislation further underscore the recognized harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Fake images of Taylor Swift spread on social media... likely created with generative AI; "X" scrambles to respond | 日テレNEWS NNN

2024-01-30
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The article describes the creation and widespread dissemination of AI-generated fake images, which is a direct use of an AI system causing harm through misinformation and reputational damage. The harm is realized as the images have been viewed millions of times and have prompted platform intervention and public concern. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and individuals through misinformation and violation of rights.

Taylor Swift, a major victim of artificial intelligence

2024-01-28
پایگاه خبری تحلیلی فرتاک نیوز
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create harmful deepfake content that has been widely distributed, causing direct harm to Taylor Swift's rights and reputation. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to an individual. The mention of platform actions and political responses are complementary but do not change the primary classification as an AI Incident.

Forged sexual images of Taylor Swift / White House protest

2024-01-28
عصر ايران،سايت تحليلي خبري ايرانيان سراسر جهان www.asriran.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute fake sexual images (deepfakes) of Taylor Swift, which have been widely viewed and shared, causing harm to her personal rights and reputation. The White House's concern and call for legal action further confirm the recognition of harm. The AI system's use directly leads to violations of personal rights and harm to the community through misinformation and non-consensual image creation. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Viral fake images of Taylor Swift spark controversy

2024-01-27
خبرآنلاین
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images, which have been widely viewed and caused significant harm to the individual depicted (Taylor Swift) and raised societal concerns. This constitutes a violation of rights and harm to communities through misinformation and reputational damage. The harm is realized as the images have been viewed millions of times and caused public controversy. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

X flooded with indecent photos of the American female singer!

2024-01-28
خبرآنلاین
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated explicit images (deepfake-like content) of a real person without consent, which is a violation of personal rights and can cause reputational and psychological harm. The AI system's use directly led to the creation and dissemination of harmful content. The widespread sharing and millions of views indicate significant harm to the individual and the community. The platform's removal efforts are reactive and do not negate the harm already caused. Hence, this meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to communities.

Taylor Swift, the latest victim of AI fakery; the White House calls for effective action

2024-01-26
صدای آمریکا فارسی
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake images (deepfakes) that have caused harm to an individual by violating her privacy and subjecting her to online abuse. This constitutes a violation of rights and harm to the individual and community. Since the AI-generated content has already been disseminated and caused harm, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Fallout from the spread of fake and indecent images of Taylor Swift; "X" blocks searches for the singer's name

2024-01-28
صدای آمریکا فارسی
Why's our monitor labelling this an incident or hazard?
The incident involves AI systems used to generate fake images that have been widely disseminated, causing harm to the individual and the community by spreading false and offensive content. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (harm to communities and violation of rights). The platform's blocking of searches is a response to this harm. Therefore, this event is classified as an AI Incident.

White House voices concern over the spread of fake nude images of Taylor Swift

2024-01-27
رادیو فردا
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute deepfake images that have caused real harm by enabling harassment and abuse of a public figure and, more broadly, women and girls online. The harm is realized and ongoing, as millions viewed the images before removal, and the White House's response highlights the severity of the issue. The AI system's role in generating the fake images is pivotal to the harm, meeting the criteria for an AI Incident under the framework.

Leila Naghdipari sentenced to a fine, passport revocation, and an 18-month travel ban

2024-01-27
رادیو فردا
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake nude images (deepfakes) of Taylor Swift, which were widely shared and viewed millions of times before removal. This constitutes direct harm to the individual depicted and to communities affected by online harassment and misinformation. The AI system's role in creating and disseminating these images is central to the harm described. Therefore, this qualifies as an AI Incident due to realized harm (harassment, violation of privacy, misinformation) caused by AI-generated content.

X blocks searches for Taylor Swift following the spread of fake nude images of her

2024-01-29
رادیو فردا
Why's our monitor labelling this an incident or hazard?
The use of AI-based deepfake technology to create and disseminate non-consensual explicit images directly causes harm to Taylor Swift's privacy and dignity, constituting a violation of human rights. The platform's response to restrict search functions indicates recognition of the harm. The event involves the use and misuse of AI systems (deepfake generation) leading to realized harm, fitting the definition of an AI Incident under violations of human rights and harm to communities.

Skillful AI forgery of sexual images of Taylor Swift draws a response from the White House

2024-01-27
news.gooya.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute fake sexual images of a public figure, which is a direct harm to the individual's rights and dignity, as well as a broader harm to communities by spreading misinformation and potentially enabling harassment. The White House's concern and call for legal action underscore the recognized harm. The AI system's role in generating these images is central to the incident, fulfilling the criteria for an AI Incident under violations of rights and harm to communities.

White House voices grave concern after the spread of fabricated nude images of Taylor Swift

2024-01-29
بالاترین
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake nude images (deepfakes) of a real person, which have been widely shared and viewed, causing harm to the individual's rights and dignity. This constitutes a violation of personal rights and can be considered harm to communities due to the spread of misleading and harmful content. Since the harm is realized and the AI system's role is pivotal in creating the fake images, this qualifies as an AI Incident under the framework.

Skillful forgery of sexual and pornographic images of famous American singer Taylor Swift sparks her fans' outrage + photos

2024-01-29
بالاترین
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the sexual and pornographic images are AI-generated deepfakes, which are a product of AI systems. The harm is direct and realized, as these images have been widely circulated on social media platforms, causing reputational and emotional harm to the individual depicted without consent. This constitutes a violation of rights and is a clear example of harm caused by AI misuse. The involvement of social media companies and their responses further confirm the AI system's role in the incident. Hence, the event meets the criteria for an AI Incident.

Controversy as indecent images of Taylor Swift rack up views

2024-01-27
noandish.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that are sexually explicit and non-consensual, targeting Taylor Swift. This has caused harm to her rights and reputation, fulfilling the criteria for harm to individuals and communities. The AI system's development and use directly led to this harm. The article also discusses societal and governance responses, but the primary focus is on the harm caused by the AI-generated content. Therefore, this is classified as an AI Incident.

Controversy over fake Taylor Swift images reaches Congress

2024-01-28
خبرگزاری ایلنا
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that have been widely shared, causing harm to the individual depicted and potentially to the community by spreading misinformation. The AI system's use in generating and distributing these fake images has directly led to reputational harm and social disruption. Therefore, this qualifies as an AI Incident due to realized harm from the use of AI-generated content.

Deepfake controversy: the world under the influence of fake images and videos

2024-01-30
خبرگزاری میزان
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake technology) used to create highly realistic fake images and videos that have already caused harm to individuals (e.g., Taylor Swift) and potentially to political processes (fake calls with AI-generated voices). The harms include emotional, financial, and reputational damage, which fall under harm to persons and communities. The article also discusses legislative responses but the primary focus is on the realized harms caused by AI-generated deepfakes. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Controversy over indecent tampering with a female singer's photos

2024-01-28
آرمان ملی
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake images (deepfakes) of Taylor Swift, which were widely viewed and caused public scandal. The AI system's outputs directly led to harm by violating the individual's rights and causing reputational and emotional harm. The dissemination of such content on social media platforms also harms communities by spreading misinformation and harmful material. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI-generated content.

Pressure mounts on the US government to take legal action against obscene deepfake images - ITMen

2024-01-27
ITMen | آی تی من | پنجره‌ای نو رو به دنیای فناوری
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute deepfake pornographic images without consent, which directly causes harm to individuals' privacy, emotional well-being, and reputation. The article explicitly mentions the harm caused by these AI-generated images and the legal efforts to address this issue. Since the harm is realized and linked directly to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.

Skillful AI forgery of sexual images of Taylor Swift draws a response from the White House

2024-01-27
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and spread of AI-generated sexually explicit deepfake images, which directly harm the individual depicted by violating her rights and causing reputational damage. The AI system's use in generating these images is central to the harm. The harm is realized and ongoing, as evidenced by the widespread sharing and official responses. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities.

Controversy over the spread of fake AI-generated images of Taylor Swift - تکفارس

2024-01-27
تکفارس: اخبار و بررسی تكنولوژی، کامپیوتر، موبایل و اینترنت
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake images that are non-consensual and harmful, which constitutes a violation of rights and harm to communities. The widespread dissemination before removal indicates realized harm. The involvement of AI in generating these images and the resulting social and legal repercussions fit the definition of an AI Incident, as the AI system's use directly led to harm (violation of rights and harm to community).

The AI and Taylor Swift controversy; the White House steps in

2024-01-27
زومیت
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) that have been disseminated widely, causing harm through online harassment and abuse, which falls under harm to communities and violation of rights. The AI system's use directly led to this harm. The White House's response and call for legal action further confirm the significance of the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Taylor Swift trends on X after graphic AI photos, Panthers, Falcons find head coaches and more trending news

2024-01-26
Roanoke Times
Why's our monitor labelling this an incident or hazard?
The presence of AI-generated images is explicitly mentioned, indicating involvement of AI systems in content generation. However, the article does not describe any direct or indirect harm caused by these images, such as defamation, harassment, or violation of rights, nor does it suggest plausible future harm beyond public concern. The main focus is on the trending topic and public reaction rather than an AI incident or hazard. Therefore, this qualifies as Complementary Information, providing context about AI-generated content and societal response without reporting a new AI Incident or AI Hazard.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
Omaha.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicitly mentioned through the AI-generated graphic images. The harm is realized as the images caused public outcry and concern, indicating harm to communities and potentially to the individual's rights. Since the AI-generated content directly led to this harm, this qualifies as an AI Incident. Other parts of the article about sports, Tesla shares, and legal proceedings are unrelated to AI systems or harms.

Taylor Swift trends on X after graphic AI photos, Panthers, Falcons find head coaches and more trending news

2024-01-26
The Daily Progress
Why's our monitor labelling this an incident or hazard?
The presence of AI-generated images is explicitly mentioned, indicating AI system involvement. However, the article does not describe any direct or indirect harm resulting from these images, nor does it suggest a plausible future harm beyond social media reaction. The main focus is on the trending topic and public response, not on an AI Incident or Hazard. Hence, it fits the definition of Complementary Information, as it enhances understanding of AI's societal impact without reporting a new harm or credible risk of harm.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The presence of AI is clear in the generation of graphic images, which is an AI system use. However, the article focuses on the social media reaction and the trending hashtag rather than on a specific harm caused by the AI-generated images. There is no indication of direct or indirect harm such as injury, rights violations, or disruption caused by the AI system. The event does not describe a plausible future harm scenario either. Hence, it fits the definition of Complementary Information, providing supporting context about AI-generated content and public response without constituting a new AI Incident or AI Hazard.

Taylor Swift trends on X after graphic AI photos, Panthers, Falcons find head coaches and more trending news

2024-01-26
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated graphic images of Taylor Swift that caused public backlash and concern, indicating the use of an AI system (generative AI) to create harmful content. The harm is realized as it affects the reputation and emotional well-being of the individual and community, fitting the definition of harm to communities and violation of rights. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
nwi.com
Why's our monitor labelling this an incident or hazard?
The presence of AI is inferred from the mention of AI-generated images. However, the article does not describe any realized harm or incident caused by these images, only that they circulated and caused public concern. There is no indication of injury, rights violations, or other harms directly or indirectly caused by the AI system. Thus, this is not an AI Incident or AI Hazard. The main AI-related content is about the social media reaction and the existence of AI-generated images, which fits best as Complementary Information about AI's societal impact and public response rather than a new incident or hazard.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
missoulian.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the mention of AI-generated images. The harm is realized as these images caused distress and led to public outcry, indicating harm to the community and individual reputation. The AI system's use (generation of graphic images) directly led to this harm. Other news items in the article are unrelated to AI. Hence, the classification is AI Incident.

Taylor Swift trends on X after graphic AI photos, Panthers, Falcons find head coaches and more trending news

2024-01-26
missoulian.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that graphic AI-generated images of Taylor Swift circulated on social media, causing public backlash and concern. The AI system's use in generating harmful images directly led to reputational and emotional harm, fitting the definition of an AI Incident under violations of rights and harm to communities. Although other news items in the article are unrelated to AI, the primary AI-related event involves realized harm from AI-generated content.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
The Daily Progress
Why's our monitor labelling this an incident or hazard?
The presence of AI-generated images is explicitly mentioned, indicating AI system involvement. However, the article does not describe any direct or indirect harm caused by these images, such as defamation, harassment beyond the stalking incident (which is unrelated to AI), or other harms defined in the framework. The AI-generated images' circulation led to social media trending and public defense of Taylor Swift, which is a societal response rather than an incident or hazard. Other news items in the article are unrelated to AI. Hence, the main AI-related content fits the definition of Complementary Information.

Taylor Swift trends on X after graphic AI photos, Panthers, Falcons find head coaches and more trending news

2024-01-26
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The presence of AI is clear in the generation of graphic images, which is an AI system use. However, the article does not describe any direct or indirect harm resulting from these images beyond public concern and trending social media hashtags. There is no indication of injury, rights violations, or other harms materializing from the AI-generated images. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. Instead, it provides contextual information about AI-generated content circulating and the societal response, which fits the definition of Complementary Information.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
The presence of AI is explicit in the mention of AI-generated graphic images. The harm is realized as these images have caused public outcry and concern, indicating harm to the community and potentially to the individual's rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through the creation and dissemination of harmful content.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
Dothan Eagle
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicitly mentioned through the AI-generated graphic images. The harm is realized as the images caused public outcry and distress, leading to the #PROTECTTAYLORSWIFT trend. The AI system's use directly led to harm to the community and individual reputation, fitting the definition of an AI Incident. Other news items in the article do not involve AI or harm related to AI, so the classification is based on the AI-generated images event.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
Dothan Eagle
Why's our monitor labelling this an incident or hazard?
The AI-generated images are noted as graphic and have caused public reaction, but the article does not describe any direct or indirect harm resulting from these images, such as harassment caused by the AI system itself or legal violations directly linked to the AI generation. The mention of AI-generated content is limited to the presence of such images circulating, without evidence of harm or plausible future harm. Other news items do not involve AI systems or harms. Therefore, this is best classified as Complementary Information, providing context on AI-generated content circulating in social media without a specific AI Incident or Hazard.

Taylor Swift trends on X after graphic AI photos, Panthers find a head coach and more trending news

2024-01-26
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the mention of AI-generated graphic images. The harm is indirect but real: the images cause reputational and emotional harm to Taylor Swift. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to a person. The other news items (sports, Tesla shares, legal cases) do not involve AI systems or AI-related harms, so the classification rests on the AI-generated harmful images.

Taylor Swift trends on X after graphic AI photos, Panthers, Falcons find head coaches and more trending news

2024-01-26
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicitly mentioned through the AI-generated graphic images. The harm is direct, as the images have circulated and caused distress, leading to public outcry and trending protective hashtags. This fits the definition of an AI Incident because the AI system's use has directly led to harm to community and individual rights. Other parts of the article about arrests, sports news, and financial updates are unrelated to AI. Hence, the classification is AI Incident focused on the AI-generated images issue.

Taylor Swift: Twitter blocks searches for her over her fake nude photos

2024-01-28
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the fake nude photos are created via artificial intelligence, which constitutes an AI system generating harmful content. The harm here is reputational and privacy-related, falling under harm to individuals or communities. Since the AI-generated images are actively circulating and causing harm, this qualifies as an AI Incident. The platform's blocking of searches is a mitigation response but does not negate the incident itself.

Taylor Swift furious over the AI photos of her - They went viral on social media | in.gr

2024-01-26
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that are abusive and offensive, created and distributed without consent, causing harm to Taylor Swift and her community. The AI system's use directly led to reputational and emotional harm, fitting the definition of an AI Incident under violations of rights and harm to communities. The presence of AI is clear, the harm is realized, and the event is not merely a potential risk or complementary information but a concrete incident.

Twitter searches for Taylor Swift blocked | in.gr

2024-01-28
in.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are abusive and created without consent, constituting a violation of rights and harm to the individual (Taylor Swift). The viral spread of these images and the platform's blocking of searches indicate direct harm and disruption. The AI system's use in generating manipulated content is central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article describes realized harm, not just potential harm, so it is not an AI Hazard or Complementary Information.

The White House is concerned about the fake pornographic photos of Taylor Swift | LiFO

2024-01-27
LiFO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create fake pornographic images, which are non-consensual and harmful, constituting sexual image abuse. The harm is realized as the images have been widely circulated, causing reputational and personal harm to Taylor Swift. This fits the definition of an AI Incident as it involves violations of human rights and harm to an individual caused directly by the use of AI. The involvement of the White House and legislative discussions further confirm the significance and materialization of harm.

Taylor Swift: The fix X came up with to control the spread of AI pornographic material | LiFO

2024-01-28
LiFO
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic images without consent, which is a clear violation of rights and causes harm to the individual depicted (Taylor Swift). The dissemination of such content on a major social media platform directly leads to harm to the person and potentially to communities by spreading harmful misinformation and non-consensual explicit material. The platform's blocking of certain search terms is a response to this harm but does not change the fact that the AI-generated content caused harm. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use and misuse of AI systems.

AI-generated nude photos of Taylor Swift throw X into turmoil

2024-01-29
Gazzetta.gr - Sports News Portal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create manipulated images (deepfakes) that violate the privacy and rights of a person, constituting harm under the framework. The dissemination of such content on a major platform and the resulting legal and platform responses confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and violations of rights and reputational harm.

Taylor Swift: She will take legal action over AI-generated photos of her

2024-01-26
Liberal.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images that are non-consensual and harmful, constituting a violation of rights and harm to the individual and community. The AI system's role in creating these images is central to the harm caused. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of AI systems.

Taylor Swift furious over the AI photos of her

2024-01-27
Cretalive
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that are fake and abusive, created and shared without consent, causing harm to Taylor Swift's personal rights and reputation. The AI system's use in generating these images is central to the harm described. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to a person. The potential legal actions and public outcry further support the recognition of realized harm rather than just a potential risk or complementary information.

Taylor Swift: She will take legal action over AI-generated photos of her

2024-01-26
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that are offensive and non-consensual, which constitutes a violation of rights and harm to the individual and community. The AI system's use in creating these images directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as it involves harm to rights and communities caused by the use of an AI system.

Taylor Swift: Twitter blocks searches for her over her fake nude photos

2024-01-28
ΕΛΕΥΘΕΡΟΣ ΤΥΠΟΣ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake images (deepfakes) that have caused harm by spreading false and damaging content about a person, which is a violation of rights and harms the community by spreading misinformation. The platform's blocking of searches is a direct response to this harm. Since the AI system's use has directly led to harm (reputational and rights violations), this meets the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized, not just potential, and the AI system's role is pivotal in creating the fake images.

Outrage in the US over the fake Taylor Swift nudes | Protagon.gr

2024-01-29
Protagon.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create and distribute harmful fake images, directly leading to reputational and emotional harm to a person, which qualifies as harm to a person under the AI Incident definition. The AI system's use in generating and spreading these images is central to the harm. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is ongoing.

Lawsuits from Taylor Swift over circulating AI-generated nude photos of her

2024-01-26
Sport FM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (deepfake images) that has been used without consent to create harmful and offensive images of a person, leading to reputational and privacy harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and a violation of rights. The harm is realized, not just potential, as the images circulated and caused distress. Therefore, this is classified as an AI Incident.

Fake explicit photos of Taylor Swift are spreading: The singer is furious and considering suing - Слободен печат

2024-01-26
Слободен печат
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the images were created using AI and were distributed without consent, causing harm to Taylor Swift. This fits the definition of an AI Incident as the AI system's use directly led to a violation of rights and harm to the individual. The harm is realized, not just potential, and involves non-consensual explicit content, which is a serious violation of personal rights and dignity. Therefore, the event is classified as an AI Incident.

"Adult photos" of the world's most famous singer surface: "She is furious" (photo)

2024-01-27
puls24.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images (deepfakes) that are harmful and exploitative, causing violation of rights and reputational harm to Taylor Swift. The AI system's use in creating these images is central to the harm described, fulfilling the criteria for an AI Incident involving violation of rights and harm to the individual. Therefore, this event qualifies as an AI Incident.

Fake explicit photos of Taylor Swift leaked on an adult site

2024-01-27
tocka.com.mk
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake explicit images (deepfakes) of Taylor Swift, which have been shared widely and caused harm. The AI system's use directly led to violations of rights and harm to the community. The harm is realized, not just potential, as the images are already circulating and causing distress. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

"THIS DESERVES JAIL!" - "Pornographic photos" of Taylor Swift leaked: "She is furious. She is considering a lawsuit..."

2024-01-27
Во Центар
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake images that are non-consensual and exploitative, causing harm to Taylor Swift. The harm includes violation of rights and emotional distress, which fits the definition of an AI Incident under violations of human rights and harm to communities. The AI system's use in generating and spreading these images is central to the incident. The article also discusses potential legal actions and societal reactions, but the primary focus is on the harm caused by the AI-generated content, confirming it as an AI Incident.

Nude photos of Taylor Swift created with artificial intelligence; the singer is furious - Локално

2024-01-27
Локално
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating explicit deepfake images of a real person without consent, which constitutes a violation of rights and harm to the individual. The AI system's use directly leads to harm in the form of reputational damage, emotional distress, and exploitation. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

Taylor Swift can no longer be searched on X: "We did this for safety"

2024-01-28
Во Центар
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate explicit fake images of a person without consent, which constitutes a violation of rights and causes harm to the individual. The harm is realized as the images went viral and caused distress. The platform's response and the political reaction further confirm the seriousness of the incident. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content violating personal rights and safety.

Taylor Swift Has Disappeared from the X Network; Here Is What It Is About

2024-01-29
TV21.mk
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake sexualized images of Taylor Swift, which were then disseminated on social media, causing reputational and privacy harm. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The platform's response to remove the content confirms the harm has materialized. Therefore, this event is classified as an AI Incident.

Taylor Swift's Name Can No Longer Be Searched on X: "We Did This for Safety" - СТАНДАРД life

2024-01-29
СТАНДАРД life
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated pornographic images of a real person without consent, which is a clear violation of personal rights and causes reputational and emotional harm. The AI system's use directly led to this harm. The article describes actual harm occurring, not just potential harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities. The platform's response and calls for legislation are complementary but do not change the classification of the incident itself.

Fake Nude Photos of Taylor Swift Made with Artificial Intelligence Are Circulating Online - M Express

2024-01-29
M Express
Why's our monitor labelling this an incident or hazard?
The use of AI to generate and distribute non-consensual explicit images constitutes a violation of personal rights and privacy, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Since the harm is occurring through the dissemination of these images, this qualifies as an AI Incident.

"Pornographic Photos" of Taylor Swift Leaked: "She Is Furious. She Is Considering a Lawsuit..."

2024-01-30
puls24.mk
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake images (deepfakes) that have been disseminated without consent, causing harm to Taylor Swift's privacy and dignity. This is a clear violation of rights and a form of exploitation, which fits the definition of an AI Incident. The harm is realized, not just potential, as the images have circulated widely and caused distress. The involvement of AI in generating the images is explicit, and the harm is directly linked to the AI system's outputs. Hence, the classification as AI Incident is appropriate.

"ADULT PHOTOS" of the World's Most Famous Singer Leaked, in SCANDALOUS POSES and "DOING ALL SORTS OF THINGS": "She Is Livid"

2024-01-26
Blic
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate fake images (deepfakes) of a person without consent, which is a direct violation of her rights and a form of exploitation. The harm is realized, as the images were published and caused emotional distress. The AI system's role is pivotal, as it enabled the creation of these fake images. Hence, this is an AI Incident involving a violation of rights and harm to the individual.

'Pornographic Photos' of Taylor Swift Leaked: 'She Is Furious. She Is Considering a Lawsuit...'

2024-01-26
Jutarnji list
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images (deepfakes) that have caused harm to Taylor Swift by spreading non-consensual pornographic content. This is a clear case of harm to an individual through violation of rights and exploitation, directly linked to the use of an AI system to create the images. The harm is realized, not just potential, and legal responses are being considered, confirming the classification as an AI Incident.

Extreme Measures on X over 'Nude' Taylor Swift: 'We Did This for Safety; We Are Being Cautious'

2024-01-28
Jutarnji list
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create harmful, non-consensual pornographic images, which is a violation of rights and constitutes harm to the individual depicted. The harm is realized as the images circulated and caused distress, and the platform's response indicates recognition of the harm. The involvement of AI in generating the images and the resulting violation of rights and exploitation fits the definition of an AI Incident. The article also references societal and governance responses, but the primary focus is on the incident itself and its harms.

"EXPLICIT PHOTOS" of Taylor Swift Surface on an Adult Site: After the Stalking, YET ANOTHER SCANDAL Hits the Singer; Here Is the Background

2024-01-26
kurir.rs
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the images are generated by artificial intelligence (deepfakes) and that these images are circulating on adult websites without Taylor Swift's consent. This constitutes a violation of her rights and causes harm to her reputation and emotional well-being. The AI system's use in creating these images is central to the harm described, fulfilling the criteria for an AI Incident due to violation of rights and harm to community. Therefore, this event qualifies as an AI Incident.

THE WHITE HOUSE WEIGHS IN on the Taylor Swift Nude Photo Scandal: "The Situation Is ALARMING; It Is a Big Problem!"

2024-01-28
kurir.rs
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create harmful, non-consensual pornographic images of a person, which constitutes a violation of rights and harm to the individual and community. The harm is realized as the images circulated widely, causing distress and reputational damage. The platform's response and government statements further confirm the seriousness of the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

Nude Photos of Taylor Swift Surface Online, Her Fans Disgusted: This Is Revolting

2024-01-26
Klix.ba
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated fake nude images of Taylor Swift, created without her consent and shared on social media. This use of AI directly leads to harm by violating her rights and causing reputational and emotional damage. The AI system's role is pivotal as it was used to generate the harmful content. Therefore, this qualifies as an AI Incident under the category of violations of human rights and harm to communities.

The Singer Is Furious! 'Nude' Photos of Taylor Swift Have Appeared: 'This Is an Insult; She Is Also Considering a Lawsuit!'

2024-01-26
24sata
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake nude images of Taylor Swift being distributed on social media and pornographic sites without her consent. This use of AI has directly caused harm by violating her rights and causing emotional distress, which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, not just potential, and the AI system's role is pivotal in generating the images.

Nude Photos of Taylor Swift Are Circulating. The Singer Has Announced a Lawsuit, and a Source Reveals: 'The Images Are Fake...'

2024-01-26
Net.hr
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating fake images, which constitutes the use of AI. The harm caused is a violation of privacy and unauthorized use of the person's likeness, which falls under violations of human rights and personal rights. The harm is realized as the images have been spread and caused distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its dissemination.

Nude Photos of Taylor Swift Made with Artificial Intelligence: They Are Spreading Massively Across Social Networks, and the Singer Is Furious

2024-01-27
NOVA portal
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate explicit deepfake images, which have been widely spread causing harm to the individual depicted. This meets the criteria for an AI Incident because the AI's use directly led to violations of rights and harm to the person. The harm is realized, not just potential, and the AI system's role is pivotal in creating the harmful content. Therefore, this is classified as an AI Incident.

EXPLICIT PHOTOS of Taylor Swift LEAKED: The Singer Is FURIOUS and Considering a Lawsuit; Artificial Intelligence Is Involved

2024-01-26
Mondo Portal
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake explicit images (deepfakes) of Taylor Swift being circulated without her consent, which is a clear violation of her rights and an exploitation causing harm. The AI system's role in generating these images is pivotal to the harm caused. The harm is realized, not just potential, as the images have been published and spread on social media. This fits the definition of an AI Incident due to violation of rights and harm to the individual involved.

Deepfake of a Nude Taylor Swift Reaches the White House

2024-01-27
СЕГА
Why's our monitor labelling this an incident or hazard?
The article explicitly states that deepfake technology, which uses AI to manipulate images, was used to create and spread harmful and explicit images of Taylor Swift. This has caused harm to the individual and raised concerns about the lack of legal frameworks to address such violations. The harm is realized, not just potential, as the images were widely viewed and caused significant distress and reputational damage. The involvement of AI in generating the deepfakes and the resulting harm to rights and dignity qualifies this as an AI Incident under the OECD framework.

Deepfake Pornographic and Violent Images of Taylor Swift Disseminated - Звезди

2024-01-27
offnews.bg
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake content, which is a clear example of an AI system's use leading to harm. The harm includes violations of personal rights and reputational damage to Taylor Swift, as well as broader societal harm due to the spread of non-consensual pornographic and violent images. The article confirms the AI involvement (deepfake generation) and the realized harm (widespread dissemination causing harm). Hence, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

"X" Blocks Searches for Taylor Swift over the Deepfake Attack on Her with Pornographic and Violent Images - Звезди

2024-01-29
offnews.bg
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images that have directly led to harm by spreading offensive and non-consensual pornographic and violent content of a real person, Taylor Swift. This constitutes a violation of rights and harm to the individual and community. The platform's measures to block search and remove content are responses to an ongoing AI Incident. The article also mentions legislative debates on regulating AI-generated content, but the primary focus is on the realized harm caused by the AI-generated deepfakes, qualifying this as an AI Incident rather than a hazard or complementary information.

AI Undressed Taylor Swift: Is This the Last Straw Before Regulation Arrives?

2024-01-29
bTV Новините
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate harmful deepfake content that directly violates the rights and dignity of a person, constituting a breach of fundamental rights. The harm is realized and ongoing, as the images have been widely disseminated and caused distress. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to the individual and community. The article also mentions responses and potential legal actions, but the primary focus is on the incident itself and its harms, not just on complementary information or future risks.

"Enraged" Taylor Swift Considers Legal Action over Nude Images Created with Artificial Intelligence

2024-01-26
Fakti.bg
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-generated explicit deepfake images of Taylor Swift without her consent, which constitutes a violation of her rights and causes harm to her reputation and emotional well-being. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The deletion of the account and consideration of legal action further confirm the harm has occurred. Therefore, this is classified as an AI Incident.

The Dirtiest Trick Yet Played on Taylor Swift; Fans Are in Shock - Любопитно -- Новини Стандарт

2024-01-27
Стандарт - Новини, които си струва да споделим
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images, which are explicitly mentioned. The harm is realized, as the images are widely distributed and have caused offense and reputational damage, fitting the definition of an AI Incident under violations of human rights and harm to communities. The involvement of AI in creating these images is direct, and the harm is clearly articulated. Therefore, this event qualifies as an AI Incident.

Indecent Deepfake Images of Taylor Swift Are Spreading on Social Media

2024-01-27
LadyZone.bg
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images, which are explicitly mentioned. The harm caused includes violations of rights (privacy, dignity) and harm to the community through the spread of offensive and pornographic content. Since the AI-generated content is already being widely distributed and causing harm, this qualifies as an AI Incident rather than a hazard or complementary information.

Indecent Deepfake Images of Taylor Swift Are Spreading on Social Media

2024-01-27
Българска Телеграфна Агенция
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake images, which are a form of AI-generated content. The dissemination of these images has directly led to harm, including violations of personal rights and reputational damage to Taylor Swift, as well as broader community harm through the spread of offensive and pornographic material. The article describes realized harm, not just potential harm, and the AI system's role in generating the harmful content is pivotal. Hence, this is classified as an AI Incident.

Deepfake Images of Taylor Swift Appear on the Social Network "X"

2024-01-29
kafene.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images (deepfakes) of a public figure, Taylor Swift, which were widely disseminated on a social media platform, causing harm to the individual and the community. The harm includes violation of rights and emotional harm due to violent and disfigured images. The AI system's use in generating these images directly led to the harm. The article also references legislative responses, but the primary event is the realized harm from the AI-generated content. Hence, this is an AI Incident.

THEY SHOWED HER NO MERCY: Pornographic Photos of Taylor Swift Released

2024-01-25
Telegraph.bg
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate fake explicit images of Taylor Swift without her consent, which constitutes a violation of her rights and causes reputational and emotional harm. The harm is realized as the images circulated widely before removal, impacting the individual and community trust. This fits the definition of an AI Incident due to violation of rights and harm to communities through misinformation and non-consensual content dissemination.

AI-Generated Pornography: The Taylor Swift Deepfake

2024-01-30
Onet.pl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornography involving a real person, which constitutes a violation of rights and causes harm to the individual and communities. The AI system's use directly led to the creation and dissemination of harmful content. This fits the definition of an AI Incident because the development and use of AI systems directly caused harm (violation of privacy, reputational damage, psychological harm) and disruption on social media platforms. Although regulatory and moderation responses are discussed, the main event is the realized harm from AI misuse, not just potential or complementary information.

Shocking Images Bearing the Star's Face Hit the Web. "Abusive and Offensive"

2024-01-26
Interia.pl
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create fake images of Taylor Swift without her consent, depicting her in offensive and exploitative ways. This unauthorized use of her image constitutes a violation of her rights and is harmful to her reputation and dignity. The widespread dissemination of these images (millions of views) confirms that harm has occurred. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of rights and harm to the individual. The article also discusses potential legal actions and societal responses, but the primary event is the realized harm caused by AI-generated deepfake images.

X Blocks Searches for the Phrase "Taylor Swift". The Reason: Fake "Nude Photos" of the Star

2024-01-29
TVN24
Why's our monitor labelling this an incident or hazard?
The AI system was used to create false nude images of Taylor Swift, which were widely viewed and spread on the platform X. This misuse of AI directly led to reputational harm and violation of privacy rights, fitting the definition of an AI Incident under violations of human rights or harm to communities. The platform's blocking of search terms and content removal are responses to this realized harm. Therefore, this event qualifies as an AI Incident.

X Blocks Searches for "Taylor Swift" After a Flood of AI-Produced Fakes

2024-01-29
Press.pl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images of Taylor Swift that are sexually explicit and non-consensual, which have spread widely on the platform X. This constitutes a violation of personal rights and causes harm to the individual and community. The AI system's use in generating and disseminating these images is directly linked to the harm. The platform's blocking of searches and content removal are responses to this incident. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to community).

AI Generates Disgusting Images of Taylor Swift - Fans Launch a Protest

2024-01-26
Antyweb
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating photorealistic fake images (deepfake and image generation AI) that have directly led to harm in the form of reputational damage, violation of privacy, and community harm through disinformation. The AI's use in creating explicit fake images of a public figure without consent constitutes a violation of rights and causes significant harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Taylor Swift - AI Generated Vulgar Images. SAG-AFTRA Comments: "This Should Be Illegal" - naEKRANIE.pl

2024-01-27
naEKRANIE.pl
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate harmful, non-consensual pornographic images of a public figure, which is a clear violation of privacy and autonomy rights. This harm has already occurred as millions viewed the images, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The event is not merely a potential risk but a realized harm caused by AI misuse, thus classifying it as an AI Incident.

X Is Digging Itself Deeper. Here Is How It "Solves" a Burning Problem

2024-01-29
Antyweb
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images (deepfakes) that are sexually explicit and harmful, which have spread widely on the platform X. The AI system's use in generating these images directly leads to harm to the community and the individual depicted, constituting a violation of rights. The platform's failure to effectively moderate or remove such content exacerbates the harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (harm to community and violation of rights).

Taylor Swift's Fake Nude Images Were Clicked So Many Times That a Law Against Deepfakes May Follow

2024-01-27
Index.hu
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as deepfake images are generated using AI technology that manipulates images to create realistic but fake content. The harm is direct and realized, including emotional, financial, and reputational damage to individuals targeted by these images, which constitutes harm to persons and communities. The article also discusses ongoing legal and political responses, but the primary focus is on the harm caused by the AI-generated deepfakes. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in creating and spreading manipulated images.

Taylor Swift Blocked: Her Name Can No Longer Be Searched on X

2024-01-30
Index.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images causing harm by spreading non-consensual explicit content of Taylor Swift, which is a violation of rights and harms the individual and community. The platform's blocking of searches and removal of such content is a response to an ongoing harm caused by AI misuse. The involvement of AI in generating the harmful content and the realized harm to the individual and community meet the criteria for an AI Incident. The event is not merely a potential risk or complementary information but a realized harm due to AI misuse.

After Taylor Swift's Fake Nude Images, a Law Against Deepfakes May Finally Come

2024-01-27
Noizz.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images that have been widely disseminated and viewed, causing significant harm to the individual depicted and raising concerns about emotional, financial, and reputational damage. The harm is realized, not just potential, and the AI system's role in generating and spreading manipulated content is central. The legislative efforts to criminalize such acts further underscore the recognition of harm. Hence, this event meets the criteria for an AI Incident involving violations of rights and harm to individuals and communities.

Deepfake Taylor Swift Images Prompt Calls in America for Laws Against Falsified Images

2024-01-27
telex
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images, which have been widely viewed and caused emotional and reputational harm to the individual depicted. The article explicitly mentions the harm caused and the lack of current federal laws addressing this issue, with lawmakers pushing for new regulations. Since the harm is realized and directly linked to the use of AI-generated deepfake content, this constitutes an AI Incident under the framework, specifically as a violation of rights and harm to communities.

Millions of People Saw Taylor Swift's "Deepfake" Nude Images; Lawmakers Want to Regulate AI

2024-01-28
Nap Híre
Why's our monitor labelling this an incident or hazard?
The use of AI to create deepfake images that have been viewed millions of times constitutes a direct harm to the individual depicted (privacy violation and reputational harm) and to communities through misinformation and deception. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident. The mention of legal regulation is a complementary aspect but does not change the primary classification.

Taylor Swift Blocked over Porn

2024-01-29
ACNEWS
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are explicitly described as using AI to manipulate images of Taylor Swift without consent. The harm is realized as the images spread widely, causing distress and reputational damage, which fits the definition of an AI Incident under violations of human rights and harm to communities. The platform's blocking of searches and removal of content is a response to this harm. Therefore, this is classified as an AI Incident due to the direct link between AI misuse and harm.

Microsoft Has Closed the Loophole Used to Generate Adult Images of Taylor Swift

2024-01-29
telex
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Microsoft's Designer) used to generate deepfake images, which are AI-generated synthetic content. The generation and sharing of non-consensual sexual deepfake images constitute a violation of individual rights and cause harm to the person depicted, fitting the definition of an AI Incident under violations of human rights and harm to communities. The article reports that such images were created and widely disseminated, confirming realized harm. Microsoft's action to close the loophole is a response but does not negate the incident itself. Therefore, this is classified as an AI Incident.

Taylor Swift, Caught Up in the Porn Scandal, May Be Left Without Legal Recourse

2024-01-30
marieclaire.hu
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake pornography, which directly harms the individual depicted by violating rights and causing reputational damage. The AI system's use in generating and disseminating the content is central to the harm. Although the article discusses the lack of legal frameworks, the primary event is the occurrence of harm caused by AI-generated content. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual).

Fake Nude Photos of Taylor Swift Are Circulating Online | 24ur.com

2024-01-27
24ur.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake pornographic images (deepfakes) of a celebrity, which were widely disseminated causing harm to the individual's rights and dignity. The harm is realized as the images were viewed millions of times before removal, constituting a violation of rights and harm to the community. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content. The mention of calls for stricter legislation and the broader problem context supports the assessment but does not change the primary classification.

A Nightmare for Taylor Swift: Nude Photos Set the Internet "Ablaze"

2024-01-27
SiOL
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create pornographic deepfake images of a public figure, Taylor Swift, which have been widely circulated online causing harm. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The involvement of AI in generating these images and the resulting harm is direct and materialized, not merely potential. Therefore, this event qualifies as an AI Incident.

Forged Obscene Photos of Taylor Swift Spark Debate on the Urgent Need for AI Regulation

2024-01-28
MMC RTV Slovenija
Why's our monitor labelling this an incident or hazard?
The article describes the creation and spread of AI-generated deepfake images that caused harm by misleading millions of users and damaging the reputation and emotional state of the person depicted. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The involvement of AI in generating these images is explicit, and the harm is realized, not just potential. The discussion about regulatory needs and legal proposals is complementary but does not overshadow the primary incident of harm caused by the AI system's outputs.

Fake Nude Photos of This Star Are Circulating; Even the White House Has Spoken Up

2024-01-27
Slovenske novice
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create pornographic deepfake images of Taylor Swift, which have been widely circulated online, causing harm to the individual and raising concerns about harassment and privacy violations. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the community). The involvement of political figures and calls for regulation further confirm the significance of the harm. The event is not merely a potential risk or complementary information but a realized harm caused by AI misuse.

AI-Generated Nude Images of Taylor Swift Stir Up the White House

2024-01-27
Dnevnik
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to create deepfake pornographic images of a public figure, Taylor Swift, which were widely disseminated and caused harm. The AI system's role in generating these images and the resulting violation of personal rights and harassment meets the criteria for an AI Incident. The harm is realized, not just potential, as the images circulated widely and prompted official concern and calls for legislative action. Hence, this is not merely a hazard or complementary information but a clear AI Incident.

Fake Nude Photos of Taylor Swift Are Circulating Online

2024-01-27
STA d.o.o.
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create fake pornographic images of Taylor Swift, which is a clear case of AI-generated content causing harm to an individual's rights and dignity. The harm is realized as the images are circulating and causing outrage, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Fake Nude Photos of Taylor Swift Are Circulating Online

2024-01-27
STA d.o.o.
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to generate fake pornographic images of Taylor Swift, which is a clear case of AI misuse causing harm to an individual's rights and dignity. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to communities (fans and public).

Nude Photos of Taylor Swift, Created with Artificial Intelligence, Are Circulating Online

2024-01-27
N1
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic images without consent, which is a direct violation of rights and causes harm to the individual depicted and potentially others. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities). The article reports on the actual circulation and impact of these images, not just potential future harm or general commentary, so it is not merely a hazard or complementary information.

Fake nude images of Taylor Swift: now the White House responds

2024-01-28
stuttgarter-nachrichten.de
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (text-to-image generative AI) to create fake nude images without consent, which constitutes a violation of personal rights and privacy, a form of harm to individuals. The dissemination of these images on social media platforms further amplifies the harm. The involvement of AI in generating the images and the resulting harm to the individual meets the criteria for an AI Incident, as the AI system's use has directly led to a violation of rights and harm to the person.

Fake nude images of Taylor Swift: now the White House responds

2024-01-27
GMX
Why's our monitor labelling this an incident or hazard?
The article describes an AI system's use to generate and distribute non-consensual intimate images, which is a clear violation of rights and causes harm to the individual depicted. The harm is realized, not just potential, as the images were publicly accessible and widely viewed. The AI system's role is pivotal as it enabled the creation of these fake images. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

AI-generated fake images: plain words from the White House on the Swift nude photos

2024-01-27
N-tv
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated fake nude images of Taylor Swift, which is a clear case of non-consensual intimate image creation and distribution. This constitutes a violation of fundamental rights, specifically privacy and dignity, and is a recognized form of harm under the AI Incident definition. The AI system's use directly caused this harm. The political and social responses further confirm the seriousness of the incident. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

AI nude images cause a furore: here is what the White House says

2024-01-27
BRIGITTE
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated non-consensual intimate images, which directly harms the individual involved by violating privacy and personal rights. The AI system's use in generating these images is central to the harm. The widespread public exposure and political responses further confirm the materialization of harm. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm to a person and violation of rights.

Fake explicit images of Taylor Swift: White House is "alarmed"

2024-01-27
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate non-consensual explicit images, which have been widely disseminated, causing harm to the individual depicted and potentially others. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the community through the spread of harmful content. The article details actual harm occurring, not just potential harm, and discusses societal and legal responses to this harm, confirming the classification as an AI Incident.

Taylor Swift: the White House responds to AI nude photos

2024-01-27
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of AI-generated non-consensual intimate images, which constitutes a violation of personal rights and privacy, a form of harm to individuals. The AI system (a text-to-image generative AI tool) was used to create these images, and their publication caused harm to the individual and communities (fans, public). This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to community). The article also discusses responses and potential legislative measures, but the primary focus is on the incident itself.

AI-created fake images: clear words from the White House on the Swift nude photos

2024-01-27
بوابتك العربية
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate non-consensual intimate images, which is a clear violation of rights and causes harm to the individual depicted and the broader community. The harm has already occurred as the images were widely disseminated and viewed, causing distress and reputational damage. The political and legal responses further confirm the recognition of harm. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.

Taylor Swift's Explicit AI-Generated Images Raise Concerns Over Technological Misuse | Fans Outraged

2024-01-26
Jagran English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that are derogatory and explicit, confirming the involvement of an AI system in content generation. The harm is realized: the images have gone viral, causing emotional and reputational harm to Taylor Swift. This constitutes a violation of personal rights and harm to communities (fans and public discourse). Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating harmful content.

NSFW Taylor Swift AI Pics Are Going Viral - And Fans Are Rightfully PISSED!! - Perez Hilton

2024-01-25
Perez Hilton
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images of a real person without consent, which is a direct violation of privacy and potentially other rights. The harm is realized as the images are spreading and causing distress, and the AI system's role is pivotal in creating these images. This fits the definition of an AI Incident under violations of human rights and harm to communities. The article also mentions ongoing moderation efforts and legal considerations, but the primary focus is on the harm caused by the AI-generated content.

'Protect Taylor Swift' is trending on social media for harrowing reason

2024-01-25
WalesOnline
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating harmful content (graphic fake images) that directly leads to harm in the form of violation of personal rights and harm to the community (fans and the individual targeted). The AI system's use here is malicious and results in realized harm, meeting the criteria for an AI Incident. The harm is not hypothetical or potential but ongoing and materialized, as evidenced by the outrage and calls for legal action. Therefore, this is classified as an AI Incident.

Taylor Swift Fans Are In An Uproar As Someone Created "Disgusting" NSFW A.I. Photos Of The Pop Star That Are Going Viral Online

2024-01-25
Total Pro Sports
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are non-consensual and sexually explicit, which directly harms the individual depicted by violating her rights and dignity. The widespread circulation and public backlash confirm that harm has materialized. The AI system's use in creating these images is central to the incident, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The mention of legal frameworks and executive orders further supports the recognition of this as a significant harm caused by AI misuse.

'Protect Taylor Swift' Trending as Explicit AI Photos Leave Singer 'Furious'

2024-01-26
AceShowbiz
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated explicit deepfake images that have been posted and spread online, causing harm to Taylor Swift's personal rights and emotional well-being. The AI system's use in creating non-consensual sexual images constitutes a violation of rights and is abusive and exploitative. The harm is realized as the images are circulating and causing distress, meeting the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The involvement of AI in generating the images is explicit and central to the harm described.

Taylor Swift hit by 'disgusting' AI deep fakes

2024-01-26
Music News
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated deepfake images that are nonconsensual and sexually explicit, which constitutes a violation of rights and defamation, a form of harm to the individual. The AI system's use in generating these images is central to the harm occurring. The viral spread on a platform owned by Elon Musk further amplifies the impact. The harm is direct and realized, meeting the criteria for an AI Incident under violations of human rights and harm to communities.

Fake nude images: now the White House responds

2024-01-26
Ekstra Bladet
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated fake nude images, which is a direct violation of privacy and can cause significant harm to the individual depicted. The AI system's role in generating these images is explicit, and the harm is realized as the images are being spread. The involvement of the White House and the social media platform's enforcement actions further confirm the recognition of harm. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Fake porn images in circulation: now it has been made completely impossible

2024-01-28
Ekstra Bladet
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated fake nude images, which is a direct violation of privacy and potentially other rights. The AI system's use in generating these images is central to the harm caused. The platform's response to block searches and remove content confirms the harm is realized and ongoing. Hence, this is an AI Incident due to the direct harm caused by AI-generated manipulated content violating rights.

Fake nude photos of Taylor Swift spread on social media

2024-01-26
Politiken
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and disseminated using AI systems. The harm is realized as the images are non-consensual and sexually explicit, violating the individual's rights and causing reputational and emotional harm. The platform's delayed removal and the widespread sharing further exacerbate the harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs and their distribution.

After the fake nude image scandal: you can no longer search for Taylor Swift

2024-01-29
Politiken
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are created using AI systems capable of generating realistic fake content. The harm is realized as these images circulated widely without consent, violating privacy and potentially causing emotional and reputational harm to Taylor Swift. The platform's response to block searches indicates recognition of the harm caused. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to communities. The discussion of legislative responses further supports the significance of the harm caused.

Taylor Swift targeted by spread of fake porn images

2024-01-26
Jyllands-Posten
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake pornographic images, which were widely shared and caused harm to the individual and potentially to communities. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm. The harm is realized, not just potential, as the images were viewed millions of times and caused distress. Therefore, this event qualifies as an AI Incident.

Taylor Swift targeted by spread of fake porn images

2024-01-26
Kristeligt Dagblad
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images, which are created using generative AI systems. The spread of these images on social media platforms has caused harm to Taylor Swift and potentially to the broader community by disseminating toxic and harmful content. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the individual and community. The harm is realized, not just potential, as the images were widely viewed and circulated.

Taylor Swift targeted by spread of fake porn images

2024-01-26
www.tidende.dk
Why's our monitor labelling this an incident or hazard?
AI-generated fake images are explicitly mentioned, indicating the involvement of an AI system in creating harmful content. The images caused reputational and emotional harm to Taylor Swift and potentially to the wider community by spreading false and explicit content. The harm has already occurred as the images were widely viewed and circulated. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

AI deepfakes of Taylor Swift spread on X. Here's what to know.

2024-01-26
Washington Post
Why's our monitor labelling this an incident or hazard?
AI deepfake technology is explicitly involved in generating non-consensual nude images, which is a violation of personal rights and can cause significant harm to the individual depicted and the broader community. The content has been posted and is being actively removed, indicating that harm is occurring. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use in creating and spreading harmful content.

Taylor Swift content now unsearchable on X after pornographic deepfakes go viral - Yahoo Sports

2024-01-28
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are pornographic and non-consensual, causing harm to Taylor Swift's rights and reputation. The viral spread of these images on X led to the platform restricting search functionality to mitigate further harm. The AI system's use in creating and distributing harmful content directly led to realized harm, fitting the definition of an AI Incident due to violations of rights and harm to communities. The platform's actions and public concern further confirm the incident's significance.

X confirms it blocked Taylor Swift searches to 'prioritize safety'

2024-01-28
engadget
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create nonconsensual pornographic deepfake images, which have been widely disseminated on the platform X, causing harm to the individual (Taylor Swift) and violating her rights. The platform's response to block searches and remove content confirms the harm has occurred and is ongoing. The AI system's role in generating the deepfakes is central to the incident, fulfilling the criteria for an AI Incident due to violations of rights and harm to communities.

Taylor Swift content now unsearchable on X after pornographic deepfakes go viral

2024-01-27
Daily News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are non-consensual and pornographic, which directly harms the individual (Taylor Swift) and the broader community by spreading harmful content. The AI system's use in creating and disseminating these images led to violations of rights and reputational harm, fitting the definition of an AI Incident. The platform's actions to remove content and restrict searchability are responses to the incident, not the primary event. The harm is realized, not just potential, so it is not an AI Hazard or Complementary Information.

X Takes Action Against Taylor Swift Deepfake Scandal, Critics Decry Delay In Response

2024-01-29
BW Businessworld
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created using AI systems. The harm includes violations of personal rights (nonconsensual nudity) and harm to the community (spread of explicit content causing distress). The platform's delayed action contributed to the harm by allowing the content to remain accessible for an extended period. The AI system's misuse directly led to these harms, fitting the definition of an AI Incident under violations of rights and harm to communities.

Photo: The glamorous Jennifer Lopez

2024-01-28
kanal5.com.mk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake images (deepfakes) of a person, which have been widely disseminated online, causing harm to the individual's rights and potentially to communities by spreading misinformation and explicit content. The harm is realized as the images have been viewed millions of times and have led to public outcry and platform suspensions. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated content violating rights and spreading disinformation.

The White House reacted to the fake explicit photos of Taylor Swift

2024-01-28
vecer.mk
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake images, indicating the involvement of AI systems in creating harmful content. The spread of such images constitutes harm to communities and a violation of rights, fulfilling the criteria for an AI Incident. The White House's concern and the suspension of accounts spreading these images further confirm the realized harm caused by the AI system's misuse.

Fake sex photos of Taylor Swift flooded the internet

2024-01-26
IDIVIDI
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated fake explicit images of Taylor Swift are being widely shared on social media, causing harm to the individual and distress to the community. The AI system's role in generating these images is direct and pivotal to the harm caused, including violation of rights and reputational harm. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm caused by the use of an AI system.

"X" blocks searches for Taylor Swift after deepfake photos of the singer were posted

2024-01-29
meta.mk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are a product of AI systems creating realistic but fake content. The harm is realized as the images are non-consensual explicit content, violating privacy and potentially causing psychological harm to the individual and distress to communities. The platform's blocking of searches and removal of content is a response to this harm. The involvement of AI in generating harmful content that is actively spreading fits the definition of an AI Incident, as it directly leads to violations of rights and harm to communities. The event is not merely a potential risk but a realized harm, thus not an AI Hazard or Complementary Information.

"X" blocks searches for Taylor Swift after deepfake photos of the singer were posted

2024-01-29
Truthmeter
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images that are explicit and non-consensual, directly causing harm to the individual (Taylor Swift) and potentially to the community by spreading harmful misinformation and violating rights. The platform's response and governmental attention further confirm the recognition of harm. The AI system's use in creating and disseminating these images is central to the incident, fulfilling the criteria for an AI Incident due to violation of rights and harm to communities.

Deepfake nude photos of pop superstar Taylor Swift go viral online as fans protest

2024-01-26
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI deepfake technology was used to create fake nude images of Taylor Swift, which have been widely viewed and spread on social media. This constitutes a violation of rights and harm to the individual involved. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and involves violations of fundamental rights and reputational damage. Therefore, this event qualifies as an AI Incident.

Deepfake nude photos of Taylor Swift go viral on social media; lawmakers urge legislation to make them a criminal offence | International

2024-01-27
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake images that have been widely disseminated, causing direct harm to the individual depicted (Taylor Swift) in terms of emotional distress, reputational damage, and violation of rights. The AI system's use in creating non-consensual explicit content constitutes a violation of fundamental rights and harms communities by spreading harmful misinformation and abuse. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI indecent photos of a top female star circulate online: Taylor Swift is furious! Serious consequences if the photos are not deleted

2024-01-27
163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images (deepfakes) of Taylor Swift being spread online, which directly harms her by infringing on her portrait rights and damaging her reputation. The AI system's use in generating these images is central to the harm. The harm is realized and ongoing, as the images are being disseminated and causing damage, meeting the criteria for an AI Incident under violations of rights and harm to communities or individuals. The legal actions and account deletions are responses but do not negate the incident classification.

Satya Nadella: AI-based fake abusive content is "alarming and terrible"

2024-01-27
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image generation models like Microsoft Designer and Stable Diffusion) being used to create and spread non-consensual explicit images, which is a clear violation of personal rights and causes harm to individuals and communities. The harm is realized and ongoing, not just potential. Microsoft's AI system's vulnerabilities contribute indirectly to this harm, and the article discusses the societal impact and the need for regulatory and technical responses. Therefore, this is classified as an AI Incident due to direct harm caused by AI-generated content violating rights and causing reputational and emotional damage.

X appears to have blocked searches for Taylor Swift keywords, but the block is easy to bypass

2024-01-27
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which constitute illegal and harmful content. The platform's partial blocking of related search terms, and the ease with which those blocks can be bypassed, indicate an ongoing problem with the dissemination of harmful AI-generated content. The mention of potential legal action and calls for better AI safeguards highlights the harm caused by this misuse and the need for mitigation. Since the AI-generated content is already being disseminated and causing harm, this qualifies as an AI Incident due to the violation of rights and harm to communities through the spread of harmful AI-generated content.

Opinion: The chilling lesson of the Taylor Swift deepfakes | CNN

2024-02-01
CNN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake content that sexually exploits individuals without their consent, causing harm to their dignity, privacy, and potentially mental health. This constitutes a violation of human rights and harm to communities. The AI system's use in creating and disseminating these images is central to the harm described, fulfilling the criteria for an AI Incident. The article also discusses ongoing harm (e.g., millions of views on exploitative sites) and the difficulty in mitigating it, confirming that harm is realized rather than merely potential.

Inside the Taylor Swift deepfake scandal: 'It's men telling a powerful woman to get back in her box'

2024-01-31
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate deepfake pornographic content, which has directly led to significant harm to individuals, including psychological trauma, reputational damage, and violation of privacy and rights. The harm is realized and ongoing, with social media platforms failing to adequately prevent or remove such content. The AI system's use in creating and spreading nonconsensual deepfake pornography is central to the harm described, fulfilling the criteria for an AI Incident under the OECD framework.

Taylor Swift is the latest high-profile deepfake victim. Here's what lawmakers are doing to protect them.

2024-02-01
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake content causing harm to individuals (nonconsensual pornographic images) and communities (political misinformation). The harms are realized, not just potential, and the AI system's role is pivotal in generating these deepfakes. The discussion of laws and detection technologies serves as complementary information but does not negate the fact that AI-generated deepfakes are actively causing harm. Therefore, the event is best classified as an AI Incident due to the direct and ongoing harm caused by AI deepfake systems.

AI deepfakes and their deeper impacts

2024-02-03
Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems to create deepfake images and videos that have caused real harm to individuals, including celebrities and potentially ordinary people. The harms include misinformation, reputational damage, emotional distress, and identity theft, all of which fall under the defined categories of AI Incident harms (harm to communities, violation of rights, and injury to persons). The legislative responses further confirm recognition of these harms. Since the harms are occurring and the AI systems are central to their creation, this is an AI Incident rather than a hazard or complementary information.

AI Brings Deepfake Pornography to the Masses, as Canadian Laws Play Catch-Up

2024-02-03
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used to create deepfake pornography, which is a direct violation of individuals' rights and causes harm to victims. The harm is realized, not hypothetical, as evidenced by cases involving students and celebrities. The involvement of AI in generating these images is central to the harm described. The article also covers legal and societal responses but the primary focus is on the harm caused by AI-generated non-consensual explicit images. Hence, this is an AI Incident due to direct harm to individuals and communities through violations of rights and privacy.

It's not just Taylor Swift 'nudes': Millions of teen girls are...

2024-02-02
New York Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create harmful content that has directly led to violations of rights and harm to individuals (teenage girls) and communities (school environment). The malicious use of AI to generate and distribute fake explicit images constitutes an AI Incident because the harm is realized and ongoing. The article also discusses responses and legislative efforts, but the primary focus is on the incident of harm caused by AI misuse.

AI makes deepfake pornography more accessible, as Canadian laws play catch-up | CBC News

2024-02-03
CBC News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate fake explicit images (deepfakes) of underage girls and women, which are then shared online without consent. This constitutes a violation of privacy and potentially other human rights, fulfilling the criteria for harm under the AI Incident definition (violations of human rights and harm to communities). The harm is realized, as evidenced by cases reported in schools and public figures targeted. The AI system's use is central to the harm, as the technology enables the creation and dissemination of these images. The article also discusses legal responses, but the primary focus is on the harm caused by AI-generated deepfake pornography, not just the policy response. Hence, this is an AI Incident.

Taylor Swift and deepfake porn: What's the law?

2024-02-02
The Hindu
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create non-consensual deepfake pornographic images, which constitute a violation of individual rights and cause harm to communities by spreading sexual abuse content. The harm is realized and ongoing, as the images have been widely viewed and shared, directly impacting victims. The article focuses on the harms caused by AI-generated deepfake pornography and the legal and societal responses to these harms. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to violations of rights and harm to individuals and communities.

What to do if someone makes a deepfake of you

2024-01-31
Mashable
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake images and videos without consent, causing direct harm to individuals through reputational damage, harassment, and emotional trauma. The harms described fall under violations of human rights and harm to communities. The AI systems' use in generating nonconsensual explicit content is central to the harm, fulfilling the criteria for an AI Incident. The article also discusses responses and resources but the primary focus is on the harm caused by AI misuse, not just complementary information or potential hazards.

Taylor Swift deepfakes: New technologies have long been weaponized against women. The solution involves everyone

2024-02-01
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images, which have directly led to significant harms including violations of privacy, mental health impacts, and reputational damage to individuals, particularly women. The deepfakes are AI-generated synthetic media causing real harm, meeting the criteria for an AI Incident. The article details actual harm occurring due to the use of AI-generated deepfake pornography, not just potential harm or general commentary, so it is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

What to do if someone makes a deepfake of you

2024-01-31
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate nonconsensual deepfake images and videos, which cause direct harm to individuals by violating their rights, damaging reputations, and enabling harassment. The article details realized harm from the use of AI in creating and disseminating these images, fitting the definition of an AI Incident. The discussion of legal and platform responses serves as complementary information but does not overshadow the primary focus on the harm caused by AI-generated deepfakes.
Thumbnail Image

Taylor Swift deepfakes: a legal case from the singer could help other victims of AI pornography

2024-01-31
The Conversation
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate deepfake pornographic images of Taylor Swift without her consent, which were widely distributed and caused significant harm including violation of privacy and emotional distress. The harm is realized and directly linked to the AI-generated content. The discussion of legal responses and legislative gaps further supports the classification as an AI Incident rather than a hazard or complementary information. The AI system's role in creating and distributing harmful content is pivotal to the incident.
Thumbnail Image

It Doesn't End With Taylor Swift: How to Protect Against AI Deepfakes and Sexual Harassment

2024-02-02
POPSUGAR UK
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute nonconsensual deepfake pornography, which constitutes a violation of human rights and causes psychological and social harm to victims. The article details actual harm occurring due to AI-generated content, including widespread dissemination on social media platforms and the resulting trauma and objectification of victims. The involvement of AI in generating realistic fake images and the direct impact on victims' rights and well-being qualifies this as an AI Incident under the OECD framework.
Thumbnail Image

Commentary: Taylor Swift deepfake explicit images are a warning anyone can be targeted

2024-02-03
CNA
Why's our monitor labelling this an incident or hazard?
The article centers on the general issue of non-consensual deepfake pornography and its harms, which are well-documented and ongoing, but it does not describe a specific AI Incident or AI Hazard event. It highlights existing harms and calls for legal and technological measures, which aligns with providing complementary information about AI-related harms and responses. Therefore, it fits best as Complementary Information rather than an Incident or Hazard.
Thumbnail Image

What to know about how lawmakers are addressing deepfakes like the ones that victimized Taylor Swift

2024-02-01
Boston
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the societal and governance responses to AI harms caused by deepfakes, including legislative actions, legal proposals, and expert opinions on mitigation strategies. While it describes harms caused by AI-generated deepfakes (which qualify as AI Incidents), the article itself is not reporting a new specific AI Incident or AI Hazard event but rather provides an overview of ongoing responses and policy developments. Therefore, it fits the definition of Complementary Information as it enhances understanding of AI harms and responses without describing a new primary incident or hazard.
Thumbnail Image

Opinion: Taylor Swift may speak now about sexual deepfake images. But that's not enough

2024-02-01
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically generative AI used to create sexual deepfake images without consent. The harms described include violations of personal rights, sexual exploitation, and harm to individuals' dignity and privacy, which fall under violations of human rights and harm to communities. These harms are occurring and ongoing, as evidenced by the widespread creation and distribution of such images and the failure of platforms to effectively moderate them. Therefore, this event qualifies as an AI Incident because the development and use of generative AI systems have directly led to significant harm through non-consensual sexual deepfake images.
Thumbnail Image

AI brings deepfake pornography to the masses, as Canadian laws play catch-up

2024-02-03
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to generate fake explicit images (deepfakes) that are distributed without consent, causing harm to individuals' privacy, dignity, and potentially violating laws protecting minors and adults alike. The harms include violations of human rights and harm to communities through the spread of non-consensual intimate images. The involvement of AI in creating these images and the resulting harms are direct and ongoing, qualifying this as an AI Incident. The article also discusses legal and policy responses, but the primary focus is on the realized harms caused by AI-generated deepfake pornography.
Thumbnail Image

Taylor Swift is not the first victim of AI: Decoding the deepfake dilemma

2024-02-02
VentureBeat
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deep learning-based generative AI tools) being used to create harmful deepfake content that has already caused reputational damage and enabled scams, which are direct harms to individuals and communities. The harms are realized, not just potential, and the AI system's use is central to these harms. Therefore, this qualifies as an AI Incident under the framework, as the development and use of AI systems have directly led to harm (reputational harm, fraud, misinformation).
Thumbnail Image

Taylor Swift deepfake porn deluge a 'wake-up call' for lawmakers

2024-02-01
The Next Web
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating deepfake pornographic content without consent, which constitutes a violation of human rights and causes harm to individuals and communities. The harm is realized and ongoing, as evidenced by the viral spread of such content and the legal and social backlash. Therefore, this qualifies as an AI Incident. The article also includes discussion of legislative and technological responses, but the primary focus is on the harm caused by the AI-generated content and its distribution.
Thumbnail Image

It's Not Just Taylor Swift -- All Women Are at Risk From the Rise of Deepfakes

2024-02-01
Glamour UK
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornography without consent, which is a direct violation of human rights and causes harm to individuals and communities. The article details actual harm occurring through the circulation of these AI-generated images, including psychological distress and reputational damage. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to communities).
Thumbnail Image

Deepfake porn: a rising tide of misogyny

2024-02-03
The Week
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake images and audio that have been used maliciously to harm individuals (sexual humiliation of women via deepfake porn) and to disrupt democratic processes (deepfake audio calls to voters). These harms fall under violations of rights and harm to communities. The AI systems' use directly led to these harms, meeting the criteria for an AI Incident. The article also notes the challenges in regulation and enforcement, but the primary focus is on the realized harms caused by AI-generated content, not just potential or future risks.
Thumbnail Image

EXPLAINER-Taylor Swift and deepfake porn: What's the law? | Law-Order

2024-02-01
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic images, which have been widely viewed and cause harm to the individuals depicted, constituting violations of rights and harm to communities. The article details realized harm from the use of AI-generated deepfakes and the challenges in legal recourse, thus meeting the criteria for an AI Incident. The AI system's use directly leads to harm (non-consensual sexual abuse via fabricated images), fulfilling the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Taylor Swift and deepfake porn: What's the law? | BreakingNews.ie

2024-02-01
Breaking News.ie
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake pornography targeting Taylor Swift, which is a clear example of harm caused by AI systems through non-consensual sexual content. The harm includes violations of privacy and potential psychological and reputational damage, fitting the definition of harm to persons and violation of rights. The article also details the legal and enforcement challenges, but the core event is the realized harm caused by AI-generated content. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Taylor Swift and deepfake porn: what's the law?

2024-02-02
Free Malaysia Today
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (deepfake technology) to create fabricated pornographic images of Taylor Swift, which have been widely disseminated, causing harm to her and others targeted. The harms include violations of privacy, potential psychological harm, and reputational damage, which fall under violations of human rights and harm to individuals. The article also discusses the legal and enforcement challenges, but the primary focus is on the realized harm caused by the AI-generated deepfakes. Hence, this is an AI Incident as the AI system's use has directly led to significant harm.
Thumbnail Image

Taylor Swift: What Are the AI Videos? How Are the Deepfakes Made?

2024-02-01
ComingSoon.net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI and GANs) used to create deepfake videos that impersonate a real person without consent, causing harm to the individual's reputation and privacy, which constitutes a violation of rights. The harm is realized as these videos are circulating online, including on adult websites, leading to outrage and calls for legal reform. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.
Thumbnail Image

Deepfake images continue to cloud social media. Can they be stopped?

2024-02-02
TribLIVE
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake content that has already caused significant harm, including nonconsensual pornographic videos and reputational damage, which are direct harms to individuals and communities. It describes realized harms rather than just potential risks, such as financial and reputational harm, privacy violations, and legal challenges arising from AI deepfakes. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm. The article also discusses ongoing societal and legal responses, but the primary focus is on the harms caused by AI deepfakes already occurring, not just potential future risks or complementary information.
Thumbnail Image

Deepfake porn: It's not just about Taylor Swift

2024-02-03
Post and Courier
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Microsoft's AI image generator to create nonconsensual deepfake pornographic images of Taylor Swift and others, which have been disseminated widely, causing reputational and psychological harm. This directly involves the use of an AI system (image generator) leading to violations of rights and harm to individuals and communities. The harm is realized, not just potential, as the images went viral and caused public outrage and distress. The discussion of legal challenges and legislative responses further supports the classification as an AI Incident rather than a mere hazard or complementary information. Hence, the event meets the criteria for an AI Incident under the OECD framework.
Thumbnail Image

It's not just Taylor Swift; all women are at risk from the rise of deepfakes

2024-01-31
Glamour UK
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to generate non-consensual pornographic content, which directly leads to harm including violations of rights, psychological harm, and reputational damage. The article details actual incidents of harm, not just potential risks, and highlights the systemic nature of this abuse against women. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.
Thumbnail Image

Taylor Swift deepfakes cause stir at White House

2024-02-02
Niche Gamer
Why's our monitor labelling this an incident or hazard?
The article describes the creation and use of AI-generated deepfake images of Taylor Swift that are explicit and harassing in nature. Deepfakes are realistic fake images or videos produced by AI systems. The harm here is a violation of personal rights and harassment, which falls under violations of human rights or breaches of obligations protecting fundamental rights. The harm is realized, not just potential, as the images have been created and caused concern at the White House level. Hence, this is an AI Incident involving the use of AI systems to generate harmful content.
Thumbnail Image

The Taylor Swift deepfake scandal: 'It's men telling a powerful woman to get back in her box'

2024-01-31
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI for deepfake creation) whose use has directly led to significant harm to individuals, including violations of rights, psychological harm, and reputational damage. The article documents actual incidents of harm, not just potential risks, and highlights the role of AI in facilitating the creation and spread of abusive content. This meets the criteria for an AI Incident as the AI system's use has directly caused harm to persons and communities.
Thumbnail Image

Can Taylor Swift save humanity from AI's dark side? - Taipei Times

2024-02-01
Taipei Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI image generators) to create unauthorized deepfake pornography, which has directly caused harm to individuals, including psychological distress and reputational damage, fulfilling the criteria for an AI Incident. The article discusses realized harms (not just potential) and the role of AI in enabling these harms. It also covers societal and legal responses, but the primary focus is on the harm caused by AI-generated deepfakes, making this an AI Incident rather than a hazard or complementary information.
Thumbnail Image

The danger of deepfakes goes far beyond Taylor Swift

2024-02-01
New Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake pornography, which involves AI systems that superimpose faces onto pornographic images. The harm described includes violations of privacy, consent, and dignity, which fall under violations of human rights and harm to communities. The proliferation of such content on social media platforms and the resulting distress to victims constitute realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the use of AI systems to create and disseminate non-consensual deepfake pornography.
Thumbnail Image

My traditional Christian family have no clue about my sex secret

2024-02-03
The Irish Sun
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is created using AI systems that generate realistic but fake videos by manipulating images of real people without their consent. This directly leads to violations of human rights, specifically privacy and consent, and can cause significant harm to the victims depicted. The article describes the existence and use of such AI-generated content and the harm it causes, which fits the definition of an AI Incident. Although the harm to the person writing the letter is indirect, the AI system's use has directly led to violations of rights and harm to others. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

We Are In The Middle Of An AI Deepfake Porn Crisis

2024-02-02
Junkee
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake pornography, which is an AI system application that manipulates images and videos to create realistic but fake explicit content without consent. The harms described include violations of privacy, consent, and human rights, as well as psychological and social harm to victims, including minors. These harms have already occurred and are ongoing, fulfilling the criteria for an AI Incident. The article also mentions the role of AI in enabling these harms and the urgent need for regulatory and societal responses, confirming the direct link between AI system use and realized harm.
Thumbnail Image

Deepfake porn: It's not just about Taylor Swift

2024-02-03
Lexington Herald Leader
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate nonconsensual deepfake pornographic images, which have directly caused harm to individuals' privacy and dignity, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article documents actual occurrences of harm, including viral fake images and the spread of AI-generated child sexual abuse material, not just potential risks. Therefore, it is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

The Rise of Deepfake Images on Social Media Raises Concerns and Legal Questions

2024-02-02
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI deepfake generators) whose use has directly led to harms including privacy violations, reputational harm, and misinformation affecting communities. These harms fall under violations of rights and harm to communities. The article also discusses ongoing and realized harms rather than just potential risks. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The focus is on the harms caused by AI-generated deepfakes and the need for legal and ethical responses, not merely on updates or general AI news.
Thumbnail Image

Cabaero: Taylor Swift, child pornography, and AI

2024-02-03
Sun.Star Network Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake pornography, which violates privacy and intellectual property rights and involves child exploitation, a serious harm. The mother's illegal business is directly affected by AI deepfakes, which can create new harmful content without consent. The use of AI to generate non-consensual pornographic content and the resulting harms meet the criteria for an AI Incident, as the AI system's use has directly led to significant harm (violation of rights and exploitation).
Thumbnail Image

Will Taylor Swift's AI deepfake problems prompt Congress to act?

2024-02-01
GZERO Media
Why's our monitor labelling this an incident or hazard?
The creation and distribution of non-consensual pornographic deepfakes using generative AI constitutes a direct violation of individual rights and causes significant psychological harm, fitting the definition of an AI Incident. The article highlights actual harm caused by the AI system's use (non-consensual deepfake generation) and the resulting trauma. Although it also discusses potential legislative responses, the primary focus is on the realized harm from the AI system's misuse, making this an AI Incident rather than a hazard or complementary information.
Thumbnail Image

AI brings deepfake pornography to the masses, as Canadian laws play catch-up

2024-02-03
CHEK
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate deepfake pornography, which directly leads to harm including violations of privacy, psychological harm, and reputational damage to individuals, including minors and public figures. The misuse of AI to create and distribute non-consensual explicit images fits the definition of an AI Incident because the AI system's use has directly led to significant harm to persons and communities. The article also discusses legal responses and societal impacts, but the primary focus is on the realized harms caused by AI-generated deepfake pornography.
Thumbnail Image

Opinion: The chilling lesson of the Taylor Swift deepfakes - KION546

2024-01-31
KION546
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system (deepfake technology) being used to create sexually explicit videos without consent, which constitutes a violation of human rights and personal dignity. The harm is realized and ongoing, as these deepfakes are being widely viewed and shared, causing direct harm to individuals. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to significant harm to individuals and communities through non-consensual explicit content and related abuses.
Thumbnail Image

Taylor Swift deepfakes: new technologies have long been weaponized against women. The solution involves us all

2024-02-02
Knowridge Science Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images, which are sexually explicit and non-consensual, constituting a violation of rights and harm to the targeted individual and communities. The article reports that these deepfakes went viral and caused real harm, fulfilling the criteria for an AI Incident. The AI system's use in generating and spreading harmful content is directly linked to the harm described. Therefore, this is classified as an AI Incident.
Thumbnail Image

Deepfake porn requires immediate action

2024-02-01
The Johns Hopkins News-Letter
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornography, which directly leads to harm by violating individuals' rights and causing psychological and reputational damage. The article describes realized harm through the circulation of explicit deepfake content and the targeting of victims, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The discussion of legislative gaps and technological detection efforts serves as complementary information but does not negate the presence of actual harm caused by AI misuse.
Thumbnail Image

AI brings deepfake pornography to the masses, as Canadian laws play catch-up

2024-02-03
Lethbridge News Now
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used to create deepfake pornography, which has caused real harm through non-consensual distribution of intimate images, including cases involving minors. The harms include violations of privacy, potential defamation, and psychological harm to victims, fulfilling the criteria for harm to persons and communities. The involvement of AI in generating these images is central to the incident. Legislative efforts to address these harms are described but do not negate the fact that harm has already occurred. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

AI brings deepfake pornography to the masses, as Canadian laws play catch-up

2024-02-03
980 CJME
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate fake explicit images (deepfakes) that are distributed without consent, causing harm to individuals including underage girls and public figures. This constitutes violations of rights and harm to communities. The harms are realized and ongoing, with examples of images being widely viewed and circulated. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to significant harm as defined in the framework.
Thumbnail Image

Can Taylor Swift Save Humanity From AI's Dark Side?

2024-02-03
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (image generators) used to create illicit deepfake pornography, which directly harms individuals' rights and causes community harm. The flooding of a social media platform with such content and the inability to effectively moderate it demonstrate a direct AI Incident. The harm is not hypothetical but has occurred and is significant, meeting the criteria for an AI Incident under violations of rights and harm to communities.
Thumbnail Image

Will the Taylor Swift AI deepfakes finally make governments take action? | CBC Arts

2024-02-01
CBC News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images without consent, which were then widely shared, causing harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights (non-consensual use of likeness) and harm to the individual and community. The article discusses the development and use of generative AI tools to create harmful content, the viral spread of this content, and the societal and governmental reactions, all indicating realized harm rather than just potential harm or complementary information. Therefore, the classification is AI Incident.
Thumbnail Image

Congress Might Actually Do Something About AI, Thanks to Taylor Swift

2024-02-02
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article describes an ongoing harm caused by AI systems generating and distributing non-consensual deepfake pornographic images, which constitutes a violation of rights and harm to individuals (harm categories c and d). The legislation introduced is a societal and governance response to this AI Incident. The article also mentions the AI robot as a separate development without any associated harm or hazard. Therefore, the main focus is on an AI Incident involving harm caused by AI-generated deepfake porn. The legislative response is complementary information but does not overshadow the primary incident. Hence, the classification is AI Incident.
Thumbnail Image

Taylor Swift Deepfakes: New Technologies Have Long Been Weaponised Against Women

2024-02-03
The Wire
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake pornography that has been widely distributed and caused harm to individuals, particularly women. The harms include mental health issues, reputational damage, and violations of rights, which fall under the definition of AI Incident. The AI system's use (deepfake generation) directly led to these harms. Although the article also discusses broader societal responses and the need for legal reform, the primary focus is on the realized harms caused by AI deepfakes, not just potential or future risks. Hence, the classification is AI Incident.
Thumbnail Image

MIL-OSI Global: Taylor Swift deepfakes: a legal case from the singer could help other...

2024-01-31
foreignaffairs.co.nz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of Taylor Swift that were distributed widely, causing harm to her and highlighting the broader issue of AI-generated non-consensual pornography. The AI system's use directly led to violations of rights and psychological harm, fitting the definition of an AI Incident. The article also discusses legal and policy responses, but the primary focus is on the harm caused by the AI system's use, not just complementary information or potential future harm. Hence, the classification is AI Incident.
Thumbnail Image

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
St. Louis Post-Dispatch
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake images, which are AI-generated manipulated content. The harm is realized as these images are nonconsensual, explicit, and abusive, violating Taylor Swift's rights and causing harm to her and the community. The AI system's use in generating and spreading these images directly leads to this harm, fitting the definition of an AI Incident. Other parts of the article about celebrity news are unrelated to AI incidents.
Thumbnail Image

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Roanoke Times
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems that generate realistic but fake content. The article explicitly mentions the spread of these AI-generated explicit images, which is a direct harm to Taylor Swift's rights and dignity, as well as a broader harm to communities exposed to such abusive content. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in creating and distributing nonconsensual pornographic deepfakes.
Thumbnail Image

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Richmond Times-Dispatch
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic manipulated images or videos. The article explicitly mentions the spread of pornographic deepfake images of Taylor Swift, which are non-consensual and abusive, causing harm to her reputation and privacy. The AI system's use in creating and disseminating these images directly leads to violations of rights and harm to the individual and community. Hence, this is an AI Incident as the harm is occurring and directly linked to the AI system's outputs.
Thumbnail Image

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Omaha.com
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems capable of generating realistic but fake visual content. The article explicitly mentions the spread of these AI-generated explicit images, which is a direct use of AI technology causing harm to an individual (Taylor Swift) and potentially to communities by spreading abusive and non-consensual content. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. Other parts of the article unrelated to AI do not affect this classification.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
JournalStar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread using AI systems. The harm is realized: the nonconsensual explicit images inflict reputational and emotional damage on Taylor Swift and violate her rights. Because the AI system's use directly leads to this harm, the event qualifies as an AI Incident under the framework.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Press of Atlantic City
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems that generate realistic but fake content. The article explicitly mentions the spread of these AI-generated explicit images causing harm to Taylor Swift and her community. This is a clear case of an AI system's use leading directly to harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Greensboro News and Record
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated images or videos. The article explicitly mentions the circulation of AI-generated pornographic deepfake images of Taylor Swift, which is a direct violation of her rights and causes harm. The harm is realized as the images are actively spreading on social media, causing reputational and emotional damage. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI systems in causing harm through non-consensual explicit content dissemination.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated using AI systems that synthesize realistic but fake images. The article explicitly mentions the spread of such AI-generated explicit images of Taylor Swift, which is a clear violation of her rights and a form of abuse. The involvement of AI in creating these images and their distribution on social media platforms directly causes harm to the individual and community, fitting the definition of an AI Incident.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
The Quad-City Times
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems that generate realistic but fake content. The article explicitly mentions the spread of nonconsensual pornographic deepfake images of Taylor Swift, which is a clear violation of rights and causes harm. The AI system's involvement in creating and disseminating these images directly leads to the harm described. Therefore, this event qualifies as an AI Incident due to violations of rights and harm to communities caused by the AI-generated content.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
missoulian.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread using AI systems. The harm is realized: the nonconsensual distribution of explicit images violates Taylor Swift's rights and damages her reputation and privacy. The article describes the actual circulation of these images, not merely a potential risk, so the event meets the criteria for an AI Incident rather than a hazard or complementary information. The AI system's role in generating the deepfakes, and the direct harm caused by their spread, justifies this classification.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread using AI systems. The harm is realized: the nonconsensual explicit images inflict reputational and emotional damage on Taylor Swift and violate her rights. The article describes the actual circulation of these images, not merely a potential risk, so the event meets the criteria for an AI Incident due to the direct involvement of AI systems in causing harm through nonconsensual deepfake pornography.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Waterloo Cedar Falls Courier
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems that generate realistic but fake content. The article explicitly mentions the spread of these AI-generated explicit images, which is a direct use of AI technology causing harm to an individual (Taylor Swift) and communities (through abusive and non-consensual content). This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article does not focus on responses or updates but on the ongoing harm caused by the AI-generated deepfakes.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
La Crosse Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread using AI systems. The harm is realized as the nonconsensual explicit images are circulating widely, causing reputational and emotional harm to Taylor Swift. This meets the criteria for an AI Incident because the AI system's use has directly led to a violation of rights and harm to a person. The article does not merely discuss potential or future harm, nor is it a general AI-related news item or a response to a past incident, so it is not an AI Hazard or Complementary Information.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems capable of generating realistic but fake visual content. The spread of non-consensual explicit deepfake images of Taylor Swift is a direct harm caused by the use of AI technology, violating her rights and causing reputational and emotional harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and communities. The article does not merely discuss potential harm or responses but reports on actual harm occurring due to AI-generated content.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
The Eagle
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems capable of generating realistic but fake content. The article explicitly mentions the spread of non-consensual pornographic deepfake images of Taylor Swift, which is a direct harm to her rights and dignity. The involvement of AI in creating these images and their circulation on social media platforms directly leads to harm as defined by violations of human rights and harm to communities. Hence, this is classified as an AI Incident.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Magic Valley
Why's our monitor labelling this an incident or hazard?
Deepfake images are created using AI systems capable of generating realistic but fake content. The article explicitly mentions the spread of sexually explicit deepfake images of Taylor Swift, which is a direct harm involving violation of rights and harm to the community. The AI system's role is pivotal in creating and disseminating these images. Hence, this is an AI Incident due to realized harm caused by AI-generated content.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
HeraldCourier.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and spread using AI systems. The harm is realized: the nonconsensual distribution of explicit images violates Taylor Swift's rights and damages her reputation and privacy. Because the AI system's use directly leads to this harm, the event qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
SCNow
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images, which are created and disseminated using AI systems. The harm is realized: the nonconsensual pornographic images inflict reputational and emotional damage on Taylor Swift, violating her rights. Because the AI system's use directly leads to this harm, the event qualifies as an AI Incident under the framework, specifically as a violation of human rights or a breach of obligations intended to protect fundamental rights.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
McDowellNews.com
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated using AI systems capable of creating realistic but fake content. The article explicitly mentions the spread of non-consensual pornographic deepfake images of Taylor Swift, which is a clear violation of personal rights and causes harm to the individual and community. The AI system's role in creating and disseminating these images is pivotal to the harm caused. Hence, this is an AI Incident due to realized harm stemming from AI misuse.

Explicit AI images of Taylor Swift spread on social media, Megan Thee Stallion releases music, and more celeb news

2024-01-29
Statesville.com
Why's our monitor labelling this an incident or hazard?
Deepfake images are generated using AI systems that create realistic but fake content. The article explicitly mentions the spread of pornographic deepfake images of Taylor Swift, which is a direct harm involving violation of rights and harm to communities. The AI system's role in generating and disseminating these images is pivotal to the harm. Hence, this is an AI Incident as per the definitions provided.