AI-Generated Fake Nude Images of Rosalía Spark Outrage and Highlight Digital Sexual Violence


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Spanish singer Rosalía was targeted by rapper JC Reyes, who used AI to create and share fake nude images of her without consent. The incident, widely condemned as digital sexual violence, violated Rosalía's rights and dignity, sparking public backlash and renewed debate over AI misuse and online harassment.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI or AI-related digital tools to create altered images that infringe on privacy rights. This constitutes a violation of fundamental rights (privacy) through the use of AI-generated or AI-assisted content. Since the harm (privacy violation) has already occurred due to the dissemination of these manipulated images, this qualifies as an AI Incident under the framework.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Human wellbeing; Robustness & digital security; Safety; Transparency & explainability

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security

Affected stakeholders
Women

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


De las falsas fotos de Rosalía en toples hasta el video 'fake' de Gal Gadot: los riesgos de la IA

2023-05-24
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or AI-related digital tools to create altered images that infringe on privacy rights. This constitutes a violation of fundamental rights (privacy) through the use of AI-generated or AI-assisted content. Since the harm (privacy violation) has already occurred due to the dissemination of these manipulated images, this qualifies as an AI Incident under the framework.

Los fans de Rosalía atacan a JC Reyes por publicar fotos falsas de la cantante desnuda

2023-05-23
MARCA
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-edited fake nude photos of a singer, which is a clear violation of privacy and can be considered harm to the individual and the community. The AI system's use in generating these images directly led to reputational and emotional harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized, not just potential, as evidenced by the public reaction and controversy.

JC Reyes pide perdón a Rosalía: "No soy un violador"

2023-05-25
MARCA
Why's our monitor labelling this an incident or hazard?
The event describes the publication of AI-edited photos of Rosalía without her consent, which is a violation of her rights and causes harm to her reputation and privacy. The AI system's involvement is in the creation or manipulation of these images. The harm is realized, not just potential, as the incident caused public outcry and required a public apology. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (violation of rights and reputational harm).

Rosalía 'denuncia' a un artista español por la difusión de imágenes de su falso desnudo

2023-05-24
AS
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI-enabled or AI-adjacent image editing application to create and spread false intimate images of a person without consent, causing harm to her privacy and reputation. This fits the definition of an AI Incident because the AI system's use directly led to harm in terms of violation of rights and harm to the individual. Although no legal action has yet been taken, the harm has already occurred through the dissemination of these images and the resulting social backlash. Therefore, this event qualifies as an AI Incident due to realized harm linked to AI-enabled image manipulation.

Rosalía denuncia la foto falsa en la que aparece desnuda, creada y publicada por el cantante JC Reyes: "Es violencia"

2023-05-24
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The article describes a false explicit image of the singer Rosalía created and published by another artist using AI and photo editing tools. This manipulated image was shared publicly, causing harm to Rosalía's dignity and privacy, which is a violation of rights. The AI system's role in creating the fake image directly led to this harm. Therefore, this qualifies as an AI Incident under the definitions provided, as it involves realized harm (violation of rights) caused by the use of AI technology.

Rosalía arremete contra rapero por publicar foto falsa de ella en la que aparece desnuda: "Da asco"

2023-05-25
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
The incident involves the use of manipulated images, likely generated or altered by AI or digital editing tools, to create a false and non-consensual portrayal of a person. This constitutes a violation of personal rights and consent, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The AI system's role is inferred in the creation or editing of the fake photo, leading to harm to the individual's dignity and privacy. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-enabled manipulation and dissemination of the image.

Rosalía y JC Reyes: la polémica que vuelve a dejar claro que el cuerpo de la mujer "no es una mercancía"

2023-05-25
infobae
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-edited fake images (deepfakes) of Rosalía without her consent, which is a direct violation of her rights and constitutes harm to her reputation and dignity. The AI system's role in generating these manipulated images is pivotal to the incident. The harm is realized, not just potential, as the images were publicly shared and caused distress and public debate. This fits the definition of an AI Incident under violations of human rights and harm to communities. The event is not merely a hazard or complementary information, but a clear case of AI-enabled harm.

JC Reyes se defiende de polémica con Rosalía: "no soy un acosador sexual"

2023-05-25
El Universal
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating manipulated images of Rosalía, which were shared and caused reputational harm and public outrage. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the manipulated content was disseminated and led to social and reputational damage. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Rosalía responde a JC Reyes tras subir fotos suyas manipuladas en las que aparece desnuda: "Es un tipo de violencia y da asco"

2023-05-24
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or advanced digital editing tools to manipulate images of a person without consent, creating sexualized content that causes harm. This manipulation and dissemination of altered images directly leads to harm in the form of violation of personal rights and psychological harm, fitting the definition of an AI Incident. The AI system's use in image manipulation and the resulting harm to the individual are clear and direct.

JC Reyes pide disculpas a Rosalía por compartir sus fotos manipuladas: "Soy un hombre y sé cuándo he de reconocer mis errores"

2023-05-25
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event describes the use of digitally manipulated images created and shared by JC Reyes without consent, which is a violation of the right to privacy and image. The manipulation likely involved AI-based tools given the realistic nature of the edits. The harm is realized as it caused indignation, condemnation by the victim, and potential legal consequences. This fits the definition of an AI Incident because the AI system's use directly led to a violation of fundamental rights (privacy and image rights).

Críticas al cantante JC Reyes por publicar una foto editada de Rosalía desnuda: "Repugnante"

2023-05-23
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or AI-related image editing tools to create a manipulated, non-consensual image of a person, which was then publicly shared, causing harm to the individual's reputation and privacy. This aligns with the definition of an AI Incident as it involves harm to a person through violation of rights due to the use of an AI system (image editing AI). The harm is realized, not just potential, as public backlash and reputational damage have occurred. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

¿Quién es JC Reyes? El cantante responsable de las fotos de Rosalía editadas con IA

2023-05-24
Diario Sport
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create manipulated images of a public figure without consent, leading to reputational harm and a form of harassment. The AI system's use directly led to harm to the individual (Rosalía), fitting the definition of an AI Incident under violations of human rights or harm to persons. Therefore, this is classified as an AI Incident.

¿Quién es JC Reyes, el cantante español que quiso hacer fama con fotos manipuladas de Rosalía desnuda?

2023-05-26
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of manipulated photos of Rosalía, likely created or enhanced by AI techniques, to sexualize and disrespect her, which is a form of digital violence. This has caused harm to Rosalía's dignity and has led to widespread public backlash. The AI system's role in generating or manipulating the images is pivotal to the harm. The harm is realized, not just potential, as the manipulated images were widely disseminated and caused reputational and emotional harm. Hence, it meets the criteria for an AI Incident involving violation of rights and harm to communities.

¿Qué pasó entre Rosalía y JC Reyes? Lo encara por compartir fotos manipuladas de ella desnuda

2023-05-25
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of manipulated nude images of Rosalía without her consent, which is a clear violation of her privacy and a form of harm to her as an individual. The use of image editing (likely AI or digital manipulation tools) to produce these images and the subsequent sharing on social media directly led to reputational and emotional harm. This fits the definition of an AI Incident because the AI system's use (image manipulation) directly led to harm (violation of rights and sexualization).

Rosalía vuelve a plantar cara a JC Reyes por la foto manipulada en la que aparece desnuda

2023-05-25
okdiario.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of manipulated images that falsely depict Rosalía nude, which were spread on social media. The manipulation likely involved AI-based image editing or generation tools, given the nature of the fake photos. This led to a violation of Rosalía's rights, specifically regarding consent and sexualization, which is a breach of fundamental rights. The harm is realized as the images went viral and caused distress. Hence, the event meets the criteria for an AI Incident due to direct harm caused by AI-enabled image manipulation violating rights and consent.

Ni con el pétalo de una Rosalía

2023-05-26
okdiario.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic fake images, to create non-consensual explicit content of Rosalía. This use of AI directly led to harm to her dignity and reputation, which falls under violations of human rights and harm to communities. The malicious sharing of these AI-generated images constitutes an AI Incident as the harm is realized and directly linked to the AI system's use.

Irene Montero se suma a Rauw Alejandro en su apoyo a Rosalía por las fotografías falsas publicadas por JC Reyes

2023-05-24
20 minutos
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or digital editing tools to create false semi-nude images of the singer Rosalía, which were then published without her consent. This act directly harms her by violating her rights and dignity, constituting a form of violence. The AI system's involvement is inferred from the creation of manipulated images (deepfakes or similar AI-generated content). Since the harm has materialized and is recognized by the victim and public figures, this qualifies as an AI Incident under the framework's definition of violations of human rights and harm to individuals.

Rosalía explota tras publicación de supuestas fotos íntimas

2023-05-24
Milenio.com
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI was used to generate fake intimate photos of the singer Rosalía without her consent. This use of AI led to a violation of her rights and caused harm to her reputation and emotional well-being. The AI system's role in creating these manipulated images is central to the harm caused, fitting the definition of an AI Incident involving violations of human rights and harm to the individual. Therefore, this event qualifies as an AI Incident.

JC Reyes difunde foto íntima falsa de Rosalía; así respondió

2023-05-24
Milenio.com
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated fake intimate photos of the singer Rosalía. The AI system was used to generate false content that was shared publicly, leading to harm in the form of violation of personal rights and reputational damage. This fits the definition of an AI Incident because the AI system's use directly led to harm to an individual (violation of rights and harm to reputation).

El cantante que subió la foto falsa de Rosalía desnuda: "No soy un acosador sexual"

2023-05-25
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or AI-like image manipulation tools to create and distribute fake nude images of a person without consent, which constitutes a violation of rights and sexual harassment. The harm is realized and directly linked to the AI system's use (image editing software with AI capabilities). The incident has led to public harm, reputational damage, and potential legal consequences, fitting the definition of an AI Incident under violations of human rights and breach of obligations protecting fundamental rights.

JC Reyes se disculpa y dice que no es un acosador sobre fotos de Rosalía

2023-05-25
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the manipulated photos of Rosalía were created using Artificial Intelligence and were disseminated without her consent, constituting a violation of her rights and causing reputational harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the community through misinformation and image manipulation). The apology and public reaction confirm the harm has materialized. Therefore, this event is classified as an AI Incident.

Rosalía denuncia la foto 'fake' de ella desnuda creada por el cantante JC Reyes: "Es violencia"

2023-05-24
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event describes a manipulated image created using digital editing tools, likely involving AI-based image generation or enhancement techniques, which led to harm by violating Rosalía's rights and causing public distress. The AI system's use in creating the fake image directly led to harm (violation of rights and dignity). Therefore, this qualifies as an AI Incident under the framework's definition of harm to human rights through AI system use.

Rosalía, tras las fotos manipuladas que la mostraban desnuda: "El cuerpo de una mujer no es una mercancía para tu estrategia de marketing"

2023-05-24
eldiario.es
Why's our monitor labelling this an incident or hazard?
The manipulated images were created and shared using digital editing software, which can be considered an AI system or AI-enabled tool for content generation and manipulation. The use of these manipulated images to spread false narratives and sexualize the individual without consent directly violates human rights and causes harm to the person involved. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the use of AI-enabled image manipulation leading to violation of rights and harm to the individual.

¿Quién es JC Reyes, el rapero que publicó fotos falsas de Rosalía?

2023-05-25
Excélsior
Why's our monitor labelling this an incident or hazard?
The article explicitly states that JC Reyes used AI to retouch and create false nude images of Rosalía and shared them on social media. This is a direct use of AI-generated content leading to harm, specifically digital violence and violation of rights. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use directly led to harm to a person (violation of rights and harm to reputation).

JC Reyes se disculpa con Rosalía después de filtrar fotos falsas de la cantante

2023-05-25
Excélsior
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system to create and disseminate manipulated images that caused harm to Rosalía's reputation and personal dignity, which constitutes a violation of rights and harm to the individual. Since the harm has already occurred due to the sharing of AI-generated fake images, this qualifies as an AI Incident. The apology and public discourse are part of the aftermath but do not change the classification of the event as an AI Incident.

Rosalía responde a cantante que publicó una foto editada donde sale desnuda: "Da asco, da pena"

2023-05-24
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create edited images of a public figure without consent, which were then shared publicly, causing harm through sexualization and disrespect. The AI system's use in generating these images directly led to a violation of the artist's rights and caused reputational and emotional harm. The harm is realized, not just potential, and the AI system's role is pivotal in producing the false images. Hence, this is an AI Incident under the framework definitions.

"El cuerpo de una mujer no es propiedad pública": el descargo de Rosalía por unas fotos falsas de ella desnuda

2023-05-25
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of AI-manipulated images (photoshopped nude photos) of a woman without her consent, which is a clear violation of her rights and constitutes harm to her dignity and privacy. The AI system (image editing/manipulation technology) was used maliciously to create false content that caused reputational and emotional harm. This fits the definition of an AI Incident as the AI system's use directly led to violations of rights and harm to the individual. The event is not merely a potential hazard or complementary information but a realized harm caused by AI-enabled image manipulation.

Deepfake porn: ¿qué es el porno falso que afecta a estrellas como Rosalía y otras mujeres?

2023-05-26
El Financiero
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (deepfake technology) to create fake pornographic content without consent, which constitutes a violation of rights and digital sexual violence. The harm is direct and ongoing, as victims have experienced emotional distress and public harm. The AI system's use in generating these images is central to the harm described. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content violating human rights and causing harm to individuals and communities.

Difamación virtual: Rosalía confronta a rapero por difundir fotos falsas de ella desnuda

2023-05-25
Semana.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the false nude images were created by an AI system that removed clothing from photos of Rosalía. The dissemination of these AI-generated images caused harm to the individual by spreading false and sexualized content without consent, which is a violation of rights and can be considered defamation and virtual harassment. This harm is realized and ongoing, as evidenced by the public reaction and Rosalía's response. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in generating harmful content and the resulting violation of rights and harm to the individual.

Rosalía le planta cara al rapero que publica sus foto-fake desnuda: ''Das asco''

2023-05-23
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or AI-like image manipulation to create non-consensual sexualized images of a person, which constitutes a violation of rights and harm to the individual. The AI system's use (image manipulation) has directly led to harm (violation of rights and reputational harm). Therefore, this is an AI Incident according to the definitions provided.

Irene Montero, a los pies de Rosalía tras la polémica de sus fotos fake-desnuda: ''Dilo reina''

2023-05-25
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated or AI-assisted manipulated images (fake nude photos) shared without consent, which constitutes a violation of personal rights and sexualizes the individual, causing harm. This fits the definition of an AI Incident because the AI system's use in creating and disseminating these images has directly led to harm in terms of violation of rights and harm to the individual. Therefore, the event is classified as an AI Incident.

Rosalía denuncia acoso por FOTO falsa en la que aparece desnuda

2023-05-25
Medio Tiempo
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of a deepfake image generated by AI technology, which directly harms the individual by sexualizing and harassing her without consent. This falls under violations of human rights and harm to communities as defined in the framework. The AI system's use in generating the fake image is central to the harm caused, making this an AI Incident rather than a hazard or complementary information.

Rosalía explota contra rapero que compartió falsas fotografías de ella en topless

2023-05-25
Diario La Página
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to create a hyperrealistic fake image (deepfake) of Rosalía topless, which was shared publicly by the rap artist JC Reyes. This misuse of AI has directly led to harm in the form of harassment, sexualization, and violation of the singer's rights. The harm is realized and ongoing, as evidenced by Rosalía's public condemnation and the social media backlash. The AI system's role is pivotal as it enabled the creation of the false image that caused the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Rosalía arremete contra JC Reyes por publicar fotos editadas de la cantante desnuda

2023-05-23
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or digital editing tools to create manipulated images that falsely depict a person nude, which is a violation of personal rights and dignity. The harm is realized as the images were shared and caused reputational damage and distress. The AI system's involvement is reasonably inferred from the description of edited photos. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated manipulated content.

'Lo que da es pena' Rosalía se defiende de rapero que difundió fotografías manipuladas

2023-05-24
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article describes the use of manipulated photographs, which are likely created or altered using AI techniques, to falsely represent Rosalía. This misuse of AI-generated content leads to harm by violating privacy and potentially damaging reputation, fitting the definition of an AI Incident due to the realized harm caused by the AI system's outputs (manipulated images).

Rosalía, rotunda a JC Reyes tras publicar sus fotos como si estuviera desnuda: "Es un tipo de violencia"

2023-05-24
LaSexta
Why's our monitor labelling this an incident or hazard?
The event describes the creation and publication of falsified images using AI or digital manipulation techniques that directly harm Rosalía by sexualizing her without consent. This is a clear violation of her rights and is recognized by her as a form of violence. The AI system's use in generating these images directly leads to harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Rosalía denuncia que el cantante JC Reyes ha publicado una foto falsa suya desnuda

2023-05-24
HERALDO
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of a fake nude image of a person using AI or AI-like image manipulation tools, which directly leads to harm by violating the victim's privacy and sexualizing her without consent. This constitutes a violation of human rights and personal dignity. The AI system's involvement is in the generation of the manipulated image, and the harm is realized and ongoing. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

No soy un acosador: JC Reyes se disculpa con Rosalía después de filtrar fotos falsas de la cantante

2023-05-25
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated fake nude images of Rosalía, which is a direct violation of her rights and causes harm to her reputation and privacy. The AI system's use in generating these images and their public sharing led to harm, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The apology and public reaction confirm the harm has materialized.

Rosalía cargó contra rapero que publicó fotos trucadas: "Das pena"

2023-05-23
Cooperativa
Why's our monitor labelling this an incident or hazard?
The rapper published a doctored photo of Rosalía, manipulated content likely created or altered with AI or digital tools. Publishing it constitutes a violation of personal rights and harms the individual through sexualization and disrespect. Because the use of AI-enabled image manipulation directly led to this harm (violation of rights and emotional harm), the event fits the definition of an AI Incident.

El 'zasca' de Rosalía a un conocido cantante que publicó imágenes falsas suyas desnuda: "Da asco y pena"

2023-05-24
Antena3
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or AI-assisted image manipulation to create and spread false, sexualized images of a person without consent, which is a violation of personal rights and causes harm to the individual and community. The AI system's involvement is reasonably inferred from the description of manipulated images spreading on social media. The harm is realized (reputational, emotional, and rights violation). Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Rosalía le dijo cuatro al rapero JC Reyes por la foto de los senos: ¡atrevido!

2023-05-25
Noticia al Día
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI or digitally manipulated image (edited photo) of Rosalía without her consent, which was disseminated publicly causing harm to her privacy and dignity. This fits the definition of an AI Incident because the AI system's use (image editing/manipulation) directly led to a violation of personal rights and harm to the individual. The harm is realized, not just potential, and involves breach of privacy and sexualization, which are violations of fundamental rights. Therefore, this event qualifies as an AI Incident.

Rosalía habla sobre la foto manipulada que se ha movido en las redes para que parezca que está desnuda: "Es un tipo de violencia y da asco"

2023-05-24
Público.es
Why's our monitor labelling this an incident or hazard?
The event describes a manipulated image of a public figure created and shared on social media, which is a known use case of AI-based image manipulation or deepfake technology. The harm is realized as it constitutes a violation of personal rights and is a form of gender-based violence. Therefore, this qualifies as an AI Incident because the AI system's misuse directly led to harm to the individual and potentially to communities by perpetuating misogynistic behavior.

Las mujeres somos sagradas y se nos respeta: Rosalía

2023-05-25
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated manipulated images (deepfakes) of the singer Rosalía without her consent. This use of AI directly leads to a violation of her rights and dignity, which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the images were publicly shared and caused indignation and distress. Therefore, this event qualifies as an AI Incident.

El cantante JC Reyes publicó fotos falsas de Rosalía desnuda y ella estalla

2023-05-25
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The use of edited or fake images to sexualize and disrespect a person without consent involves AI systems capable of generating or manipulating images. This misuse has directly led to harm in terms of violation of rights and emotional distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated manipulated content.

Critican a reguetonero JC Reyes por publicar fotos truqueadas de Rosalía y la cantante responde

2023-05-24
T13 (teletrece)
Why's our monitor labelling this an incident or hazard?
The event describes the creation and sharing of digitally altered images (via Photoshop) that sexualize Rosalía without her consent. Manipulating images in this way with AI or digital editing tools directly caused reputational and emotional harm and violated personal rights, fitting the definition of an AI Incident. The harm is not merely potential: it has materialized, as evidenced by the public backlash and the artist's response.

Rosalía, la 'Motomami', molesta por filtración de fotos "íntimas" falsas

2023-05-25
Diario El Telégrafo
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions that the photos were created using artificial intelligence, constituting an AI system's involvement in generating false content. The sharing of these AI-generated fake intimate photos without consent constitutes a violation of personal rights and can be considered a form of harm to the individual, specifically a violation of human rights and dignity. Since the harm has already occurred through the dissemination of these images and the resulting distress, this qualifies as an AI Incident under the framework.

Rauw Alejandro y Aitana, contra la sexualización machista denunciada por Rosalía

2023-05-25
LOS40 - Todos Los Éxitos
Why's our monitor labelling this an incident or hazard?
The event involves the creation and dissemination of a fake nude image of a person without consent, which is a violation of rights and a form of harm. The creation of such a manipulated image likely involves AI technologies such as deepfake or generative AI, which are capable of producing realistic fake images. The harm is realized as it affects the individual's dignity, image, and potentially causes psychological harm. The article also mentions public denunciation and calls for legal action, indicating the harm is materialized. Hence, this is an AI Incident as the AI system's use in creating the fake image directly led to harm.

"Da asco", Rosalía denunció fuerte foto falsa de ella que publicó rapero

2023-05-25
Canal RCN | Nuestra Tele - Televisión y Entretenimiento
Why's our monitor labelling this an incident or hazard?
Although the event involves edited photos and viral social media spread, which often involve AI tools (for image editing or content recommendation), the article does not explicitly identify an AI system's malfunction or misuse as the cause of harm. The harm described, a violation of personal rights and non-consensual sexualization, is real, but AI's role is neither clearly pivotal nor direct: the rapper's claim of hacking and the social media dynamics are human-driven. The event therefore does not meet the criteria for an AI Incident or AI Hazard. It is not unrelated, since it touches on AI-adjacent issues, but its main focus is the social controversy and personal rights. Hence, it is Complementary Information.

"Da asco": Rosalía denuncia foto falsa en la que aparece desnuda

2023-05-25
www.vanguardia.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI or editing programs to create a fake photo of Rosalía, which was then shared publicly, leading to harm in the form of sexualization and disrespect. This constitutes a violation of rights and a form of harm to the individual, fitting the definition of an AI Incident where the AI system's use directly led to harm. Therefore, this is classified as an AI Incident.

Rosalía 'estalla' contra las fotos de sus falsos desnudos

2023-05-24
Republica.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and sharing of manipulated images of a person, harming her rights and dignity. The manipulation was done with Photoshop, whose editing features increasingly rely on AI. The harm is realized: the images were published and caused Rosalía distress. Although AI is not explicitly mentioned, the use of AI-based image editing tools can reasonably be inferred, so the event meets the criteria for an AI Incident.

Rosalía: Todo lo que se sabe de las FOTOS falsas filtradas de la cantante

2023-05-25
La Razón
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated fake intimate photos of Rosalía without her consent, which is a clear violation of her rights and causes harm to her reputation and personal integrity. The AI system's use in generating these images directly led to harm (emotional, reputational, and violation of privacy). Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

Rosalía denuncia foto falsa generada por IA en la que aparece desnuda - 24 Horas

2023-05-24
24 Horas
Why's our monitor labelling this an incident or hazard?
The event describes a specific instance where an AI-generated fake image caused harm to a person's reputation and dignity, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the harm has already occurred through the dissemination of the manipulated image and the victim's reaction, this qualifies as an AI Incident. The AI system's use in generating the fake image is central to the harm caused. There is no indication that this is merely a potential risk or a complementary update; the harm is realized and directly linked to the AI-generated content.

Reacción brutal de Rosalía al ver unas abyectas fotos editadas de ella sin ropa: "Da asco" - En Blau

2023-05-24
ElNacional.cat
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of photoshopped images of Rosalía, digitally manipulated to show her topless, which sexualize her and violate her rights. Image editing software of this kind can be considered an AI-assisted tool for content generation, and its use directly led to harm. That harm is realized and ongoing: the images were published, causing public outrage and distress to the individual. The event therefore qualifies as an AI Incident.

'Lo subí en plan risa', dice rapero que publicó foto editada de Rosalía en toples y pide disculpas

2023-05-25
El Siglo de Torreón
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions that the manipulated photo was created with Artificial Intelligence and was used to sexualize Rosalía without her consent, constituting digital violence. This misuse of AI-generated content has directly led to harm to the individual involved, including reputational and emotional harm, which falls under violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident.

Rosalía estalló contra reguetonero que publicó fotos suyas creadas con ayuda de inteligencia artificial: "lo que da es pena" | NTN24.COM

2023-05-25
NTN24 | Últimas Noticias de América y el Mundo.
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create manipulated nude images of Rosalía without her consent, which is a direct violation of her rights and a form of harm. The AI system's use in generating these images directly led to reputational and emotional harm to the artist, fitting the definition of an AI Incident under violations of human rights and harm to communities. The subsequent public reaction and consequences further confirm the harm caused. Therefore, this event qualifies as an AI Incident.

Rosalía denuncia violencia sexual tras manipulación de una foto

2023-05-23
Clase
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to manipulate a photo, which is an AI system involvement. The harm caused is a violation of rights and an act of sexual violence through image manipulation, which fits the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized, not just potential, as Rosalía has publicly denounced the act and it has caused distress and reputational harm. Therefore, this qualifies as an AI Incident.

La Nación / Enfurecen a Rosalía con fotomontaje al desnudo: "No somos mercancía"

2023-05-25
La Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a nude image montage of influencer Johanna Villalobos, and Photoshop manipulation for Rosalía's images, both without consent. The harms include violations of personal and possibly intellectual property rights, gender-based violence, and reputational harm. Since the AI-generated content has already been disseminated and caused harm, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights and harm to individuals.

La Nación / Enfurecen a Rosalía con fotomontaje al desnudo: "No somos mercancía"

2023-05-25
La Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the image of Johanna Villalobos was created using AI to remove clothing from her photo without consent, which is a direct misuse of AI technology causing harm to the individual by violating her rights and exposing her to harassment. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and gender-based violence). The Rosalía case involves Photoshop, not AI, so it is not an AI Incident. Therefore, the overall article reports an AI Incident related to the AI-generated nude image of the influencer.

Rosalía está molesta por fotos falsas de ella desnuda

2023-05-24
AM Querétaro
Why's our monitor labelling this an incident or hazard?
The event describes the use of altered images that appear to be manipulated versions of original photos, likely created using AI-based image editing or deepfake technology. The unauthorized sharing of these images has caused emotional harm and a violation of privacy and rights. Since the harm has occurred and is directly linked to the use of AI-generated manipulated content, this qualifies as an AI Incident under the definitions provided, specifically as a violation of rights and harm to the individual involved.

Rosalía indignada por fotos suyas falseadas con desnudos que circulan redes sociales

2023-05-25
Panamericana Televisión
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of digitally manipulated photos of Rosalía without her consent, sexualizing her and violating her rights. The image editing software involved can be considered an AI-assisted tool for content generation, and its use directly harmed the individual and provoked public outrage and indignation. The event therefore qualifies as an AI Incident.

Cantante que subió la foto falsa de Rosalía desnuda: 'No soy un acosador sexual'

2023-05-25
Dia a Dia
Why's our monitor labelling this an incident or hazard?
The event describes the creation and sharing of manipulated images using digital editing (Photoshop), which is a form of AI-related content generation or manipulation. The harm caused includes violation of the individual's rights and sexualization without consent, which fits under violations of human rights and breach of obligations to protect fundamental rights. Since the harm has already occurred and is directly linked to the use of AI/digital editing tools, this qualifies as an AI Incident.

¡'Das asco'! Rosalía responde a JC Reyes, el autor de las fotos manipuladas donde sale desnuda

2023-05-23
Dia a Dia
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of manipulated images that are likely AI-generated or AI-assisted, causing harm by sexualizing and disrespecting Rosalía without her consent. This constitutes a violation of personal rights and can be classified as harm to the individual (a form of harm to rights). The dissemination of such images has already occurred, so the harm is realized, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Eso es violencia: Rosalía explota contra JC Reyes por publicar foto falsa de ella desnuda

2023-05-24
El Sol de Tampico
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated manipulated content (a false nude image) that was published and caused harm to Rosalía by sexualizing and defaming her without consent. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The AI system's use directly led to reputational and emotional harm, and the incident involves misuse of AI-generated content for harmful purposes.

Rosalía estalla por la manipulación de unas fotos en las que parece desnuda

2023-05-25
Semana
Why's our monitor labelling this an incident or hazard?
The manipulated images were created and shared using AI or digital editing tools, leading to harm by sexualizing and disrespecting Rosalía without her consent. This is a clear violation of her rights and constitutes harm to an individual. The event describes actual harm caused by the AI-generated manipulated images, not just a potential risk. Therefore, it qualifies as an AI Incident due to the direct harm to human rights and dignity caused by the AI system's use in image manipulation and distribution.

Rosalía explota luego de la publicación de supuestas fotos íntimas

2023-05-27
El Heraldo de San Luis Potosi
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-generated images (deepfakes) to create and disseminate false intimate photos of Rosalía. This constitutes a violation of human rights, specifically the right to privacy and consent, and is a form of harm to the individual. Since the AI system's use directly led to this harm, it qualifies as an AI Incident under the framework's definition of violations of human rights and harm to individuals.

"Es un tipo de violencia y da asco": Rosalía reacciona a su fotografía falsa difundida en redes sociales

2023-05-24
infoLibre.es
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-powered photo editing to create a fake image that sexualizes the singer Rosalía without her consent. This misuse of AI technology has directly caused harm by violating her right to privacy and dignity, which is a breach of fundamental rights protected by law. The harm is realized and ongoing, as evidenced by the public backlash and the singer's denunciation. Hence, the event meets the criteria for an AI Incident due to the direct link between AI use and violation of human rights.

Rosalía responde a JC Reyes por publicar fotos falsas de desnudo: "El cuerpo de una mujer no es propiedad pública"

2023-05-24
Terra USA
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of fake nude images of Rosalía, edited and shared without her consent. Using Photoshop or similar AI-assisted image editing tools to create these images directly violated her privacy, consent, and dignity, which falls under violations of human rights. The event therefore qualifies as an AI Incident, because the use of the AI or digital editing system directly caused harm through the creation and spread of manipulated images.

Rosalía víctima de fake porn: "El cuerpo de una mujer no es propiedad pública"

2023-05-25
Montevideo Portal
Why's our monitor labelling this an incident or hazard?
The event describes the creation and spread of fake pornographic images of a public figure using image editing software, a form of AI-generated or AI-assisted content manipulation. This directly violated the artist's rights and personal dignity, fitting the definition of an AI Incident under violations of human rights or breaches of obligations intended to protect fundamental rights. Although the article does not explicitly mention AI, the use of Photoshop and image editing to create fake images is consistent with AI system involvement in content generation. The event therefore qualifies as an AI Incident.

JC Reyes publica desnudos falsos de Rosalía, pero la cantante no se queda callada: "Es un tipo de violencia y da asco"

2023-05-24
we are mitú — business and entertainment, culture and sport, movies and music
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of manipulated images (deepfake-like content) of a person without consent, which is a recognized form of AI-related harm involving the use of AI or advanced image editing techniques. This leads to reputational damage and emotional harm, fitting the definition of an AI Incident due to violation of rights and harm to the individual. The AI system's involvement is reasonably inferred from the nature of the manipulated images. Therefore, this event qualifies as an AI Incident.

Así ha sido la reacción de Rauw Alejandro a las polémicas fotos de Rosalía desnuda - Estrella Digital

2023-05-25
Estrella Digital
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-manipulated images without consent, which is a clear violation of personal rights and privacy. The harm is realized as the images went viral and caused distress to the artist. The AI system's role in generating the manipulated photos is pivotal to the harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

Rosalía estalla contra JC Reyes tras publicar falsos desnudos suyos: "Es un tipo de violencia y da asco"

2023-05-24
Bekia
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-related image manipulation (Photoshop-edited images) to create false nude images of Rosalía, which have been shared publicly causing harm to her reputation and emotional well-being. The AI system's use (image editing software with AI capabilities) directly led to a violation of rights and harm to the individual. The harm is realized and ongoing, not just potential. Hence, it meets the criteria for an AI Incident under violations of human rights and harm to communities (reputation and emotional harm).

Rosalía se enfureció por la difusión de su falso desnudo publicado por un artista español - Notife

2023-05-24
Notife
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI-based image editing application to create and spread false intimate images of Rosalía, leading to harm to her privacy, honor, and dignity. This fits the definition of an AI Incident because the AI system's use directly led to violations of personal rights and harm to the individual. Although no legal actions have been taken yet, the harm has already occurred through the dissemination and public reaction. Therefore, this is classified as an AI Incident.

Rosalía habla de la foto manipulada que ha movido en las redes para que parezca que está desnuda: «Es un tipo de violencia y da un asco»

2023-05-24
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The manipulated photo was created and disseminated using AI or advanced digital editing techniques, a use of an AI system that led to harm. That harm includes the violation of Rosalía's rights, sexual harassment, and psychological distress, fitting the definition of an AI Incident under violations of human rights and harm to individuals. The harm is actual, not merely potential, since the manipulated image was distributed. The event therefore qualifies as an AI Incident.

El 'zasca' viral de Rosalía al artista que ha manipulado sus fotos: "Faltando el respeto y sexualizando..."

2023-05-24
CADENA 100
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or advanced digital editing tools to manipulate images of a public figure, creating false and sexualized content that harms her reputation and dignity. The harm is realized as the manipulated images were shared and caused public backlash and distress. The AI system's use in image manipulation directly led to violations of rights and harm to the individual. Hence, it meets the criteria for an AI Incident.

Rosalía vuelve a pronunciarse contra el rapero que utilizó sus fotos falsas: "Las mujeres somos sagradas"

2023-05-25
MegaStarFM
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or AI-based image editing to create fake nude photos of Rosalía without her consent, which were then shared publicly. This directly leads to harm in terms of violation of personal rights, sexualization without consent, and emotional distress. The AI system's role in generating or editing these images is pivotal to the harm caused. Hence, it meets the criteria for an AI Incident under violations of human rights and harm to communities.

JC Reyes estalla tras las fotos manipuladas a Rosalía: "Ustedes no pueden desearle el mal a nadie"

2023-05-26
CADENA 100
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of Photoshop, an image manipulation tool with AI-powered features, to alter images of Rosalía. The manipulated images were used without consent and caused reputational harm and a violation of the right to honor, which falls under breaches of obligations protecting fundamental rights. The harm is realized and direct: the images were disseminated and caused public controversy and distress, and the artist's subsequent apology acknowledges the harm caused. This is therefore an AI Incident involving the misuse of AI-enabled image manipulation.

Rosalía responde a JC Reyes con un comunicado en el que defiende el cuerpo de las mujeres: "No es propiedad pública"

2023-05-25
Yasss
Why's our monitor labelling this an incident or hazard?
The article describes how JC Reyes used edited (likely AI-manipulated) images of Rosalía to sexualize her without consent, causing harm to her dignity and violating her rights. The AI system's involvement is reasonably inferred from the creation of false images (deepfakes or similar AI-generated content). This misuse of AI has directly led to harm (violation of rights and harm to the community of women facing similar abuses). Therefore, this qualifies as an AI Incident under the framework.

Por qué son violencia las fotos que hizo un rapero español de Rosalía

2023-05-25
nosotras.com.mx
Why's our monitor labelling this an incident or hazard?
The article describes how AI was used to generate fake topless images of Rosalía without her consent, which is a direct violation of her privacy and a form of digital sexual violence. The AI system's use here caused harm to the individual by infringing on her rights and dignity. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual).

Rosalía denuncia en redes que el cantante sevillano JC Reyes haya subido fotos falsas de ella desnuda

2023-05-24
Diario de Sevilla
Why's our monitor labelling this an incident or hazard?
The event describes the creation and sharing of images manipulated with Photoshop, an editing tool that increasingly incorporates AI-assisted features. The manipulated images harmed Rosalía's privacy and dignity, a violation of her rights. The harm is realized: the images were publicly shared and caused distress, meeting the criteria for an AI Incident. Although the AI involvement is indirect, it is pivotal in enabling the creation of the false images. The event is not general news, a product announcement, or a potential future harm; the harm has already occurred. Hence, the classification is AI Incident.

Rosalia planta cara al raper JC Reyes per la difusió d'unes fotos suposadament falses on apareix la cantant mostrant els seus pits nus

2023-05-25
Diari de Girona
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of manipulated images of Rosalia using image editing software (Photoshop), which likely involves AI techniques for image manipulation. The false images caused harm by violating Rosalia's rights and contributing to sexualization and disrespect, which are harms to the individual and community. The incident has already occurred, with public backlash and apology, indicating realized harm. Thus, it meets the criteria for an AI Incident due to the direct role of AI-assisted image manipulation in causing harm.

Reacció brutal de Rosalía en veure unes abjectes fotos editades d'ella sense roba: "Da asco" - En Blau

2023-05-24
ElNacional.cat
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of digitally manipulated images of Rosalía without her consent, which is a direct violation of her rights and causes harm to her reputation and dignity. The use of AI or digital editing tools to alter images in this way is a misuse of AI technology leading to harm. The harm is realized as Rosalía and her fans express outrage and call for legal action. This fits the definition of an AI Incident as the AI system's use (image editing) has directly led to harm (violation of rights and harm to community).

Rosalía esclata contra un raper per una foto falsa d'ella despullada

2023-05-25
ara.cat
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or digital editing tools to create false nude images of Rosalía, which were then distributed on social media, causing harm to her reputation and violating her rights. The AI system's involvement is in the creation of the manipulated images, which directly led to harm (violation of rights and harm to the individual). Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

Unes imatges de Rosalia amb els pits nus creen polèmica per ser, suposadament falses, i per difondre-les per les xarxes socials

2023-05-25
Regió 7
Why's our monitor labelling this an incident or hazard?
The event describes the use of image editing, likely AI or digital manipulation tools, to create false nude images of Rosalia, which were then shared on social media. Generating this misleading content violated her rights, consent, and dignity through non-consensual sexualization, and the harm is realized. The event therefore qualifies as an AI Incident involving a violation of rights and harm to the individual and the community.

Rosalía critica cantor espanhol por compartilhar fotos falsas dela nua

2023-05-25
Quem
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of manipulated images (likely AI or digital editing tools) to create false nude photos, which were shared publicly without consent. This constitutes a violation of personal rights and can be classified as harm to the individual, specifically a violation of human rights and personal dignity. The AI system's role is inferred in the creation or editing of the photos (e.g., AI-based image manipulation or deepfake technology). Since the harm has already occurred and is directly linked to the use of AI-generated or AI-assisted manipulated content, this qualifies as an AI Incident.

Rosalía afirma que cantor espanhol compartilhou fotos falsas dela nua: 'Desrespeito' | Celebridades | O Dia

2023-05-25
O Dia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the creation and sharing of false nude images of Rosalía, which are edited and falsely attributed to her. The use of AI or digital manipulation to create such images is reasonably inferred. The harm is realized as it violates her rights and causes reputational and emotional harm. This fits the definition of an AI Incident because the AI system's use (image manipulation) has directly led to harm (violation of rights and disrespect).

"É nojento": a fúria de Rosalía ao ver fotografias falsas nas redes sociais

2023-05-24
Jornal Expresso
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-enabled digital image manipulation (Photoshop) to create fake, sexualized images of Rosalía without her consent. This act has caused harm to her dignity and privacy, constituting a violation of rights. The AI system's use in creating these images directly led to this harm. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

Rosalía nua: JC Reyes espalha fotos falsas, fãs descobrem e cantora denuncia violência

2023-05-24
Purebreak Brasil
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-manipulated fake nude images of a person without consent, which is a clear violation of rights and a form of digital sexual violence. The use of AI or advanced image manipulation tools is reasonably inferred given the nature of the fake images described. The harm is realized as the images were shared publicly, causing distress and violation of the singer's rights. This fits the definition of an AI Incident as it involves harm to a person through the use of an AI system's outputs (manipulated images).