AI-Generated Deepfake Nudes of Actress Isis Valverde Spark Legal Action in Brazil


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Brazilian actress Isis Valverde became the victim of AI-generated deepfake nude images, which were falsely attributed to her and circulated widely online. Valverde reported the incident to cybercrime authorities and initiated legal action to hold internet providers accountable, highlighting the privacy violations and reputational harm caused by AI-enabled image manipulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the use of an AI-powered application to create digitally altered images falsely depicting the actress nude. This misuse of AI technology directly harmed the individual through a violation of personal rights and a cybercrime. Because the use and misuse of an AI system caused harm, the event fits the definition of an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Accountability; Robustness & digital security; Transparency & explainability; Safety; Human wellbeing

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security; Government, security, and defence

Affected stakeholders
Workers; General public

Harm types
Reputational; Human or fundamental rights; Psychological

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Isis Valverde files a police report after appearing nude in doctored images

2023-10-27
cidadeverde.com
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI-powered application to create digitally altered images falsely depicting the actress nude. This misuse of AI technology directly harmed the individual through a violation of personal rights and a cybercrime. Because the use and misuse of an AI system caused harm, the event fits the definition of an AI Incident.

Isis Valverde's lawyer details the crime committed against the actress: "Destructive"

2023-10-26
anamaria
Why's our monitor labelling this an incident or hazard?
The article describes an AI-powered application that illegally generates fake nude images by placing a person's face onto another body, a clear misuse of AI technology. This misuse harmed Isis Valverde by spreading false and damaging content, causing psychological distress and reputational damage. It fits the definition of an AI Incident because the AI system's use directly led to harm, including violations of rights, and could harm communities if such misuse becomes widespread.

Isis Valverde's lawyer speaks about nudes attributed to the actress: 'a despicable photographic montage'

2023-10-26
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI-powered application to generate fake nude images by face-swapping, which is a misuse of AI technology causing harm to the individual involved. This constitutes a violation of rights and a clear harm caused by the AI system's malicious use. Since the harm has already occurred and legal actions are underway, this qualifies as an AI Incident under the framework.

Isis Valverde's lawyer reveals measures taken after the leak of fake nudes and points to proof that the actress is not the one in the photos

2023-10-26
Terra
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI application was used to create fake nude images of the actress, which were then disseminated online, constituting a virtual crime and harm to the actress. This meets the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and reputational harm). The involvement of AI is clear, the harm is realized, and legal actions are being pursued. Therefore, this is classified as an AI Incident.

Isis Valverde goes to the police after fake nude images are circulated

2023-10-26
uol.com.br
Why's our monitor labelling this an incident or hazard?
The article describes the use of digital montage techniques to create false nude images of Isis Valverde. Such manipulations typically rely on AI systems such as deepfake tools or generative adversarial networks (GANs). The harm is the violation of privacy and the reputational damage to the individual, which constitutes a violation of rights under applicable law. Because the images were disseminated, the harm is realized. This therefore qualifies as an AI Incident: the AI system played a central role in creating harmful fake content that infringes on personal rights.

Isis Valverde's legal team speaks out after nudes controversy: 'Manipulation'

2023-10-26
Quem
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or digital manipulation tools to create false nude images of the actress, which is a violation of her rights and illegal. The AI system's use in generating these fake images directly leads to harm in terms of violation of privacy and rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-enabled manipulation and distribution of fraudulent content.

Actress Isis Valverde has AI-generated fake nudes posted on social media

2023-10-27
TecMundo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI application was used to alter real images to create false nude images of the actress Isis Valverde, which were then shared on social media. This use of AI directly led to harm by violating the actress's privacy and potentially other rights, triggering legal action. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through malicious use and violation of rights.

'Deep nude': Isis Valverde has fake nudes circulated and takes action

2023-10-27
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI technology (Deep Nude) to manipulate images of a public figure, creating fake nude photos without consent. This use of AI directly leads to harm by violating the individual's privacy and potentially their rights, which fits the definition of an AI Incident involving violations of human rights or breach of obligations intended to protect fundamental rights.

Fake nudes: actress Isis Valverde has AI-created nude images published on the internet

2023-10-27
TudoCelular.com
Why's our monitor labelling this an incident or hazard?
The creation and publication of AI-generated fake nude images of a person without consent is a direct violation of personal rights and privacy, which falls under violations of human rights and applicable laws protecting individual dignity. The AI system's use in generating these images directly led to harm to the individual involved. Therefore, this event qualifies as an AI Incident.

Isis Valverde is the victim of a crime after fake nudes are circulated

2023-10-26
O Povo
Why's our monitor labelling this an incident or hazard?
The incident involves the use of AI or advanced digital manipulation tools to create fake images that harm the actress's reputation and privacy. The harm is realized as the images were disseminated online, constituting a violation of rights and illegal content distribution. Therefore, this qualifies as an AI Incident because the AI system's use in image manipulation directly led to harm to the individual (violation of rights).

Isis Valverde takes action after having fake nudes "leaked"

2023-10-27
Hugo Gloss
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system to create manipulated images falsely depicting the actress nude, which is a form of digital impersonation and defamation. The AI system's illicit use has directly caused harm to the actress's reputation and psychological well-being, fulfilling the criteria for an AI Incident under violations of rights and harm to individuals. The involvement of AI in generating the fake images is explicit, and the harm is realized, not just potential.

Isis Valverde's defence responds to the actress's alleged nudes: 'Destructive potential'

2023-10-26
CARAS Brasil
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (an application capable of creating realistic face-swapped nude images) to produce fabricated content that harms the reputation and privacy of an individual. Although the images are fake, the harm (violation of privacy, potential reputational damage) is occurring due to the AI-generated content. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident because the AI system's use has directly led to harm through malicious creation and dissemination of false intimate images.

What happened to Isis Valverde? The artist had fake intimate photos leaked

2023-10-26
CARAS Brasil
Why's our monitor labelling this an incident or hazard?
The event describes a cybercrime involving the creation and dissemination of falsified intimate images using digital editing tools, which can reasonably be inferred to involve AI or AI-related systems (e.g., deepfake or advanced image manipulation). The harm is realized as it violates privacy rights and causes reputational damage, fitting the definition of an AI Incident due to direct harm caused by the AI system's use in creating and spreading manipulated content. The legal response and police involvement further confirm the seriousness of the harm.

Exclusive! Isis Valverde's lawyer explains the crime suffered by the actress: "Destructive potential"

2023-10-26
Contigo!
Why's our monitor labelling this an incident or hazard?
The article describes a cybercrime where an AI system was used to generate fake nude images by superimposing the actress's face onto another person's body. This is a direct misuse of AI technology causing harm to the individual’s privacy and reputation, constituting a violation of rights. The harm has already occurred, and legal measures are underway, making this an AI Incident rather than a hazard or complementary information. The AI system's role is pivotal as it enabled the creation of the harmful content.

Isis Valverde goes to the police to report fake nudes bearing her identity

2023-10-27
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake nude images falsely attributed to the actress, constituting a violation of her rights and a form of harm to her reputation and privacy. The use of AI to create and distribute such fraudulent content directly leads to harm, fitting the definition of an AI Incident involving violations of rights and harm to the individual. Therefore, this event qualifies as an AI Incident.

Isis Valverde takes legal action against fake intimate images on the internet

2023-10-26
Pipoca Moderna
Why's our monitor labelling this an incident or hazard?
While the creation of fake images likely involved AI-based image editing or deepfake technology, the article centers on the legal actions taken against the circulation of these images rather than on the AI system's malfunction or misuse causing harm. The harm is related to privacy violation and potential defamation, but the AI system is not explicitly identified as the cause or enabler of the harm in a way that meets the criteria for an AI Incident or AI Hazard. The article primarily reports on the legal and social response, making it Complementary Information rather than a new incident or hazard.

Isis Valverde calls in lawyers after appearing nude in montages that went viral on social media

2023-10-26
Jornal Midiamax | Notícias de Campo Grande e MS
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of manipulated images using digital tools that simulate nude photos of the actress. These manipulations are artificial and involve AI or similar image editing technologies. The harm caused includes violation of privacy and potential reputational damage, which falls under violations of rights. Since the AI system's use directly led to harm (the creation and spread of fake images), this qualifies as an AI Incident.

Lei Carolina Dieckmann: what does the law say about the leaking of intimate photos?

2023-10-27
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create and distribute manipulated intimate images (deepfakes) of a person without consent, which is a direct violation of personal rights and causes harm to the individual. The AI system's role in generating these images is pivotal to the harm. The article explicitly states the harm occurred and discusses legal responses to such AI-enabled crimes. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm to the person's rights and reputation.