AI-Generated Fake Wedding Photos of Zendaya and Tom Holland Cause Public Confusion


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated fake wedding photos of Zendaya and Tom Holland circulated online, misleading the public and even the couple's close acquaintances. Zendaya addressed the incident on Jimmy Kimmel Live!, revealing that many people believed the images were real, which caused confusion and emotional distress within her social circle.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was involved in generating highly realistic fake images that caused misinformation and confusion among the public, leading to emotional reactions such as anger from people who believed the wedding had occurred. This constitutes harm to communities by spreading false information and misleading people, which fits the definition of an AI Incident. The AI system's use directly led to this harm through the creation and dissemination of deceptive content.[AI generated]
AI principles
Transparency & explainability; Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public; Other

Harm types
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Viral on the Internet, Zendaya Responds to Photos of Her Wedding with Tom Holland

2026-03-18
JawaPos.com
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating highly realistic fake images that caused misinformation and confusion among the public, leading to emotional reactions such as anger from people who believed the wedding had occurred. This constitutes harm to communities by spreading false information and misleading people, which fits the definition of an AI Incident. The AI system's use directly led to this harm through the creation and dissemination of deceptive content.

Zendaya says AI photos of her wedding with Tom Holland fooled many people

2026-03-18
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating realistic fake images that misled people, causing harm to the community by spreading misinformation and deception. This constitutes harm to communities as defined in the framework. Since the AI-generated content has already caused confusion and deception, this qualifies as an AI Incident rather than a potential hazard or complementary information.

Zendaya: Many Were Fooled by AI Photos of Her Wedding with Tom Holland

2026-03-18
Kabarin.com
Why's our monitor labelling this an incident or hazard?
The AI system was involved in generating realistic fake images that misled people, but the article does not report any direct or indirect harm such as injury, rights violations, or disruption. The event does not describe a plausible future harm scenario either, as it focuses on the social reaction to AI-generated images. The main focus is on the social phenomenon and public reaction to AI-generated content, which fits the definition of Complementary Information rather than an Incident or Hazard.

Zendaya says friends fell for AI photos of her wedding

2026-03-17
Diario de Cuiabá
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate fake images (content generation) that misled people, causing emotional harm (disappointment, feeling excluded). This constitutes harm to communities or individuals through misinformation and deception. Since the AI-generated images directly led to this harm, it qualifies as an AI Incident.

Zendaya responds to AI-made photos of a wedding with Tom Holland: 'Many people were fooled'

2026-03-17
Terra
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated photos that have misled many people into believing a false event (the wedding). This misinformation has caused emotional harm to people close to the celebrities and confusion among the public, which fits the definition of harm to communities. The AI system's use (generative AI creating fake images) directly led to this harm. Therefore, this event is an AI Incident rather than a hazard or complementary information.

Zendaya: Loved ones were "fooled" by fake wedding photos

2026-03-17
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false images that caused people to be misled and emotionally affected, including close family and friends. The AI-generated content directly led to misinformation and emotional harm, fulfilling the criteria for an AI Incident. Although the harm is non-physical, it affects communities and individuals' trust and emotional well-being, which is covered under harm to communities or violations of rights. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

It was all AI: Zendaya clarifies 'wedding' with Tom Holland

2026-03-18
Exame
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the images were created by AI and fooled many people, including those close to Zendaya, but it does not mention any resulting harm such as injury, rights violations, or significant disruption. The misinformation was eventually clarified by Zendaya herself. Since no harm has materialized and the event centers on the spread of AI-generated fake content without documented consequences, it does not qualify as an AI Incident. It also does not present a plausible future harm scenario beyond the misinformation already spread, so it is not an AI Hazard. The article mainly provides context and clarification about the AI-generated content and public reaction, fitting the definition of Complementary Information.

Zendaya reveals that friends believed AI photos of her supposed wedding

2026-03-18
Correio
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating fake wedding photos directly led to harm in the form of emotional distress and misinformation among Zendaya's social circle. This constitutes harm to communities and individuals due to the AI-generated content's deceptive nature. Therefore, this qualifies as an AI Incident because the AI-generated images caused realized harm through misinformation and emotional impact.

Zendaya comments for the first time on fake AI-made photos, but leaves doubt about a wedding with Tom Holland

2026-03-17
Monet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images that fooled many people, including close acquaintances of Zendaya. The AI system's use directly led to misinformation and deception, which is a form of harm to communities. The harm is realized, not just potential, as people believed the fake images were real. Therefore, this event qualifies as an AI Incident.

Zendaya responds to AI-made photos of a wedding with Tom Holland: 'Many people were fooled'

2026-03-17
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images that misled people, which is a form of misinformation. While this caused emotional reactions and confusion, the article does not describe any direct or indirect harm meeting the criteria for an AI Incident, such as violations of rights or significant harm to communities. Nor does it describe a plausible future harm scenario beyond the current misinformation. Therefore, it does not qualify as an AI Incident or AI Hazard. The article primarily provides complementary information about the impact of AI-generated content and public responses, fitting the definition of Complementary Information.

Zendaya comments on AI-generated photos of a supposed wedding with Tom Holland

2026-03-17
PAPELPOP
Why's our monitor labelling this an incident or hazard?
The AI system involvement is the generation of fake images, which is explicitly mentioned. However, the article does not report any actual harm resulting from these images, such as defamation, privacy violations with legal consequences, or other significant harms. The rumors and social confusion are typical of AI-generated deepfakes but do not rise to the level of an AI Incident or AI Hazard as defined. The event is best classified as Complementary Information because it provides context on AI-generated content affecting public perception and celebrity privacy without describing a specific incident of harm or a plausible future harm scenario.

Zendaya breaks her silence on fake photos of her wedding with Tom Holland

2026-03-17
Portal R7
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate fake images that led to misinformation and emotional harm to people who believed the images were real. This is a direct harm to communities through the spread of false information and emotional distress, fitting the definition of an AI Incident. The article describes realized harm caused by AI-generated content, not just a potential risk or a general update.

Zendaya's statement on her supposed wedding with Tom Holland

2026-03-17
VEJA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating fake images that misled the public, which fits the definition of an AI system's use leading to misinformation. While this caused social and reputational harm, it does not rise to the level of an AI Incident as defined (no direct or indirect harm to health, critical infrastructure, legal rights, property, or significant community harm). It is also not a hazard since the harm has already occurred. The main focus is on the misinformation spread and public reaction, which is a form of social harm but not clearly articulated as a significant harm under the framework. Therefore, this is best classified as Complementary Information, providing context on AI-generated misinformation and its social impact without constituting a new AI Incident or Hazard.

After much speculation, Zendaya reacts to rumors of a wedding with Tom Holland and reveals an awkward moment with acquaintances; watch

2026-03-17
Hugo Gloss
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating fake images (AI-generated photos), which caused social confusion and emotional reactions among people who believed the images were real. However, the article does not describe any injury, rights violation, or other significant harm caused by these images. The confusion and emotional responses do not rise to the level of an AI Incident as defined, nor is there a plausible future harm described that would qualify as an AI Hazard. The article primarily reports on the social impact and public reaction to AI-generated content, making it Complementary Information rather than an Incident or Hazard.

Zendaya speaks out about fake "wedding" photos with Tom Holland

2026-03-17
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate fake images that misled people, causing them to believe in a false event (the wedding). This misinformation has caused social confusion and emotional impact on people close to the celebrities, which qualifies as harm to communities. The AI system's use directly led to this harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Zendaya denies rumors of a wedding with Tom Holland and debunks AI photo

2026-03-17
Portal Tela
Why's our monitor labelling this an incident or hazard?
AI-generated images are explicitly mentioned as the source of false rumors about a wedding. This involves the use of AI systems to create misleading content. However, the article does not report any direct or indirect harm resulting from these images, such as reputational damage, legal violations, or other significant harms. The main focus is on clarifying and denying the misinformation, which aligns with providing complementary information about AI-generated content and its societal impact rather than reporting an incident or hazard. Therefore, this event is best classified as Complementary Information.

Zendaya confirms marriage to Tom Holland and comments on supposed party photos: 'It's AI'

2026-03-17
Glamour
Why's our monitor labelling this an incident or hazard?
Although AI-generated images were involved, the event does not describe any realized harm such as injury, rights violations, or disruption caused by the AI system. The AI-generated photos caused confusion but no significant harm or legal issues are reported. Therefore, this is not an AI Incident or AI Hazard. It is a general news item about AI-generated content and public reaction, which fits the category of Complementary Information as it provides context about AI's societal impact without describing harm or plausible harm.

Zendaya commented on AI photos of her wedding with Holland: "Many fell for it"

2026-03-17
IndexHR
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate fake images that misled the public, causing misinformation and emotional reactions among people close to the celebrities. This constitutes harm to communities through the spread of false information and deception. Since the AI-generated content has already been disseminated and caused real confusion and emotional impact, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by AI-generated misinformation.

Zendaya stunned: 'Those are fake photos!'

2026-03-17
Jutarnji list
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images that misled people, which is a form of misinformation and can be considered harm to communities. However, the article focuses on the clarification and denial by Zendaya, indicating the harm is recognized but not escalated to a significant incident causing direct or indirect harm as defined. Since the misinformation has already occurred but no further harm or legal violation is described, and the article mainly reports on the clarification and social reaction, this fits best as Complementary Information enhancing understanding of an AI-related misinformation issue rather than a new AI Incident or Hazard.

"Naseli ste" Zendaja progovorila o navodnom venčanju s Tomom Holandom: "U pitanju je veštačka inteligencija" (VIDEO)

2026-03-17
kurir.rs
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating fake wedding photos that fooled many people, including close acquaintances of the celebrities. This is a clear example of AI-generated misinformation causing social confusion. However, the article does not report any realized harm such as injury, legal violations, or significant community harm. The harm is limited to misleading people and causing emotional reactions, which while notable, does not meet the threshold for an AI Incident. There is also no indication of a plausible future harm beyond the current misinformation. Therefore, this event is best classified as Complementary Information, as it provides context on the societal impact and challenges posed by AI-generated content but does not describe a new AI Incident or Hazard.

Zendaya addressed the AI photos of her wedding and brought the "real footage" to the show

2026-03-17
Klix.ba
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake images and videos that have been widely believed to be real, causing confusion and emotional harm to individuals who thought the wedding had occurred. This fits the definition of an AI Incident because the AI-generated content has directly led to harm to communities (emotional distress, misinformation). The manipulated video with digital face replacement also contributes to this harm. Therefore, this is classified as an AI Incident.