AI-Generated Deepfake Images of Celebrities at Met Gala Mislead Millions Online


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake images of Selena Gomez and Zendaya at the Met Gala circulated widely on social media, misleading millions of fans into believing the celebrities attended the event. The manipulated photos, created by superimposing faces onto existing images, caused significant confusion and spread misinformation before being flagged as fake.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate or manipulate an image of Selena Gomez at the Met Gala, which was false and led to widespread misinformation and deception among millions of users. This constitutes harm to communities by spreading false information and misleading the public. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
General public

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation; Recognition/object detection; Organisation/recommenders


Articles about this incident or hazard


Selena Gomez's 2023 Met Gala Look Went Viral on Twitter, But She Wasn't Even There. Millions of Users Duped By Possibly AI-Generated Photo.

2023-05-03
Entrepreneur
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate or manipulate an image of Selena Gomez at the Met Gala, which was false and led to widespread misinformation and deception among millions of users. This constitutes harm to communities by spreading false information and misleading the public. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident.

Deepfakes of Zendaya and Selena Gomez at the Met Gala confuse fans

2023-05-03
Metro
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake images that misled the public, causing harm to the community by spreading misinformation. The AI system's use in generating these altered images directly led to confusion and deception among fans, which constitutes harm to communities as per the OECD framework. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's outputs.

Selena Gomez makes surprise appearance at Met Gala, thanks to AI-generated photo

2023-05-03
Malay Mail
Why's our monitor labelling this an incident or hazard?
The photo was AI-generated or digitally altered to create a false image of Selena Gomez at the Met Gala, misleading the public. This constitutes the use of an AI system to generate misleading content that caused misinformation and potential reputational harm. While no injury, rights violation, or disruption of infrastructure is reported, widespread and impactful deception of the public can be considered harm to communities. Since the misinformation has already occurred and caused public deception, this qualifies as an AI Incident due to harm to communities through misinformation dissemination.

Selena Gomez's fake Met Gala attendance goes viral

2023-05-03
Al Bawaba
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image (deepfake) of Selena Gomez at the Met Gala, which was widely shared on social media. However, there is no indication that this caused direct or indirect harm such as misinformation leading to significant societal harm, violation of rights, or other harms as defined. The event mainly reports on the viral nature of the AI-generated content without evidence of resulting harm or plausible future harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information since it provides context on AI-generated content and its social media impact without describing harm.

Selena Gomez Didn't Attend the Met Gala... So Why Is Her Red Carpet Look Trending?

2023-05-03
Rare
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or AI-like image manipulation techniques (photoshopping) to create fake images that misrepresent reality. However, the article does not indicate any direct or indirect harm caused by these images beyond misleading fans or social media users. There is no mention of injury, rights violations, disruption of infrastructure, or significant harm to communities. The misinformation is noted and corrected by Twitter with a disclaimer, but no harm or plausible future harm is described. Therefore, this is best classified as Complementary Information about AI-generated misinformation and social media response, rather than an AI Incident or Hazard.

Deepfakes of Zendaya and Selena Gomez at the Met Gala confuse fans

2023-05-03
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that led to misinformation and confusion among fans, which constitutes harm to communities by spreading false narratives. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The presence of AI is clear from the creation of deepfake images, and the harm is realized as fans were misled and the misinformation widely disseminated.