Billie Eilish Deepfake Controversy: AI-Generated Met Gala Images Spark Outcry

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple reports detail how AI-generated images falsely depicted Billie Eilish at the 2025 Met Gala. The singer, who was performing in Europe, publicly debunked these deepfakes on social media, highlighting concerns over misrepresentation, potential defamation, and violations of intellectual property rights through advanced AI technology.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a fake image of a celebrity in a context that did not occur, leading to reputational harm and public misinformation. The AI-generated content caused indirect harm to Billie Eilish's reputation and public perception, as people criticized her for an event she did not attend and an outfit she did not wear. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (reputational harm and misinformation).[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Transparency & explainability; Robustness & digital security; Human wellbeing

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security

Affected stakeholders
Other

Harm types
Reputational; Human or fundamental rights; Economic/Property; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Billie Eilish responds to criticism of her Met Gala look: that's AI, I was in Europe

2025-05-16
Celebitchy
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image of a celebrity in a context that did not occur, leading to reputational harm and public misinformation. The AI-generated content caused indirect harm to Billie Eilish's reputation and public perception, as people criticized her for an event she did not attend and an outfit she did not wear. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (reputational harm and misinformation).

Billie Eilish Debunks AI-Generated Met Gala Photos Mid-Tour

2025-05-16
newKerala.com
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake photos of celebrities at an event they did not attend, leading to misinformation and reputational harm. This constitutes harm to communities through the spread of false information and potential violation of personal rights. Since the AI-generated content has already caused negative reactions and confusion, this is a realized harm, qualifying as an AI Incident.

Billie Eilish calls out fans over AI version of her at 2025 Met Gala

2025-05-15
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the outfit image circulating online is AI-generated and that Billie Eilish was not at the event. While the AI system created misleading content, there is no evidence of direct or indirect harm such as health injury, rights violations, or disruption. The event is about public reaction and the artist's response to AI-generated misinformation, which fits the definition of Complementary Information. It does not meet the criteria for an AI Incident or AI Hazard because no harm has occurred or is plausibly imminent from the AI system's use in this context.

Billie Eilish Reacts to 'Trash' Comments on Her 2025 Met Gala Look -- but She Wasn't Even There

2025-05-15
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is in generating deepfake images that misrepresent reality. However, there is no indication that this has caused direct or indirect harm such as injury, rights violations, or disruption. The event describes a situation where AI-generated content caused misinformation or confusion but no materialized harm or legal violation is reported. Therefore, this is a case of AI-generated content causing potential reputational or informational confusion but not rising to the level of an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on the societal impact and public reaction to AI deepfakes involving celebrities, without reporting a specific harm or credible future harm.

Billie Eilish Says Met Gala Images of Her Are Fake: 'That's AI'

2025-05-15
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to generate fake images of celebrities at an event they did not attend, which caused public confusion and criticism. While AI-generated misinformation can be harmful, the article does not indicate that this misinformation caused significant harm such as health injury, rights violations, or disruption. The celebrities themselves clarified the situation, mitigating potential harm. Thus, the event does not meet the threshold for an AI Incident or AI Hazard but serves as an example of AI-generated misinformation and public reaction, fitting the definition of Complementary Information.

Billie Eilish reveals AI-generated Met Gala images are 'trash'

2025-05-15
The News International
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images circulating online, which is a misuse of AI technology to create false content. While this could potentially lead to harm such as misinformation or reputational damage, the article does not report any actual harm occurring or any incident resulting from these images. The celebrities clarify the falsehoods, mitigating potential misinformation. Hence, the event does not meet the threshold for an AI Incident or AI Hazard but rather serves as complementary information about AI's societal impact and public response to AI-generated misinformation.

Billie Eilish shuts down AI-generated photos of Met Gala appearance: 'I wasn't even there!'

2025-05-15
Washington Times
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake images of celebrities at an event they did not attend, which is a misuse of AI-generated content. While this could plausibly lead to reputational harm or misinformation spread, the article does not report any realized harm or significant impact beyond the existence of the fake images and public clarifications by the celebrities. Therefore, this situation represents a plausible risk of harm from AI misuse but no confirmed incident of harm has occurred yet. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Billie Eilish Responds to Fake AI Images of Her at Met Gala 2025: 'I Wasn't Even There'

2025-05-16
Just Jared
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images of a celebrity, which is a misuse of AI-generated content. However, there is no evidence of direct or indirect harm occurring from these images, such as reputational damage leading to legal claims, health harm, or rights violations. The event is primarily about misinformation potential but does not document an AI Incident or a plausible future harm leading to an incident. Therefore, it is best classified as Complementary Information, as it provides context on AI-generated misinformation and the celebrity's response, without describing a new AI Incident or Hazard.

Billie Eilish claps back at critics of her Met Gala look: "I wasn't even there!"

2025-05-15
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The event centers on AI-generated images falsely depicting celebrities at an event they did not attend. The AI system's use led to misinformation and public confusion, but there is no evidence of realized harm such as health injury, rights violations, or significant community harm. The event does not qualify as an AI Incident, since no significant harm has materialized, nor as an AI Hazard, since the misinformation has already manifested rather than remaining a potential risk. The article mainly reports on the phenomenon and the celebrities' responses, which aligns best with Complementary Information: context and updates on the impact of AI-generated content without a harmful incident.

Billie Eilish Reacts To AI Photos of Her in Met Gala 2025

2025-05-15
Mandatory
Why's our monitor labelling this an incident or hazard?
The AI system generated fake photos that caused misinformation and deception among fans, which can be considered harm to communities by spreading false information. However, the article does not describe any direct or significant harm resulting from these AI-generated images beyond public confusion and the need for clarification by the celebrity. There is no indication of injury, rights violations, or other significant harms. Therefore, this event is best classified as Complementary Information, as it provides context on the use and impact of AI-generated content and the societal response (the celebrity's clarification) but does not describe a concrete AI Incident or AI Hazard.

Billie Eilish slams AI images of her at 2025 Met Gala

2025-05-15
Far Out Magazine
Why's our monitor labelling this an incident or hazard?
AI systems are explicitly involved as the images are AI-generated fabrications of celebrities at an event they did not attend. The misuse of AI to create and spread false images directly leads to reputational harm and misinformation, which falls under harm to communities and violation of rights. The article shows that the harm is occurring now, as the images are circulating and the artists are responding to them. Therefore, this is an AI Incident rather than a hazard or complementary information.

Billie Eilish reacts to criticism of her 'trash' Met Gala look: 'I wasn't even there!'

2025-05-15
Idaho Statesman
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image of Billie Eilish at the Met Gala, which she did not attend. This AI-generated content caused reputational harm through misinformation and public misunderstanding. Since the AI-generated image directly led to harm in the form of reputational damage and misinformation, this qualifies as an AI Incident under the category of harm to communities or individuals through misinformation and false representation.

Billie Eilish hits out at 'fake' images of her at 2025 Met Gala

2025-05-15
Music News
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake images of Billie Eilish and Katy Perry at an event they did not attend. This led to reputational harm and public misunderstanding, as fans criticized Billie Eilish's outfit based on AI-generated content. The harm here is indirect reputational harm and misinformation affecting the individuals and potentially their communities. Since the AI-generated images have already circulated and caused harm, this qualifies as an AI Incident due to violation of rights related to personal image and misinformation causing harm to individuals and communities.

Billie Eilish AI Fakes: Met Gala Hoax Highlights Deepfake Threat

2025-05-15
TechnoCodex
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images (deepfakes) that have been circulated and believed by the public, which constitutes harm to communities through misinformation and deception. The AI system's use in generating these realistic but false images directly led to this harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated misinformation.

Billie Eilish Slams AI-Generated Met Gala Pics: "I Had A Show In Europe That Night"

2025-05-16
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is in generating fake images (use of AI), but the article does not describe any direct or indirect harm resulting from these images beyond public confusion or criticism, which is not clearly articulated as significant harm under the framework. There is no evidence of injury, rights violation, or disruption. Therefore, this is not an AI Incident. It also does not present a plausible future harm scenario beyond the current misuse, so it is not an AI Hazard. The article mainly reports on the existence of AI-generated fake images and the celebrity's response, which is informational and contextual, fitting the category of Complementary Information.

Billie Eilish slams fake Met Gala pics, calls out rise of AI-generated images

2025-05-16
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
AI-generated images are explicitly mentioned, indicating the involvement of AI systems in creating fake content. The misuse of AI-generated images to falsely depict a celebrity at an event could plausibly lead to reputational harm or misinformation. However, the article focuses on the celebrity's response to the fake images and public reactions rather than any concrete harm or disruption caused by the AI content. There is no indication of injury, rights violations, or other significant harms materializing. Thus, the event does not meet the threshold for an AI Incident or AI Hazard but serves as complementary information about the societal impact and challenges posed by AI-generated media.

Fake AI images are already manipulating you, and this crazy controversy is proof

2025-05-16
BGR
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for image generation and editing to create and disseminate fake images that mislead the public. This constitutes an AI system's use leading to harm to communities through misinformation and manipulation of public opinion, which fits the definition of an AI Incident. The harm is realized as people were misled and discussed a fabricated event, and the artist herself had to clarify the falsehood. Therefore, this is an AI Incident rather than a hazard or complementary information.

Billie Eilish Debunks AI-Generated Met Gala Appearance

2025-05-16
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images of Billie Eilish and Katy Perry at the 2025 Met Gala, which they publicly debunked. While the AI system's use led to misinformation, the article does not report any direct or indirect harm such as health injury, rights violations, or disruption. The event highlights the pitfalls of AI-generated content but does not document an incident causing harm. Therefore, it does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context on AI-generated misinformation and public responses to it.

Billie Eilish Slams Fake Met Gala Photos, Calls out AI-generated Images

2025-05-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to generate fake images of a public figure at an event she did not attend. This involves an AI system generating misleading content that has caused public confusion and criticism. However, there is no indication that this has led to direct or indirect harm such as injury, rights violations, or significant community harm. The event is primarily about the existence and public reaction to AI-generated fake images, without reported materialized harm or legal consequences. Therefore, it does not meet the threshold for an AI Incident or AI Hazard but provides contextual information about AI-generated misinformation and public response, fitting the definition of Complementary Information.

'I Wasn't There, That's AI': Singer Billie Eilish Slams Fake Met Gala 2025 Photos, Calls Out AI-Generated Images

2025-05-16
LatestLY
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake images of celebrities at an event they did not attend, leading to misinformation and reputational harm. This constitutes harm to individuals' reputations and potentially to communities by spreading false information. Since the AI-generated images have already been disseminated and caused negative reactions, this is a realized harm linked directly to the AI system's use. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated misinformation.

Billie Eilish slams fake Met Gala photos, calls out AI-generated images

2025-05-16
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The AI system is used to generate fake images that falsely depict Billie Eilish at an event she did not attend. This constitutes misinformation that affects public perception and could be considered harm to the community's informational environment. Since the AI-generated content has already been disseminated and caused public misunderstanding and criticism, this qualifies as an AI Incident due to harm to communities through misinformation and reputational impact.

Billie Eilish slams AI lies: "I was in Europe, not at the Met Gala!"

2025-05-16
The Statesman
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images falsely showing Billie Eilish and Katy Perry at the Met Gala, which they did not attend. The AI system's use directly led to misinformation and reputational harm, as the celebrities had to publicly deny these false appearances. The harm is realized and directly linked to the AI-generated content, fulfilling the criteria for an AI Incident due to violation of rights and harm to communities through misinformation.

Billie Eilish reacts to criticism of her 'trash' Met Gala look: 'I wasn't even there!'

2025-05-15
UPI
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image of Billie Eilish at the Met Gala, leading to misinformation and reputational harm. This constitutes harm to the individual (a person) through misinformation and false representation. Since the AI-generated content directly led to public criticism and confusion, it qualifies as an AI Incident under the definition of harm to a person through misinformation caused by AI-generated content.

Billie Eilish Reacts to "Trash" Met Gala Look Comments: "I Wasn't There, That's AI"

2025-05-15
The Hollywood Reporter
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images falsely depicting Billie Eilish at an event she did not attend, which is a misuse of AI-generated content. While this involves an AI system and the use of AI-generated content, the article does not report any direct or indirect harm such as defamation, reputational damage, or rights violations that have materialized. The main focus is on the confusion caused and the social media reaction, without evidence of significant harm or legal issues. Therefore, this is best classified as Complementary Information, as it provides context and societal response to AI-generated misinformation without describing a concrete AI Incident or plausible AI Hazard.

"That's AI!" - Eilish fights back against fake photos

2025-05-15
Volksstimme.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images (deepfakes or similar AI-generated content) that misrepresent reality, which is a direct use of AI. The harm here is the spread of misinformation and reputational harm to the individual, which qualifies as harm to communities and individuals. Since the fake photos are actively circulating and causing confusion, this constitutes an AI Incident rather than a mere hazard or complementary information.

Billie Eilish fights back against AI images from the Met Gala

2025-05-15
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images causing reputational issues for celebrities, which is a misuse of AI-generated content. While this misuse could plausibly lead to harm such as misinformation or reputational damage, the article focuses on the celebrities' statements clarifying the falsity of the images and does not document actual harm occurring. This situation therefore represents a plausible risk of harm from AI misuse rather than a confirmed AI Incident. Because its main focus is the ongoing misuse of AI-generated content causing reputational confusion, rather than general AI news or product updates, it fits the definition of an AI Hazard.

People: "That's AI!" - Eilish fights back against fake photos

2025-05-15
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images (deepfakes or similar AI-generated content) that falsely depict Billie Eilish at the Met Gala. This use of AI has directly led to misinformation and reputational harm, which qualifies as harm to communities and potentially a violation of rights. Since the harm is occurring (misinformation spreading and public confusion), this is an AI Incident rather than a hazard or complementary information.

"That's AI!" - Eilish fights back against fake photos

2025-05-15
stern.de
Why's our monitor labelling this an incident or hazard?
An AI system was used to create fake images of Billie Eilish at the Met Gala, which caused confusion and misinformation among the public. This constitutes harm to the community by spreading false information and misrepresenting the individual. Since the AI-generated content has already caused this misinformation, it qualifies as an AI Incident due to harm to communities through misinformation.

Debate over the outfit: "That's AI!" - Billie Eilish fights back against fake photos

2025-05-15
RP Online
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake images of Billie Eilish at an event she did not attend, leading to misinformation and potential reputational harm. This constitutes an AI Incident because the AI-generated content directly led to harm in the form of misinformation and reputational impact on the individual involved.

"That's AI!" - Billie Eilish fights back against fake photos

2025-05-15
Nau
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating a fake image (deepfake) of Billie Eilish, which misled some social media users. While this is a misuse of AI-generated content, the article does not report any realized harm such as injury, rights violations, or significant disruption. The event is primarily about the existence of AI-generated misinformation and the celebrity's clarification, without evidence of direct or indirect harm. Therefore, it does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context on AI-generated misinformation and public reaction.

"That's AI": Billie Eilish fights back against fake photos

2025-05-15
HAZ – Hannoversche Allgemeine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the photos circulating are AI-generated or possibly manipulated, which is a direct use of AI systems to create misleading content. This has led to misinformation about Billie Eilish's presence at the Met Gala, which can be considered harm to the community through the spread of false information. Since the AI-generated content has already been disseminated and caused confusion, this qualifies as an AI Incident due to harm to communities via misinformation.

Billie Eilish fights back against fake photos

2025-05-15
Südtirol News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the photos circulating are AI-generated or possibly manipulated, which is a direct use of AI systems to create misleading content. This has led to misinformation about Billie Eilish's presence at the Met Gala, which can be considered harm to the community through false information and reputational damage. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated fake content.

AI-generated images: Billie Eilish fights back against fake photos

2025-05-16
www.kleinezeitung.at
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images that falsely depict Billie Eilish at an event she did not attend. This misuse of AI-generated content has led to misinformation and reputational harm, damaging both the individual's reputation and community trust. Since that harm is occurring as a direct result of the AI-generated images, this qualifies as an AI Incident under the framework.

Billie Eilish fights back against AI-faked Met Gala images

2025-05-16
Radio Hamburg
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images (deepfakes) that mislead the public about the celebrities' presence at an event. This is a clear example of AI-generated misinformation. However, the article focuses on the celebrities' clarifications and warnings rather than describing actual harm that has materialized. Since the harm is potential (misinformation and reputational damage could occur or escalate), and the main focus is on raising awareness and warning about these AI-generated fakes, this fits best as Complementary Information rather than an AI Incident or AI Hazard. There is no indication of direct or indirect harm having occurred yet, nor a credible imminent risk described beyond the general concern.