Katy Perry Deepfake Images Mislead Public at Met Gala

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake images of Katy Perry at the Met Gala misled many viewers, including her own mother, even though Perry did not attend the event. The incident highlights privacy and intellectual property concerns, as the images circulated widely on social media and deceived numerous internet users.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used to create deepfake images that have directly led to misinformation and reputational harm to individuals, which constitutes harm to communities and individuals' rights. The AI-generated content has been widely disseminated and believed, causing real-world impact. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and potential harassment. Additionally, the article discusses governance responses, but the primary focus is on the realized harm from AI-generated deepfakes.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability
Robustness & digital security
Safety

Industries
Media, social platforms, and marketing
Arts, entertainment, and recreation

Affected stakeholders
Women
General public

Harm types
Reputational
Human or fundamental rights
Economic/Property
Psychological

Severity
AI incident

AI system task:
Content generation

In other databases

Articles about this incident or hazard


A photo of Katy Perry at the Met Gala spread across the internet; she responded: "I didn't even attend"

2024-05-07
IndexHR
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images that have directly led to misinformation and reputational harm to individuals, which constitutes harm to communities and individuals' rights. The AI-generated content has been widely disseminated and believed, causing real-world impact. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and potential harassment. Additionally, the article discusses governance responses, but the primary focus is on the realized harm from AI-generated deepfakes.

Katy Perry wasn't at the Met Gala at all, so who on earth is this woman?! "Look at her eyes and everything will become clear"

2024-05-07
Jutarnji list
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that have been disseminated and believed by many, causing misinformation and reputational harm to the celebrities involved. The AI-generated content has been used maliciously or deceptively, leading to harm to individuals' rights and potentially to communities through misinformation. Therefore, this meets the definition of an AI Incident, as the AI system's use has directly led to harm. The article also mentions societal responses but the primary focus is on the harm caused by AI-generated deepfakes.

Photos of Katy Perry at the Met Gala are spreading across the internet; she responded: I didn't even come to the event

2024-05-07
Klix.ba
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake images that misrepresent real individuals, leading to misinformation and potential reputational harm. The AI's role is pivotal in creating and disseminating these false images, which have already caused confusion and deception among the public. This constitutes harm to communities through misinformation and violation of personal rights, qualifying it as an AI Incident.

The photos of Katy Perry and Rihanna from this year's Met Gala are AI fakes

2024-05-07
Hrvatska radiotelevizija
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images, which are AI systems creating fabricated content. However, it does not report any direct or indirect harm resulting from these deepfakes, such as reputational damage, misinformation causing social disruption, or legal violations. The content is primarily informational about the presence of AI deepfakes at a high-profile event, without evidence of harm or plausible future harm detailed. Therefore, this fits best as Complementary Information, providing context and awareness about AI deepfake usage rather than reporting an AI Incident or Hazard.

AI Deepfakes at Met Gala 2024: Katy Perry and Rihanna's Fake Images at Met Gala Go Viral

2024-05-07
News9live
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems generating deepfake images, which fits the definition of an AI system. The use of these AI-generated images has led to confusion and misinformation, which could be considered harm to communities or rights if it were realized. However, the article does not report any actual harm occurring, only the potential for such harm and public concern. Therefore, this event is best classified as Complementary Information, as it provides context and discussion about the implications and challenges of AI deepfakes without describing a specific AI Incident or AI Hazard.

Katy Perry's own mom duped by AI-generated Met Gala pics

2024-05-08
Daily News
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create realistic fake images (deepfakes) that misled people into believing false information about celebrity attendance at a major event. This constitutes harm to communities through misinformation and deception. Since the AI system's use directly led to this harm, the event qualifies as an AI Incident under the framework, specifically harm to communities due to the spread of false narratives and images.

Don't Be Fooled by A.I. Katy Perry Didn't Attend the Met.

2024-05-07
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that fooled people but does not describe any direct or indirect harm resulting from these images. The harm is potential misinformation and deception, but no concrete harm such as injury, rights violations, or operational disruption is reported. The event serves to inform about the phenomenon and its implications rather than documenting an incident or hazard. Hence, it fits the definition of Complementary Information, providing context and awareness about AI-generated content and its societal effects.

How to spot AI generated images on social media - BBC Bitesize

2024-05-09
BBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic image that was widely viewed and initially believed to be real, but the article's main focus is on raising awareness and providing guidance to spot AI-generated images. There is no indication of direct or indirect harm occurring from the AI-generated image, such as misinformation causing harm to communities or individuals. Therefore, this is best classified as Complementary Information, as it supports understanding of AI-generated content and its societal implications without reporting a new incident or hazard.

Katy Perry and Rihanna didn't attend the Met Gala, but AI-generated images still fooled fans

2024-05-08
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate realistic deepfake images of celebrities, which were then spread online and fooled people, including the celebrities' own family members. This constitutes direct harm through misinformation and deception, impacting communities and individuals' reputations. The article also references prior incidents of harmful AI-generated content and the societal risks posed by such technology, reinforcing the classification as an AI Incident. Although the article discusses broader governance and regulatory responses, the primary focus is on the realized harm from the AI-generated images, not just potential or complementary information.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated...

2024-05-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that have been widely disseminated and have fooled people, including close family members of the celebrities depicted. This constitutes misinformation and reputational harm, which falls under harm to communities and violations of rights. The AI system's role is pivotal in creating and spreading these images. Although the harm is non-physical, it is significant and clearly articulated. Hence, this qualifies as an AI Incident rather than a hazard or complementary information. The article also discusses broader concerns and responses but the primary focus is on the realized harm from the AI-generated images.

Met Gala 2024: AI-Generated Impostors Invade 'The Garden Of Time'

2024-05-07
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic fake images of celebrities, which were spread on social media and caused misinformation and deception. This constitutes harm to communities by eroding trust and spreading false information, fulfilling the criteria for an AI Incident. The harm is realized as people were fooled by the AI-generated images, and the misinformation spread widely. Therefore, this is classified as an AI Incident.

Don't Be Fooled by These AI-Generated Met Gala Looks

2024-05-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images that mislead viewers, but there is no indication that this has directly or indirectly caused harm such as injury, rights violations, or disruption. The article highlights the challenge of discerning real from fake images, emphasizing media literacy. Since no actual harm or violation is reported, and the event mainly illustrates the presence and impact of AI-generated content without resulting harm, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context on AI's societal impact and the challenges posed by AI-generated deepfakes.

AI-Generated Met Gala Images Of Katy Perry, Rihanna Went Viral: Here's How To Spot A Deepfake

2024-05-07
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that have been widely disseminated and believed by many, causing misinformation and reputational harm. The AI-generated content directly led to harm by misleading the public and potentially damaging the reputations of the celebrities involved. The political deepfakes further illustrate misuse of AI to influence public opinion negatively. These harms fall under harm to communities and violations of rights related to misinformation and false representation. Hence, the event meets the criteria for an AI Incident.

Katy Perry Suggests Her Mom Was Fooled By An AI Picture Of Her At The Met Gala

2024-05-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create deepfake images that misled people, including Katy Perry's mother, into believing false information. The AI system's outputs caused misinformation and deception, which is a form of harm to communities. Since the harm (misinformation and deception) has already occurred and is directly linked to the AI-generated images, this qualifies as an AI Incident under the framework.

Met Gala: Katy Perry, Dua Lipa and Rihanna AI fakes go viral

2024-05-07
Yahoo
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake images of celebrities, which were widely shared and fooled many people. However, the article does not describe any direct or indirect harm resulting from these images, such as reputational damage, legal violations, or physical harm. The event is about the spread of AI-generated fake content and public reaction, which fits the definition of Complementary Information as it provides context and understanding of AI's societal impact without describing a specific AI Incident or Hazard.

Fake photos, but make it fashion. Why the Met Gala pics are just the beginning of AI deception

2024-05-09
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create realistic but fake images of public figures, which have been widely shared and believed by the public, including the celebrities' own family members. This constitutes a direct harm to communities by spreading misinformation and undermining trust in visual evidence, which is a form of social harm. Additionally, the article explicitly connects these AI-generated images to broader risks such as election interference and propaganda, indicating the AI's role in causing or enabling these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through misinformation and potential societal disruption.

Amazing Fake Katy Perry Met Gala Photo Fools Fans, and Even Her Mom

2024-05-07
Aol
Why's our monitor labelling this an incident or hazard?
An AI system was clearly involved in generating a fake image that caused misinformation and deception among the public, including close family members. This constitutes harm to communities by spreading false information and misleading people, which fits the definition of an AI Incident. The harm is realized as the fake image fooled millions and caused confusion, even requiring Katy Perry to clarify the truth publicly. Therefore, this event qualifies as an AI Incident due to the direct role of AI-generated content in causing misinformation and social deception.

Katy Perry forced to deny she was at the MET Gala after AI-created photos go viral

2024-05-07
MARCA
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images of Katy Perry at the MET Gala that are false and have gone viral, causing misinformation. The AI system's use here directly led to reputational harm and misinformation dissemination, which can be considered harm to communities and individuals. Since the harm has occurred (misinformation and reputational damage), this qualifies as an AI Incident. There is no indication that this is merely a potential risk or a complementary update; the event involves realized harm due to AI misuse.

Met Gala 2024: AI takes over fashion's biggest night as fake images of Rihanna, Katy Perry, and Lady Gaga go viral

2024-05-07
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that are being widely shared and believed, which constitutes misinformation and deception. This can cause harm to the celebrities' reputations and mislead the public, representing harm to communities through misinformation. Since the AI-generated content is actively causing misleading perceptions and social impact, this qualifies as an AI Incident due to harm to communities through the spread of false information.

Deepfake hits Met Gala: After Rihanna, Gaga, Katy Perry says, 'my mom fell for it'

2024-05-07
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images that have been actively circulated and have deceived people, including Katy Perry's mother, indicating direct harm through misinformation and reputational impact. The AI system's use directly led to this harm by creating and spreading false images. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals' reputations and communities (through misinformation). Although the harm is non-physical, it is significant and clearly articulated. Therefore, the event is classified as an AI Incident.

Rihanna Falls Prey To Deepfake As She Skips Met Gala 2024, Edited Photos Go Viral - News18

2024-05-07
News18
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that have been widely disseminated, causing misinformation and confusion among the public. This constitutes harm to communities by spreading false information and misleading the public, which fits the definition of an AI Incident. The AI system's use directly led to this harm through the creation and viral spread of fake images. Therefore, this is classified as an AI Incident.

Even Katy Perry's mom was fooled by what appeared to be AI-generated Met Gala pics

2024-05-07
NBC News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that fooled people and were widely shared, causing misinformation. The harm is realized as people were deceived, including close family members of the celebrities, which impacts public trust and information integrity. The AI system's use in generating these fake images is central to the event, fulfilling the criteria for an AI Incident due to harm to communities through misinformation. The event is not merely a potential risk or complementary information but a realized incident of AI misuse causing harm.

A New Task for Met Gala Fashion Police: Is That AI?

2024-05-07
PC Magazine
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated images of celebrities that are falsely presented as real, leading to misinformation and confusion among the public and the celebrities themselves. The AI system's use in generating these images directly leads to harm in the form of misinformation and reputational damage, which qualifies as harm to communities and a violation of rights. The presence of disclaimers by social media platforms indicates recognition of the harm caused. Therefore, this is an AI Incident due to the realized harm caused by the AI-generated fake images.

One huge problem with Met Gala photo

2024-05-07
News.com.au
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images of Katy Perry at the Met Gala that are fake but highly realistic, leading to widespread deception on social media. The AI system's use directly led to misinformation and potential reputational harm, which qualifies as harm to communities. The harm is realized as the images fooled many users and spread widely. This fits the definition of an AI Incident because the AI system's use directly led to harm through misinformation and deception affecting communities.

AI pictures fool star's parents, a rare peak inside and where was Rihanna? Here's what you missed from the Met Gala

2024-05-08
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The presence of AI-generated images is clear, and their use has led to misinformation about celebrity attendance at the Met Gala. However, the article does not describe any direct or indirect harm resulting from these AI images, such as reputational damage, rights violations, or other significant harms. The harm is potential, as AI-generated fake images could plausibly lead to misinformation-related harms in the future, but no such harm is documented here. Thus, the event fits the definition of an AI Hazard, as the AI-generated images could plausibly lead to harm (misinformation, reputational harm) but no incident has occurred yet.

Did Rihanna and Katy Perry attend the Met Gala? No, but AI had fans thinking otherwise

2024-05-07
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images of Rihanna and Katy Perry at the Met Gala that fooled fans and social media users. While the AI system's use led to misinformation and confusion, there is no evidence of direct or indirect harm such as health injury, rights violations, or significant community harm. The misinformation was quickly identified and labeled with disclaimers. Therefore, this does not meet the threshold for an AI Incident. It also does not represent a plausible future harm scenario (AI Hazard) since the harm is already realized but limited and not significant. The article mainly provides information about the use and impact of AI-generated images in social media, which fits the category of Complementary Information as it enhances understanding of AI's societal impact without describing a new primary harm.

Katy Perry Did Not Attend the Met Gala But Fake Photo Goes Viral, Fools Her Own Mom

2024-05-08
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deep fake images that fooled the public and even Katy Perry's mother, indicating AI system involvement. However, no actual harm such as injury, rights violations, or property/community/environmental harm occurred. The event is a viral misinformation incident but without significant harm or legal violation. It also does not present a credible risk of future harm beyond the current viral hoax. Thus, it does not meet the criteria for AI Incident or AI Hazard. The article mainly provides supporting information about AI's societal effects and public reaction, making it Complementary Information.

Met Gala Deepfakes Are Flooding Social Media

2024-05-07
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create deepfake images of celebrities, which are being widely disseminated on social media. This involves an AI system generating false content that misleads viewers. However, the article does not report any direct or indirect harm such as injury, rights violations, or significant community harm resulting from these deepfakes. While there is potential for misinformation or reputational harm, the article does not document realized harm or legal breaches. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and details about the use and impact of generative AI in media and social platforms, enhancing understanding of AI's societal implications without reporting a specific harm or credible imminent harm.

Don't Be Fooled by These AI-Generated Met Gala Looks

2024-05-07
TIME
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images (deepfakes) circulating online, which involves AI systems. However, the event does not describe any direct or indirect harm resulting from these images, such as misinformation causing social disruption or rights violations. The harm is potential or societal in nature (media literacy challenges), but no concrete incident of harm is reported. Thus, it does not meet the threshold for AI Incident or AI Hazard. Instead, it informs about the presence and implications of AI deepfakes in a high-profile context, fitting the definition of Complementary Information.

Met Gala 2024: Deepfake pics of Rihanna, Katy Perry, Lady Gaga go viral

2024-05-07
India Today
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create deepfake images of Rihanna, Katy Perry, and Lady Gaga, which were circulated widely and led to misinformation about their attendance. However, there is no indication that these AI-generated images caused direct harm such as injury, rights violations, or significant community harm. The event illustrates a misuse of AI-generated content that could potentially lead to misinformation-related harms, but the article does not report any realized harm or significant consequences beyond confusion. Therefore, this situation is best classified as an AI Hazard, as the AI-generated deepfakes could plausibly lead to harm such as misinformation or reputational damage if misused further, but no such harm is confirmed in the article.

Katy Perry and Rihanna's Met Gala looks went viral. But they weren't real.

2024-05-07
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake images of celebrities, which is a clear use of AI. The AI-generated content has led to misinformation and confusion among the public, which can be considered a form of harm to communities through misinformation. However, the article does not report any direct or significant harm resulting from these AI-generated images, such as defamation lawsuits, rights violations, or physical harm. The harm is more about the potential for misinformation and media literacy erosion, which is a plausible risk but not confirmed as an incident. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future harm from AI-generated misinformation and deepfakes in this context.

Met Gala 2024: AI-generated Fake Photos Of Rihanna, Katy Perry, Selena Gomez Go Viral

2024-05-07
Republic World
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images of celebrities that went viral, indicating the involvement of AI systems in generating realistic but fake content. However, the event does not describe any realized harm such as injury, rights violations, or significant disruption. The concern is about the potential for misinformation and confusion, which could plausibly lead to harm if such images are widely believed or misused. Since no actual harm has been reported, but there is a credible risk of future harm related to misinformation and media literacy, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Met Gala 2024: Rihanna, Katy Perry, Lady Gaga Among Recent Targets Of AI Deepfake, Images Go Viral

2024-05-07
Jagran English
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate deepfake images of celebrities, which were widely circulated and viewed by millions. The AI-generated content misrepresents reality, potentially causing harm to the individuals' reputations and misleading the public. This constitutes a violation of rights related to personal image and could be considered harm to communities through misinformation. Since the harm is realized and directly linked to the use of AI-generated deepfakes, this qualifies as an AI Incident.

Katy Perry's AI-generated Met Gala pictures fool millions

2024-05-09
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is clear, as AI was used to create the deepfake images. However, the event does not describe any realized harm such as health injury, rights violations, or disruption; the harm is limited to deception and misinformation, with no significant, clearly articulated harm as defined being reported. The mention of platform commitments to identify and report AI-generated content is a governance response, making the article primarily complementary information rather than an incident or hazard.

Katy Perry clears the air after AI generated Met Gala pic goes viral

2024-05-07
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated images that caused misinformation and confusion among the public, including Perry's own mother. However, there is no indication that this misinformation caused direct harm such as injury, rights violations, or disruption. The AI system's use led to a misleading viral image but did not result in realized harm or a plausible future harm scenario described in the article. Therefore, this is best classified as Complementary Information, as it provides context on AI-generated content's societal impact and public reaction without constituting an AI Incident or Hazard.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-08
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images (deepfakes) that fooled people into believing false information about celebrity appearances, which constitutes harm to communities through misinformation and deception. The AI system's use directly led to this harm. Additionally, the article discusses the broader societal harms linked to such AI misuse, reinforcing the classification as an AI Incident. The presence of AI systems generating realistic fake images and their impact on public perception and potential for misuse fits the definition of an AI Incident.

Katy Perry falls victim to viral fake Met Gala photo

2024-05-09
RTE.ie
Why's our monitor labelling this an incident or hazard?
The AI system generated fake images that were widely shared and believed, causing misinformation and misleading the public. This constitutes harm to communities by spreading false information and undermining trust, which fits the definition of an AI Incident. The harm is realized as people have been deceived by the AI-generated content, including close family members of the celebrity. Therefore, this event qualifies as an AI Incident due to the direct role of AI-generated content in causing misinformation harm.

Why AI deepfakes stole the show at this year's Met Gala - Fast Company

2024-05-07
Fast Company
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfake images) and their use (creation and dissemination of AI-generated images). However, there is no indication that any harm has occurred or that the AI use has directly or indirectly led to injury, rights violations, or other harms as defined. The article focuses on the novelty and cultural impact rather than any realized harm or credible risk of harm. Therefore, this is best classified as Complementary Information, providing context and insight into AI's societal impact without describing an AI Incident or Hazard.

After Katy Perry fools her own mother, can you tell an AI photo from the real thing?

2024-05-08
The Sunday Times
Why's our monitor labelling this an incident or hazard?
The AI system was used to create a realistic fake image, but the event does not describe any harm resulting from this use. There is no evidence of injury, rights violations, or other significant harms. The article highlights the convincing nature of AI-generated images but does not indicate any incident or hazard beyond that. Therefore, this is best classified as Complementary Information, as it provides context on AI's capabilities and societal reactions without reporting an incident or hazard.

Katy Perry's Mom Fell for Those AI Photos, Too

2024-05-07
The Cut
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate realistic fake images of celebrities, which were widely disseminated and believed to be real by many, including Katy Perry's own mother. This misinformation can be considered harm to communities through the spread of false information and potential reputational damage. Since the AI-generated content directly led to this harm, it qualifies as an AI Incident under the framework's definition of harm to communities caused by AI systems.

Katy Perry's mother 'fooled' by 2024 Met Gala AI picture

2024-05-07
The News International
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate fake images depicting a public figure at an event she did not attend, leading to public deception and misinformation. The AI system's outputs directly caused confusion and false beliefs among fans and family members, which is a form of harm to communities. Although the harm is non-physical, it fits within the definition of an AI Incident as it involves realized harm caused by AI-generated misinformation. Therefore, this event is classified as an AI Incident.

Katy Perry went viral after an AI-generated photo crowned her as one of the best dressed at the 2024 MET Gala

2024-05-08
Hola.com
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images that falsely depict Katy Perry at an event she did not attend. While this is a clear example of AI-generated synthetic media, the event does not report any direct or indirect harm such as health injury, rights violations, or significant community harm. The confusion caused is limited and resolved by Katy Perry's clarification. Therefore, this is not an AI Incident. It also does not present a plausible future harm scenario beyond the current event, so it is not an AI Hazard. The article mainly reports on the viral spread of AI-generated content and the social reaction, which fits the definition of Complementary Information as it provides context and understanding of AI's impact on media and public perception without describing a new harm or risk.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-07
National Post
Why's our monitor labelling this an incident or hazard?
AI-generated images were used to create false impressions of celebrities attending an event, leading to misinformation and deception among the public. The AI system's outputs directly caused this harm by misleading people. Although the harm is non-physical, it affects communities through misinformation, which fits within the definition of harm to communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Fake image of Katy Perry at the Met Gala fooled her own mother

2024-05-07
National Post
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated images that mislead people, which is a form of misinformation and can cause harm to individuals' reputations and potentially to communities by spreading false information. Since the AI system's use has directly led to the dissemination of misleading content that fooled people, this constitutes an AI Incident due to harm to communities through misinformation and deception.

Social Media Users are Fooled By AI Photos of Met Gala

2024-05-07
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic fake images that were widely shared and believed, leading to misinformation and deception of the public. This constitutes harm to communities by spreading false information and misleading users, which fits the definition of an AI Incident. The harm is realized as users were fooled and expressed embarrassment and concern about AI's impact on trust and information authenticity.

'Society is in trouble': People are getting bamboozled by these AI-generated photos of Katy Perry, Rihanna at the Met Gala

2024-05-07
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images (deepfakes) that are being widely shared and believed to be real, misleading the public. This use of AI has directly caused harm by spreading false information and infringing on the celebrities' rights to control their image and brand. The harm is realized and ongoing, not merely potential, fitting the definition of an AI Incident due to violations of rights and harm to communities through misinformation and reputational damage.

Katy Perry Fans Fooled by AI Photos of Her Dress at the 2024 Met Gala

2024-05-07
Life & Style
Why's our monitor labelling this an incident or hazard?
The AI system generated realistic fake images of a celebrity, which were widely shared and believed, causing misinformation and deception. This fits the definition of an AI Incident because the AI's use directly led to harm to communities through misinformation. Although the harm is non-physical, it is significant and clearly articulated. The event is not merely a potential risk (hazard) or complementary information, but an actual incident of AI-generated misinformation causing harm.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-07
San Diego Union-Tribune
Why's our monitor labelling this an incident or hazard?
The event describes the use of generative AI to create realistic fake images (deepfakes) that misled people, including fans and family members, about celebrity appearances. This constitutes harm to communities through misinformation and deception, and potentially violates rights related to identity and reputation. The AI system's use directly led to these harms by generating and spreading false content. Therefore, this qualifies as an AI Incident. The article also discusses broader implications and governance responses, but the primary focus is on the realized harm caused by the AI-generated images.

Could a deepfake take over your life?

2024-05-10
Daily Maverick
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that mislead social media audiences, which is a clear AI system use. While the article describes actual dissemination of misleading AI-generated content, the harm described is primarily reputational and informational, with no direct or concrete harm such as physical injury, legal rights violations, or critical infrastructure disruption reported. The main focus is on the potential dangers of such AI misuse in the future and the societal responses to it. Therefore, this is best classified as Complementary Information, as it provides context, warnings, and responses related to AI-generated disinformation rather than documenting a specific AI Incident or an AI Hazard event causing or plausibly causing harm at this time.

Katy Perry, Rihanna Absence Sparks AI-Generated Images at Met Gala

2024-05-08
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create realistic fake images that have been disseminated widely, misleading people including close family members. This constitutes an AI Incident because the AI-generated content has directly led to misinformation and deception, which harms communities by undermining trust and spreading false narratives. The article also references the broader context of AI misuse for disinformation and related harms, reinforcing the classification as an AI Incident rather than a mere hazard or complementary information.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-07
Financial Post
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake images of celebrities attending an event they did not attend, leading to misinformation and deception among fans, including Perry's own mother. This constitutes harm to communities through the spread of false information. Since the AI-generated content has already caused confusion and deception, this qualifies as an AI Incident due to realized harm from the AI system's outputs.

Deepfake Alert! AI generated Met Gala images of Rihanna and Katy Perry go viral

2024-05-07
WION
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that were widely shared and believed to be authentic, leading to misinformation and potential harm to the reputations of the celebrities involved and misleading the public. This constitutes harm to communities through misinformation and deception. Since the AI-generated content has already been disseminated and caused confusion, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

Katy Perry & Dua Lipa AI Fakes Took the Fun Out of Met Gala

2024-05-07
Teen Vogue
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images of celebrities that were widely shared and believed to be real, causing misinformation and public confusion. The AI system's role in creating and disseminating these fake images directly led to harm in the form of misleading the public and distorting reality, which fits the definition of an AI Incident due to harm to communities. Although no physical harm occurred, the social harm from misinformation is significant and clearly articulated.

AI-generated images of Katy Perry went viral, convincing many internet users -- including Perry's mother -- that the singer attended the 2024 Met Gala wearing a floral gown. Perry said on Instagram that she did not attend the gala.

2024-05-07
PolitiFact
Why's our monitor labelling this an incident or hazard?
The AI system generated realistic but fake images that were widely believed to be real, causing misinformation and confusion among the public. This misinformation can be considered harm to communities, as it distorts public understanding and trust. Since the AI-generated images directly led to this misinformation harm, the event qualifies as an AI Incident under the definition of harm to communities.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-07
WTOP
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated images (deepfakes) that fooled some people but does not report any direct injury, rights violation, or significant harm caused by these images. The event illustrates the capabilities and risks of generative AI but does not document an AI Incident (harm realized) or an AI Hazard (plausible future harm from a specific event). Instead, it discusses the broader societal concerns and governance challenges related to AI misuse, making it Complementary Information according to the framework.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-07
Toronto Sun
Why's our monitor labelling this an incident or hazard?
AI-generated deepfake images were created and disseminated, causing misinformation and misleading the public, including the celebrities themselves. This constitutes harm to communities through the spread of false information and deception. Since the AI system's use directly led to this misinformation harm, this qualifies as an AI Incident.

Met Gala deepfakes abound, trick fans, even Katy Perry's mom

2024-05-07
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images that have fooled people into believing false information about celebrity attendance at a major event. The AI system's outputs have directly caused misinformation and deception, which is a form of harm to communities. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.

AI Themed Met Gala? Rihanna and Katie Perry Shrug It Off But Deepfakes Are No Laughing Matter

2024-05-07
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake images, which are a form of synthetic media created by generative AI. Although the deepfakes have fooled people and spread widely, the article does not report any realized harm such as fraud, defamation, or other direct injury. Instead, it emphasizes the potential dangers of such technology and the need for detection and transparency measures. Therefore, this event fits the definition of an AI Hazard, as the development and use of AI-generated deepfakes could plausibly lead to harms like misinformation, fraud, or social disruption. The article also includes discussion of societal and technical responses, but the primary focus is on the plausible future harm from deepfakes rather than a completed incident or a complementary update to a past incident.

AI-generated images of Met gala fools Internet

2024-05-09
Hurriyet Daily News
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative AI for image creation) and their use to create misleading content. While the images fooled some people, including a celebrity's mother, there is no evidence in the article that this has directly caused harm such as injury, rights violations, or significant disruption. The concerns raised about disinformation, scams, and election manipulation are warnings about plausible future harms rather than realized harms. Therefore, this event fits the definition of an AI Hazard, as the AI-generated deepfakes could plausibly lead to incidents of misinformation or other harms in the future, but no such incident has yet occurred according to the article.

Katy Perry Forced to Clarify After AI Photos of Her at Met Gala 2024 Go Viral

2024-05-07
AceShowbiz
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated photos that falsely depict Katy Perry at an event she did not attend. While the AI system was used to create misleading content, the article does not report any direct harm such as health injury, rights violations, or significant disruption. The event is about viral AI-generated misinformation and the celebrity's response clarifying the truth. This fits the definition of Complementary Information, as it enhances understanding of AI's societal impact and public perception but does not constitute an AI Incident or AI Hazard.

Katy Perry says her own mom was fooled by AI images of her at Met Gala

2024-05-08
Portland Press Herald
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images that were widely shared and believed, causing misinformation and confusion. This fits the definition of an AI Incident because the AI-generated content has directly led to harm in the form of misinformation and deception affecting communities. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in creating and spreading these fake images. Therefore, it is classified as an AI Incident.

Garden Of Deepfake! Katy Perry's AI Generated Met Gala Look Goes Viral

2024-05-07
Capital FM Kenya
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating a deepfake image, which fooled a person (Katy Perry's mother). This shows AI use and its potential to mislead. However, the article does not describe any harm occurring or any disruption caused by this AI-generated content. There is no evidence of injury, rights violations, or other harms. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI-generated content and public reaction without describing harm or plausible harm.

FACT CHECK: Image Claims To Show Katy Perry At The Met Gala

2024-05-08
Check Your Fact
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic but fake image of a public figure at a high-profile event, which was then disseminated and believed by many, including close relatives. This constitutes harm to communities by spreading misinformation and misleading the public. Since the AI-generated content directly led to this misinformation harm, this qualifies as an AI Incident under the framework, specifically harm to communities through false information dissemination.

Katy Perry's Fan-Made AI Image Is So Real It Fooled the World Into Thinking She Was at the Met Gala

2024-05-08
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article describes the creation and spread of AI-generated images that misled people, including Katy Perry's own mother, about her presence at the Met Gala. This involves AI systems (Bing's Copilot Designer and Leonardo.AI) used to generate realistic images. While the images caused misinformation, there is no direct or indirect harm such as injury, rights violations, or disruption of critical infrastructure. The event highlights ethical concerns and public perception issues related to AI-generated content, which aligns with Complementary Information as it informs about societal and governance responses and debates around AI misuse. There is no indication of realized harm or credible plausible future harm that would qualify as an AI Incident or AI Hazard.

Katy Perry urges 'hold on to your common sense hat' after viral fake Met photo

2024-05-09
The Irish News
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images that have gone viral, misleading people including Katy Perry's own mother. This is a clear example of AI-generated misinformation. While the misinformation is spreading, the article does not report any realized harm such as health injury, rights violations, or societal disruption. Katy Perry's warning about the upcoming election suggests a credible risk of future harm from such AI-generated content. Therefore, this event fits the definition of an AI Hazard, as the AI-generated images could plausibly lead to harm such as misinformation influencing public opinion or election interference, but no direct harm has yet occurred.

With AI Fake Photos of Met Gala Swirling Online, Viewers Call for Ban on Dupes

2024-05-07
The New York Sun
Why's our monitor labelling this an incident or hazard?
AI systems were explicitly used to generate fake images of Rihanna and Katy Perry at the Met Gala, which were widely circulated and believed to be real by many viewers. This misinformation can harm public trust, mislead audiences, and potentially damage the reputations of the individuals depicted. The harm to communities through misinformation dissemination is a recognized form of AI harm. Since the harm is realized (the images have been widely shared and believed), this qualifies as an AI Incident rather than a hazard or complementary information.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-07
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake images (deepfakes) of celebrities, which were spread online and fooled some individuals, including a celebrity's own mother. This constitutes direct harm through misinformation and deception, impacting public trust and potentially causing reputational harm. The article explicitly discusses the harm caused by such AI-generated content and the societal risks it poses, including disinformation and abuse. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities through misinformation and deception.

Katy Perry fools internet with fake AI photos of Met Gala

2024-05-07
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images (deepfakes) that have been disseminated online and have fooled viewers, including Katy Perry's own mother. This constitutes a violation of trust and can be considered harm to communities by spreading misinformation and potentially damaging reputations. Since the AI-generated images have already been viewed and caused confusion or deception, this is a realized harm rather than a potential one. Therefore, this qualifies as an AI Incident due to the direct involvement of AI-generated content causing harm through misinformation and deception.

Katy Perry and Rihanna didn't attend the Met Gala. But AI-generated images still fooled fans

2024-05-07
The Tribune
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake images that were widely disseminated and believed by some, including close family members, demonstrating direct harm through misinformation and reputational impact. The article explicitly mentions the harm caused by such AI-generated content, including nonconsensual deepfakes and potential societal harms like disinformation and election interference. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities (misinformation) and individuals (reputational harm).

This Photo of Katy Perry at the 2024 Met Gala Went Viral. It's Fake.

2024-05-07
K945
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic fake images of a public figure at a high-profile event, which went viral and misled the public. Although fake, the AI-generated images caused misinformation and potential reputational harm. The platform's addition of a context label indicates recognition of the misinformation. This constitutes an AI Incident because the AI system's use directly led to harm in the form of misinformation and deception affecting public perception and potentially the individual's reputation.

Katy Perry calls out AI fake Met Gala photo of herself; Lady Gaga and Rihanna also AI'd into the event

2024-05-07
WCPZ
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake images of celebrities at a high-profile event, which were then shared publicly. While the AI-generated content is deceptive, the article does not report any realized harm such as injury, rights violations, or significant disruption. The potential for misinformation or reputational harm exists, but since no such harm is described as having occurred, this situation represents a plausible risk rather than an actual incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Met Gala 2024: Deepfakes Haunt Celebs

2024-05-07
SheThePeople
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake images of celebrities that have been widely shared and believed, causing misinformation and deception. The AI system's use in generating these realistic fake images directly leads to harm by misleading the public and potentially infringing on the celebrities' rights. This fits the definition of an AI Incident as it involves the use of AI leading to harm to communities and violations of rights through misinformation and unauthorized use of likenesses.

Don't be fooled by AI, Katy Perry didn't attend the Met Gala

2024-05-09
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article focuses on AI-generated fake images of a celebrity at an event she did not attend. While this involves AI-generated content, there is no indication that this has caused injury, rights violations, or other significant harm. The article does not report any actual incident of harm but rather illustrates a case of AI-generated misinformation that could plausibly lead to confusion or misinformation in the future. Therefore, it fits the definition of an AI Hazard, as the AI-generated images could plausibly lead to harm such as misinformation or reputational damage, but no harm has yet occurred or been reported.

Fake photos of Katy Perry at the Met Gala go viral, and the star reveals the mix-up with her mother

2024-05-07
Band
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the photos circulating were generated by artificial intelligence and were false, leading to public confusion and misinformation. The AI system's use directly led to the spread of false information, which is a form of harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated fake images.

AI-generated images of the Met Gala fooled even Katy Perry's mother

2024-05-08
Publico
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate realistic images of a celebrity at an event she did not attend, which misled a close individual. This involves AI-generated content and potential misinformation. However, the article does not report any realized harm such as reputational damage, rights violations, or broader societal harm. The event represents a plausible risk of misinformation and deception through AI-generated images but does not document an incident of harm. Therefore, it qualifies as an AI Hazard, as the AI-generated images could plausibly lead to harm if such misinformation spreads more widely or is used maliciously.

Katy Perry didn't go to the Met Gala, but even her mother was fooled: the viral photograph was created by artificial intelligence

2024-05-07
Jornal Expresso
Why's our monitor labelling this an incident or hazard?
An AI system was used to create synthetic images that misled people into believing Katy Perry attended the Met Gala. This constitutes the use of AI to generate false content that can cause misinformation and potential reputational harm. However, there is no indication that any direct or indirect harm (such as injury, rights violations, or significant community harm) has occurred or is ongoing. The event highlights the misuse of AI-generated content but does not describe realized harm or a credible imminent risk of harm. Therefore, it is best classified as Complementary Information, as it provides context on AI's role in misinformation without a specific AI Incident or Hazard occurring.

Katy Perry at the Met Gala? AI-made photos fool internet users, and the singer denies it: "I didn't go"

2024-05-07
Correio do povo
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content (images) that caused public confusion but did not result in any direct or indirect harm such as injury, rights violations, or disruption. The AI system's role is central to the creation of the images, but no harm or plausible future harm is described. This is a case of AI-generated misinformation or deepfake images causing social confusion but not meeting the threshold for harm or incident. Therefore, it is best classified as Complementary Information, as it provides context on AI's impact on public perception without constituting an AI Incident or Hazard.

Katy Perry causes confusion by posting fake photos of the Met Gala in New York

2024-05-07
UOL notícias
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate fake images (content generation), which were then disseminated publicly, causing misinformation and deception. This constitutes harm to communities by spreading false information and misleading the public. Since the AI-generated content directly led to confusion and deception, it qualifies as an AI Incident under the harm category of harm to communities (d).

Katy Perry at the Met Gala? Artificial intelligence fools even the singer's mother

2024-05-07
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The article focuses on AI-generated images that misled people but does not describe any harm resulting from these images. There is no mention of injury, rights violations, or other significant harms caused by the AI system's use. Therefore, this is not an AI Incident. It also does not describe a plausible future harm scenario or risk that could lead to harm, so it is not an AI Hazard. The main content is about the AI-generated images causing confusion, which is a known phenomenon but without reported harm here. This fits best as Complementary Information about AI's impact on social perception and misinformation potential.

Katy Perry: the AI-generated photo at the 2024 Met Gala fools her mother

2024-05-08
Sky
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a synthetic image that deceived people about a real-world event. However, there is no indication that this caused any direct or indirect harm such as injury, rights violations, or disruption. The event describes a case of AI-generated content causing misinformation or deception on a personal level but does not report any significant harm or plausible future harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is a news item about AI-generated content and its social impact, which fits best as Complementary Information.

When AI fools even Katy Perry's mom

2024-05-08
il Giornale.it
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating highly realistic fake images (deepfakes) that deceive people, including close family members. While no direct harm has yet occurred, the article highlights the credible risk that such AI-generated misinformation could be used to manipulate elections and sabotage democracy. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm to communities and democratic processes in the near future. There is no indication of an actual AI Incident having occurred, nor is the article primarily about responses or updates, so it is not Complementary Information.

Artificial intelligence fooled even Katy Perry's mother: what happened during the Met Gala

2024-05-08
superEva
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that fooled people, including Katy Perry's mother, demonstrating the AI system's role in creating and spreading misleading content. This constitutes an AI Incident because the AI system's use directly led to harm in the form of misinformation and deception, which affects communities and individuals' understanding of reality. Although the harm is non-physical, it is significant and clearly articulated, fitting the definition of harm to communities. Therefore, this event is classified as an AI Incident.