AI-Generated Zelda Movie Posters Cause Widespread Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated posters created with DALL-E and Midjourney falsely suggested that a Netflix Legend of Zelda film starring Tom Holland was in production. Despite creator Dan Leveille's clarification that the images were fakes, the realistic posters went viral, misleading thousands on social media and fueling misinformation about a non-existent movie.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event clearly involves AI systems (Midjourney, DALL-E) used to generate fake images that have been widely shared and mistaken for real, causing misinformation and confusion among the public. This misinformation can be classified as harm to communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as people have been misled and discussions have been influenced by the fake posters. Although the harm is social and informational rather than physical or legal, it fits within the broad scope of harms defined for AI Incidents. Hence, the classification is AI Incident.[AI generated]
AI principles
Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

No, a new 'Zelda' series isn't coming to Netflix

2022-10-12
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images that temporarily misled some people, but the creator clarified that the images were not real. There is no evidence of harm such as misinformation causing societal disruption, rights violations, or other damage. The event does not describe a realized AI Incident or a plausible future harm (AI Hazard). It is primarily a news item about AI-generated content and public reaction, which fits the category of Complementary Information, as it enhances understanding of AI's impact on media and public perception without reporting harm.

'Zelda' Fake Posters Convince People There's A Tom Holland Film Coming

2022-10-11
UPROXX
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake images that caused some people to believe a false movie announcement, which fits the definition of an AI system's use leading to misinformation. However, the article does not report any direct or indirect harm such as injury, rights violations, or significant community harm. The misinformation is limited to a false belief about a movie's existence, which is not clearly linked to significant harm as defined. Since no harm has materialized but the AI-generated content could plausibly lead to misinformation-related harms if scaled or used maliciously, this is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the AI-generated fake posters and their misleading effect, not on responses or updates to prior incidents.

Fake Zelda Netflix Posters Blow Up, Make People Think Tom Holland Will Play Link

2022-10-11
Kotaku
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (Midjourney, DALL-E) used to generate fake images that have been widely shared and mistaken for real, causing misinformation and confusion among the public. This misinformation can be classified as harm to communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as people have been misled and discussions have been influenced by the fake posters. Although the harm is social and informational rather than physical or legal, it fits within the broad scope of harms defined for AI Incidents. Hence, the classification is AI Incident.