‘Megalopolis’ Trailer Pulled for AI-Generated Fake Critic Quotes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Lionsgate’s promotional trailer for Francis Ford Coppola’s “Megalopolis” used AI-generated quotes falsely attributed to renowned film critics to disparage his earlier work. Once exposed, the studio withdrew the trailer, apologized to the critics and Coppola, and severed ties with the marketing consultant responsible.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI-generated fabricated quotes were included in a public film trailer, causing reputational damage to the film and studio and resulting in job loss for a marketing consultant. The AI system's hallucination (fabrication of quotes) directly led to these harms. The involvement of AI in content generation and the resulting negative consequences meet the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's malfunction (hallucination).[AI generated]
AI principles
Transparency & explainability
Accountability
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing
Arts, entertainment, and recreation

Affected stakeholders
Workers

Harm types
Reputational
Economic/Property

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation

Articles about this incident or hazard

Megalopolis Trailer Fiasco: How To Avoid Your Own AI Content Blunders

2024-08-27
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated fabricated quotes were included in a public film trailer, causing reputational damage to the film and studio and resulting in job loss for a marketing consultant. The AI system's hallucination (fabrication of quotes) directly led to these harms. The involvement of AI in content generation and the resulting negative consequences meet the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's malfunction (hallucination).
AI was responsible for the fake quotes in the Megalopolis trailer

2024-08-25
The Verge
Why's our monitor labelling this an incident or hazard?
The AI system was used to create fabricated quotes falsely attributed to critics, which constitutes misinformation and reputational harm. The harm is realized: the false quotes were publicly disseminated, misleading audiences and damaging the credibility of the film and its marketing. The involvement of AI in generating these falsehoods is explicit and central to the incident. Hence, this event meets the criteria for an AI Incident, as it directly led to harm to communities (misinformation) and reputational damage.
Surprise: Fake Quotes in 'Megalopolis' Trailer Were Generated by AI

2024-08-26
PC Magazine
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fabricated quotes that were falsely attributed to critics, leading to a misleading marketing campaign. This constitutes harm through misinformation and deception, which affects the reputation of individuals and organizations involved. However, the harm does not rise to the level of injury, critical infrastructure disruption, or legal rights violations as defined. The event is a clear example of AI misuse causing reputational and ethical harm, which fits the definition of an AI Incident due to the direct harm caused by the AI-generated false content.
Marketing consultant fired over controversial 'Megalopolis' trailer

2024-08-26
Euronews English
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) used to generate fabricated content that was then used in a marketing trailer. This use of AI directly led to reputational harm and ethical violations in marketing practices. However, the harm is primarily reputational and ethical rather than physical injury, infrastructure disruption, or legal rights violations as defined by the framework. The incident is a misuse of AI-generated content causing harm to the community's trust and the individuals involved. Therefore, it qualifies as an AI Incident due to the realized harm caused by the AI-generated fake quotes and the subsequent fallout.
Fake Quotes In Coppola's 'Megalopolis' Were AI-Generated: Report

2024-08-26
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate fake quotes attributed to film critics, which were then used in the official trailer. This misuse of AI directly led to reputational harm to the critics and a public apology from the studio, indicating realized harm. The AI system's use in fabricating false information that misled the public and harmed individuals' reputations fits the definition of an AI Incident, as it caused harm to communities and violated rights related to intellectual property and personal reputation. The event is not merely a potential risk or a complementary update but a clear case of harm caused by AI misuse.
Fake critic quotes in 'Megalopolis' trailer result in studio dropping marketing consultant

2024-08-25
NME
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (quotes created by ChatGPT) used in a misleading way in the marketing trailer. The AI system's outputs directly led to harm by fabricating false statements attributed to real critics, damaging their reputations and misleading audiences. The studio's decision to recall the trailer and apologize confirms that the harm was recognized. This fits the definition of an AI Incident because the AI system's use directly led to a violation of intellectual property rights and harm to communities through misinformation.
New Coppola film trailer axed for using fake movie reviews

2024-08-25
WION
Why's our monitor labelling this an incident or hazard?
While AI was used to generate fake reviews, the harm described is primarily reputational and related to misleading marketing practices. There is no evidence of physical harm, violation of fundamental rights, or significant societal harm as defined in the framework. The incident is a misuse of AI-generated content but does not meet the threshold for an AI Incident. It is more appropriately classified as Complementary Information about AI's impact on the entertainment industry and marketing ethics.
Lionsgate Marketing Consultant Built Movie Trailer Filled With AI Generated Fake Movie Reviews Of Old Films

2024-08-28
Techdirt
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated fake quotes attributed to real film critics in a movie trailer, which misrepresented their opinions and deceived the public. This constitutes a violation of intellectual property and moral rights, as well as harm to the reputation of the critics and the studio. The AI system's involvement in generating false content that was disseminated publicly directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting intellectual property rights.
New Coppola film trailer axed for using fake movie reviews

2024-08-25
Kuwait Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake content (movie reviews) used in a promotional trailer, which led to misinformation and reputational harm. This fits the definition of an AI Incident because the AI system's use directly led to a harm—misleading the public and misrepresenting critics' opinions, which can be seen as a violation of intellectual property rights and ethical norms. Although the harm is non-physical, it is significant and clearly articulated. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.