Emirates slams TikTok and X over viral AI-generated plane crash hoax


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Emirates has condemned social platforms such as TikTok and X for their slow removal of AI-generated videos depicting a fabricated crash of a jet in its livery. Despite being evidently computer-generated, the clips have gone viral, fuelling misinformation and alarming air travellers amid heightened aviation safety concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

AI systems are used to generate fabricated videos of plane crashes, which could plausibly lead to harm by spreading false and alarming information that might cause public panic or reputational damage. However, since these videos are not real incidents and no direct or indirect harm has occurred yet, this qualifies as an AI Hazard rather than an AI Incident. The airline's efforts to remove or label the content further indicate the potential risk rather than an actualized harm event.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Safety; Human wellbeing

Industries
Media, social platforms, and marketing; Travel, leisure, and hospitality; Digital security

Affected stakeholders
Business; General public

Harm types
Reputational; Psychological; Economic/Property; Public interest

Severity
AI hazard

Business function:
Monitoring and quality control; Marketing and advertisement

AI system task:
Content generation; Organisation/recommenders; Event/anomaly detection


Articles about this incident or hazard


Emirates: Social Media Slow To Act On Fake Plane Crash Video

2025-01-05
Forbes
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating fake video content, which is causing misinformation and public concern, a form of harm to communities. However, no direct or indirect physical harm, violation of rights, or disruption has occurred as a result of the AI-generated video. The main focus is on the airline's response and the slow action of social media platforms to remove the content. This fits the definition of Complementary Information, as it updates on societal and governance responses to AI-generated misinformation rather than describing a new AI Incident or AI Hazard.

Emirates sends warning over AI-generated plane crash TikTok videos

2025-01-07
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is in the generation of fabricated videos (use of AI for content creation). While these videos could plausibly lead to harm such as public panic or reputational damage, the article does not report any realized harm such as injury, disruption, or rights violations directly caused by the AI-generated content. The main focus is on the airline's response to misinformation and efforts to mitigate its spread. Therefore, this event is best classified as Complementary Information, as it provides context and updates on societal responses to AI-generated misinformation rather than describing a direct AI Incident or a plausible AI Hazard causing harm.

Emirates sends warning over AI-generated plane crash TikTok videos

2025-01-07
The Independent
Why's our monitor labelling this an incident or hazard?
AI systems are used to generate fabricated videos of plane crashes, which could plausibly lead to harm by spreading false and alarming information that might cause public panic or reputational damage. However, since these videos are not real incidents and no direct or indirect harm has occurred yet, this qualifies as an AI Hazard rather than an AI Incident. The airline's efforts to remove or label the content further indicate the potential risk rather than an actualized harm event.

Emirates responds to fake plane crash video circulating on social media

2025-01-04
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The video is AI-generated or digitally altered content (implied by the phrase 'digitally created footage') that spreads false information about a plane crash. However, there is no actual harm caused by the AI system itself, only misinformation that is being addressed. Since the video is fabricated and no real incident has occurred, and the main focus is on the response to misinformation, this qualifies as Complementary Information rather than an AI Incident or Hazard.

Emirates Declares Plane Crash "Fabricated Content" Amid Recent String Of Deadly Plane Crashes

2025-01-05
TheTravel
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating fabricated crash videos that are misleading the public. Emirates has confirmed these videos are AI-generated and false. The videos have not caused any physical harm or direct injury but could plausibly lead to harm by spreading misinformation and causing public fear or disruption. Since the harm is potential and not realized, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the existence and circulation of AI-generated fabricated content with potential for harm, not on an actual AI-caused incident.

Social media video on plane crash fabricated, says Emirates

2025-01-04
The Gulf Today
Why's our monitor labelling this an incident or hazard?
The event centers on the dissemination of false information via a fabricated video, which is a form of misinformation. However, there is no indication that an AI system was involved in creating or spreading the video, nor that any actual harm has occurred or is plausibly expected from an AI system's development, use, or malfunction. The focus is on debunking misinformation and the company's response, which aligns with complementary information about managing false content rather than an AI incident or hazard. Therefore, this is best classified as Complementary Information.

Emirates calls out social media platforms over fake plane crash video

2025-01-04
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The video is AI- or computer-generated content (implying the use of AI or similar technology) that is fake and misleading. While it is causing concern and misinformation, there is no indication that the video has directly caused harm such as injury, disruption, or rights violations. The main issue is the potential for harm through misinformation and public alarm if the video spreads unchecked. Therefore, this qualifies as an AI Hazard: the AI-generated content could plausibly lead to harm, but no harm has yet materialized.

Emirates Slams TikTok and Elon Musk's X For Delay in Removing Fake AI Video of Plane Crash

2025-01-04
Paddle Your Own Kanoo
Why's our monitor labelling this an incident or hazard?
The event describes a fake AI-generated video of a plane crash spreading widely on social media, causing false and alarming information to circulate. This misinformation can harm communities by causing distress and confusion, which fits the definition of harm to communities under AI Incident. The AI system's involvement is explicit in generating the fake video. The harm is realized as the video is actively shared and believed by some, not just a potential risk. The delay in content removal by platforms contributes to the ongoing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Emirates Denounces Fake Plane Crashes

2025-01-07
AVweb
Why's our monitor labelling this an incident or hazard?
The videos are explicitly described as AI-generated, indicating the involvement of AI systems in creating fabricated content. However, the harm described is indirect and potential, such as misinformation and reputational damage, rather than direct physical harm or legal violations. Since the videos are circulating but no actual crash or injury has occurred, this situation represents a plausible risk of harm rather than a realized incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.