AI-Generated Deepfake Video Falsely Depicts CN Tower Fire, Spreads Misinformation

An AI-generated deepfake video falsely showing Toronto's CN Tower on fire went viral on Facebook, amassing roughly 12 million views and thousands of shares. Although officials confirmed that no fire had occurred, the video misled viewers and spread misinformation widely. The incident highlights the harm that AI-generated fake content can do to public trust and information integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event clearly involves an AI system: a deepfake video was generated showing a false fire at the CN Tower. The video went viral, causing a spike in Google searches and public concern, which constitutes harm to communities through misinformation and potential panic. Because the AI-generated content directly led to this harm, the event qualifies as an AI Incident under the framework, specifically as harm to communities through misinformation. The harm is not merely potential: the misinformation has already spread and caused public confusion, so the event is not a hazard or complementary information.[AI generated]
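
The labelling logic described above amounts to a small decision procedure: AI involvement first, then whether harm has been realised or is only plausible. The sketch below is an illustrative reconstruction under those assumptions; the function name, parameters, and category handling are hypothetical, not the monitor's actual implementation.

```python
from enum import Enum

class Label(Enum):
    AI_INCIDENT = "AI Incident"                  # harm has already materialised
    AI_HAZARD = "AI Hazard"                      # harm is plausible but not realised
    COMPLEMENTARY = "Complementary Information"  # context/follow-up on a known event
    UNRELATED = "Unrelated"                      # AI is not central to the event

def classify_event(ai_involved: bool,
                   harm_realised: bool,
                   harm_plausible: bool,
                   is_followup: bool) -> Label:
    """Hypothetical sketch of the monitor's labelling decision.

    Mirrors the reasoning in the explanation above: an event is an
    AI Incident only if an AI system is involved AND harm (including
    non-physical harm such as widespread misinformation) has already
    occurred; if harm is merely plausible, it is an AI Hazard.
    """
    if not ai_involved:
        return Label.UNRELATED
    if is_followup:
        return Label.COMPLEMENTARY
    if harm_realised:
        return Label.AI_INCIDENT
    if harm_plausible:
        return Label.AI_HAZARD
    return Label.COMPLEMENTARY

# The CN Tower deepfake: AI-generated, and the misinformation already spread.
print(classify_event(ai_involved=True, harm_realised=True,
                     harm_plausible=True, is_followup=False))
# -> Label.AI_INCIDENT
```

The divergent labels among the syndicated articles below turn on exactly this harm-realised judgment: explanations that treat the viral spread itself as realised harm conclude AI Incident, while those that require physical or disruptive harm conclude AI Hazard.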
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

FACT FILE: AI video sparks false rumour of CN Tower fire

2025-09-23
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The video is AI-generated or manipulated (a deepfake or similar) and falsely shows the CN Tower on fire. This constitutes misinformation that could plausibly lead to harm such as public panic or disruption, but no actual harm has been reported. It is therefore an AI Hazard: the AI system's use in generating or manipulating the video could plausibly lead to harm, but no direct harm has yet occurred.

The CN Tower isn't on fire. Google searches spike after AI video shows flames above Toronto's skyline

2025-09-23
The Star
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video showing flames on the CN Tower, which is disinformation. This could plausibly lead to harm such as public panic or further misinformation, but the article indicates the video has been recognized as fake and that no actual fire or injury occurred. The event is therefore a potential risk scenario rather than an incident with realized harm, fitting the definition of an AI Hazard: the AI-generated content could plausibly cause harm (misinformation, public confusion), but no direct or indirect harm has materialized according to the article. It is not Complementary Information, since the main focus is the AI-generated disinformation itself, nor Unrelated, since AI is central to the event.

Fact File: AI video sparks false rumour of CN Tower fire

2025-09-23
The Star
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system: a deepfake video was generated showing a false fire at the CN Tower. The video went viral, causing a spike in Google searches and public concern, which constitutes harm to communities through misinformation and potential panic. Because the AI-generated content directly led to this harm, the event qualifies as an AI Incident under the framework, specifically as harm to communities through misinformation. The harm is not merely potential: the misinformation has already spread and caused public confusion, so the event is not a hazard or complementary information.

Fact File: AI video sparks false rumour of CN Tower fire

2025-09-23
Winnipeg Free Press
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake video depicting a false event, a fire at the CN Tower. The video went viral, spreading misinformation that could damage public perception and trust, which constitutes harm to communities. Because that harm (misinformation and public alarm) has occurred as a result of the AI-generated content, the event qualifies as an AI Incident: the AI system's use directly led to the spread of false information causing social harm, even though no physical damage occurred.

The CN Tower is not on fire -- here's why a viral deepfake video of the Toronto landmark is making people think it is

2025-09-23
Yorkregion.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake video generation) used to create and disseminate a false video showing the CN Tower on fire. This directly led to misinformation and public confusion, a form of harm to communities and to individuals' ability to judge what is real. The article confirms the video is fake and discusses the social consequences of such AI-generated misinformation. Because the harm (misinformation and social disruption) has occurred and is directly linked to the AI system's use, the event qualifies as an AI Incident. The article also provides context on the broader implications of deepfake misuse, but the primary event is the viral deepfake causing realized harm.

Fact File: AI video sparks false rumour of CN Tower fire

2025-09-23
Winnipeg Sun
Why's our monitor labelling this an incident or hazard?
The video was created with the help of artificial intelligence to produce a realistic but false depiction of a fire at the CN Tower. The AI-generated content went viral, misleading millions and spreading misinformation through the community. Because the AI system's use directly led to the spread of false information, a form of harm to communities under the AI Incident definition, and that harm is realized rather than merely potential, the event is classified as an AI Incident rather than a hazard or a complementary update.

Fact File: AI video sparks false rumour of CN Tower fire

2025-09-23
thepeterboroughexaminer.com
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated deepfake content, indicating AI system involvement in its creation. The harm is misinformation and potential public alarm, which could affect communities indirectly. Since no actual fire or physical harm occurred, and the harm is potential rather than realized, this event fits the definition of an AI Hazard rather than an AI Incident. The AI system's use could plausibly lead to harm if such misinformation spreads widely and causes disruption or panic.

Fact File: AI video sparks false rumour of CN Tower fire

2025-09-23
Prince George Citizen
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated deepfake content that falsely shows a fire at the CN Tower, a famous landmark. The AI system was used to generate misleading visual content that has been widely disseminated, causing misinformation. Although no physical harm or damage occurred, the spread of false information about a major landmark can cause public alarm and damage community trust, which qualifies as harm to communities. The event therefore constitutes an AI Incident because of the direct role of AI-generated content in causing misinformation and social harm.

Fact File: AI video sparks false rumour of CN Tower fire

2025-09-23
Lethbridge News Now
Why's our monitor labelling this an incident or hazard?
The video is an AI-generated deepfake falsely showing the CN Tower on fire, which is misinformation. The AI system's use in creating this misleading content could plausibly lead to harm such as public panic or disruption, fitting the definition of an AI Hazard. Since no actual fire or harm occurred, it is not an AI Incident. The event is more than general AI news because it involves a specific AI-generated misleading video with potential for harm, so it is neither Complementary Information nor Unrelated.

Fact File: AI video sparks false rumour of CN Tower fire

2025-09-23
CityNews Halifax
Why's our monitor labelling this an incident or hazard?
The video is an AI-generated deepfake: an AI system created false visual content showing a fire at a landmark. The fire did not actually occur, so no physical harm materialized; the harm is reputational and informational, a form of harm to communities rather than physical or legal harm. That harm is realized, because the false video spread virally and misled the public, so the event qualifies as an AI Incident rather than a hazard or complementary information: the AI system's use directly led to the spread of false information damaging community trust and potentially public order.

A video of the CN Tower on fire went viral on Facebook. The problem? It's fake

2025-09-25
CBC News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video falsely showing the CN Tower on fire. The video went viral, causing public alarm and misinformation, which is harm to communities. The article discusses the direct consequences of the AI-generated content, including the spread of false information apparently intended to cause panic, which fits the definition of an AI Incident due to realized harm (misinformation and public alarm). Although no physical damage occurred, the damage to community trust and the spread of false information are significant and directly linked to the AI system's use, so the event qualifies as an AI Incident.

Fact-check shows viral CN Tower fire video was AI-generated

2025-09-25
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake video depicting a fire at the CN Tower, a false and misleading visual. Although no actual fire occurred and no physical harm resulted, the AI-generated content spread misinformation and damaged public perception and trust. Because the AI system's use directly led to the dissemination of false, harmful content, causing harm to communities through misinformation, the event fulfils the criteria for an AI Incident.