AI-Generated Footage Error in Myanmar Earthquake News

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Taiwan's Minshi News (FTV) mistakenly aired AI-generated footage in a Myanmar earthquake report, spreading misinformation. The network promptly removed the footage and apologized, while the National Communications Commission (NCC) stressed stricter source verification to protect the public's right to accurate information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes a news broadcaster using AI-generated video footage that was not factual, leading to misinformation being disseminated to the public. The AI system's use directly contributed to the harm of misleading the audience, which is a form of harm to communities and a violation of rights. The broadcaster's subsequent apology and correction do not negate the fact that harm occurred. Hence, this is an AI Incident due to the realized harm caused by the AI system's misuse in news reporting.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Safety, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Business

Harm types
Public interest, Reputational

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard

FTV mistakenly airs AI-generated video in Myanmar earthquake report; NCC: fact-checking must be enforced

2025-03-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event describes a news broadcaster using AI-generated video footage that was not factual, leading to misinformation being disseminated to the public. The AI system's use directly contributed to the harm of misleading the audience, which is a form of harm to communities and a violation of rights. The broadcaster's subsequent apology and correction do not negate the fact that harm occurred. Hence, this is an AI Incident due to the realized harm caused by the AI system's misuse in news reporting.

After airing fake AI video of the Myanmar earthquake, FTV apologizes; NCC responds with three-point statement

2025-03-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating fake video content that was used in a news broadcast, leading to misinformation and public confusion. The harm is realized as the public was misled by AI-generated fake footage, which is a form of harm to communities and a violation of rights related to truthful information. The news channel's apology and the NCC's statements confirm the incident's impact and the need for remediation. Hence, this is an AI Incident due to the direct harm caused by the AI-generated fake content being broadcast as news.

Green-leaning outlet airs fake AI video with no consequences? Fan page cries foul for CTi TV: it corrected quickly yet was fined NT$1 million

2025-04-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating video content used in news reporting, which was misleading and caused misinformation harm to the public (harm to communities). The media outlet's use of AI-generated footage without proper verification led to the dissemination of false information, violating the right to accurate information. The incident has already occurred: the AI-generated content was broadcast and then retracted after public outcry. Therefore, this qualifies as an AI Incident due to realized harm from the use of AI-generated content causing misinformation. The regulatory response and comparisons to other cases are complementary information and do not change the classification of the core event as an AI Incident.

Fake AI videos of the Myanmar earthquake are circulating widely and are hard to spot; FTV issues apology after being caught out | Global | NOWnews

2025-03-31
NOWnews
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake videos that were widely disseminated, causing misinformation and confusion, which is a form of harm to communities. The news outlet's use of these AI-generated videos in their reporting directly contributed to the spread of misinformation. This constitutes an AI Incident because the AI-generated content directly led to harm in the form of misinformation and public confusion. The event involves the use and misuse of AI-generated content leading to realized harm, not just a potential risk.

After airing fake AI video of the Myanmar earthquake, FTV apologizes; NCC responds with three-point statement

2025-03-31
China Times
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating fake video content that was used in a news broadcast, leading to misinformation and public confusion. This misuse of AI-generated content directly caused harm to the community by spreading false information about a serious event, violating the public's right to truthful information. The broadcaster's apology and the regulator's corrective measures confirm the harm occurred and the AI system's role was pivotal. Hence, this is classified as an AI Incident.

Green-leaning outlet airs fake AI video with no consequences? Fan page cries foul for CTi TV: it corrected quickly yet was fined NT$1 million

2025-04-01
China Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated video content in news reporting, which is an AI system application. The AI-generated content was misleading and caused misinformation, which harms the community by distorting public knowledge and trust. The media outlet's use of this AI content without proper verification led to the dissemination of false information, fulfilling the criteria for harm to communities and violation of rights. The subsequent apology and content removal do not negate the fact that harm occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

FTV News issues official statement over misused footage in Myanmar earthquake coverage

2025-03-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article involves an AI-generated image mistakenly used in a news report, which is an AI-related issue. However, no harm to people, infrastructure, rights, property, or communities is reported or implied. The news outlet's corrective actions and apology aim to maintain public trust and accuracy. This fits the definition of Complementary Information, as it provides context and response to an AI-related error without describing an AI Incident or Hazard.

FTV mistakenly uses AI imagery in Myanmar earthquake report; NCC: fact-checking has already been required

2025-04-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved as the source of the misleading imagery used in the news report. The misuse of AI-generated content led to the dissemination of false information about a significant natural disaster, which constitutes harm to communities by spreading misinformation. The broadcaster's apology and content removal indicate the harm was realized. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing misinformation harm.

TV news uses AI-generated imagery; NCC expects to release standard AI guidelines in the third quarter | UDN

2025-04-02
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-generated images) in news reporting, but the harm (misleading viewers) is addressed promptly with apology and content removal. The article focuses mainly on regulatory and governance responses to prevent future misuse rather than describing an actual harm incident or a plausible future harm scenario. Therefore, it fits the definition of Complementary Information, as it provides context and updates on governance and societal responses to AI use in media.

Netizens flag AI video of Myanmar earthquake in news broadcast; NCC: FTV misused it and has issued a correction | Business | CNA

2025-03-31
Central News Agency
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating video content that was used in a news broadcast, leading to misinformation. This constitutes a violation of journalistic standards and potentially harms the public by spreading false information, which can be considered harm to communities. The broadcaster's misuse of AI-generated content directly led to this harm, and the regulatory response confirms the incident's significance. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's misuse in news reporting.

Netizens flag AI video of Myanmar earthquake in news broadcast; NCC: FTV misused it and has issued a correction

2025-03-31
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The event describes a news broadcaster's use of AI-generated video content that was mistaken for real footage in a news report, leading to misinformation. This misinformation harms the public's right to accurate information, a form of harm to communities. The AI system's misuse (use of AI-generated content without proper verification) directly contributed to this harm. The broadcaster's apology and correction do not negate the fact that harm occurred. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm (misinformation).

News channel misuses AI-generated imagery; NCC: standard AI application guidelines expected in the third quarter

2025-04-02
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the incident concerns the use of AI-generated images in news reporting. The misuse of AI-generated content led to misinformation, which can be considered harm to communities by spreading false or misleading information. Although the broadcaster took corrective action, the harm occurred. The NCC's planned guidelines are a response to this incident, not the incident itself. Therefore, this qualifies as an AI Incident due to the realized harm from misuse of AI-generated content in media.

Netizens flag AI video of Myanmar earthquake in news broadcast; NCC: FTV misused it and has issued a correction | Society | SETN.COM

2025-03-31
SET News
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate video content that was mistakenly presented as factual news footage, leading to misinformation. This constitutes an AI Incident because the AI-generated content caused harm by misleading the public, violating the right to accurate information (harm to communities). The broadcaster's apology and correction do not negate the fact that harm occurred. The regulatory body's involvement and corrective measures are complementary information but do not change the classification of the original event as an AI Incident.