Elderly Couple Misled by AI-Generated Video Travels for Non-Existent Cable Car Ride


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An elderly Malaysian couple traveled several hours from Kuala Lumpur to Perak after being deceived by a realistic AI-generated video promoting a fictional cable car attraction. The incident highlights the emotional distress and wasted resources caused by AI-driven misinformation, raising concerns about the vulnerability of individuals to deepfake content.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a realistic fake video that misled people, causing emotional harm and misinformation. The couple traveled based on the AI-generated content, which directly led to their disappointment and confusion. This fits the definition of an AI Incident as the AI system's use directly led to harm to individuals (harm to communities or individuals through misinformation and emotional distress). The event is not merely a potential hazard or complementary information, but a realized harm caused by AI-generated content.[AI generated]
AI principles
Transparency & explainability; Safety; Human wellbeing; Accountability; Robustness & digital security

Industries
Media, social platforms, and marketing; Travel, leisure, and hospitality

Affected stakeholders
General public

Harm types
Psychological; Economic/Property

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


AI reporter sends couple on imaginary adventure

2025-07-04
Daily Express Sabah
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video that misled people, causing emotional harm and misinformation. The couple traveled based on the AI-generated content, which directly led to their disappointment and confusion. This fits the definition of an AI Incident as the AI system's use directly led to harm to individuals (harm to communities or individuals through misinformation and emotional distress). The event is not merely a potential hazard or complementary information, but a realized harm caused by AI-generated content.

Couple travels across country for cable car ride - only to find out it was AI

2025-07-04
The Independent
Why's our monitor labelling this an incident or hazard?
The AI system's use here is the generation of a realistic but fake video that misled people. While the couple was deceived and traveled unnecessarily, the event does not involve injury, violation of rights, disruption of infrastructure, or significant harm to property or communities. Authorities have not reported any fraud or public disorder resulting from the video. The event mainly illustrates the societal impact of AI-generated misinformation and the responses by local authorities and the public. Thus, it fits the definition of Complementary Information, as it provides context and updates on AI's societal effects without constituting a direct or plausible harm incident or hazard.

Elderly duo travel across Malaysia for cable car ride seen in clip - only to find out it was all AI

2025-07-03
The Straits Times
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating a realistic but false video directly led to the couple's misguided travel and emotional distress, constituting harm to individuals (a form of harm to persons). Although the harm is non-physical and limited to misinformation and emotional impact, it is a direct consequence of the AI-generated content. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to people through misinformation and deception.

Elderly Couple Travels 6 Hours To Try Non-Existent Cable Car Ride From Viral AI Video

2025-07-04
SAYS
Why's our monitor labelling this an incident or hazard?
The AI system generated a realistic fake video that directly misled people, causing them to take actions based on false information. The elderly couple traveled six hours and experienced disappointment and embarrassment due to the AI-generated misinformation. This is a clear case where the AI system's use led to realized harm (emotional distress, wasted resources), fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

M'sian elderly couple duped by AI video, travelled 4.5hrs from KL to Perak for fake cable car ride

2025-07-02
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video that caused the elderly couple to undertake a long journey based on false information. The harm is realized as the couple was deceived, experienced emotional distress, and wasted time and resources. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons (the elderly couple) and harm to communities (through spreading misinformation). The event is not merely a potential risk but a realized harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI-generated content is central to the incident.

Elderly couple travels to Perak for non-existent cable car, duped by AI-generated video

2025-07-02
thesun.my
Why's our monitor labelling this an incident or hazard?
The AI system generated a fake video that was convincing enough to deceive the elderly couple, leading them to undertake a futile trip. This constitutes harm to persons (emotional distress, wasted time and money) and harm to communities (spread of misinformation). The AI system's use directly caused this harm, fitting the definition of an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm caused by AI-generated content.

Elderly couple wanted to take a cable car ride they saw in a video. Only problem, it wasn't real

2025-07-10
WION
Why's our monitor labelling this an incident or hazard?
An AI system was involved in creating a realistic but fake video that misled the couple. The harm stems from the couple's reliance on the AI-generated content, resulting in wasted time, travel expenses, and emotional distress. This fits the definition of an AI Incident, as the AI system's use led to harm to persons (emotional and financial).

Cable car dreams: how an AI hoax led an elderly couple on a wild goose chase in Malaysia

2025-07-07
IOL
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating realistic but false video content that misled people. Although no physical harm, legal violation, or property damage occurred, the AI-generated hoax directly caused the couple to undertake a futile trip, constituting harm to individuals through deception and misinformation. This fits the definition of an AI Incident because the AI system's use directly led to harm (emotional distress and wasted resources).

Couple travels across Malaysia for cable car ride, only to discover it was created by AI

2025-07-07
VnExpress International
Why's our monitor labelling this an incident or hazard?
An AI system generated a realistic but false video that directly misled individuals, causing them to travel based on fabricated information. This deception constitutes harm to the individuals (wasted time, resources, emotional distress) and to the community by spreading misinformation. The AI's role is pivotal as the video would not exist without the AI generation. The police investigation confirms the video is fabricated by AI, and the public confusion and potential for similar incidents further support classification as an AI Incident rather than a mere hazard or complementary information.

Couple travels 300 km to visit a tourist spot that does not exist, gets fooled by AI-generated viral video

2025-07-10
mint
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is in generating a fake video that misled people. The couple was deceived and traveled unnecessarily, which is a form of harm (emotional and inconvenience). However, the harm is not severe or legally significant (no injury, no rights violation, no critical infrastructure disruption). The article focuses on the viral AI-generated content causing confusion and the need for public awareness, which fits the description of Complementary Information. There is no indication that the AI system malfunctioned or was misused in a way that caused direct or indirect significant harm. The event does not describe a plausible future harm scenario beyond the existing misinformation. Hence, it is not an AI Incident or AI Hazard but Complementary Information about the societal implications of AI-generated content.

Malaysia couple journey to tour site after watching AI-video, unaware it's unreal

2025-07-10
South China Morning Post
Why's our monitor labelling this an incident or hazard?
An AI system generated a realistic video depicting a fictional tourist attraction, which the couple believed to be real. While this caused them to travel unnecessarily, the incident does not meet the threshold for AI Incident since no injury, rights violation, or significant harm occurred. It also does not qualify as an AI Hazard because harm has already occurred (albeit minor and not fitting the harm categories). The event is best classified as Complementary Information illustrating the societal impact and risks of AI-generated misinformation without resulting in significant harm.

Elderly duo travel across Malaysia for cable car ride seen in clip - only to find out it was all AI

2025-07-10
The Nation Thailand
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fabricated video that directly misled people, causing them to take unnecessary travel actions based on false information. This constitutes harm to individuals through misinformation and confusion, which can be considered harm to communities or individuals. Although no physical injury or legal violation is reported, the AI-generated content caused real-world consequences. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated misinformation leading to confusion and potential emotional distress.

Elderly Couple Travels Across Country To Enjoy Cable Car Ride, Turns Out To Be AI-Generated

2025-07-11
NDTV
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake news video that caused the elderly couple to travel 370km unnecessarily, directly leading to harm in the form of wasted resources and emotional distress. The AI-generated content was realistic enough to deceive viewers, including the couple, and the authorities had to intervene to clarify the misinformation. This fits the definition of an AI Incident because the AI system's use directly led to harm to individuals and communities through misinformation and deception. Although the harm is non-physical, it is significant and clearly articulated.

All Fake! Elderly Malaysian Couple Misled by 'AI Video' Treks 300km on a Trip, Only to Be Heartbroken

2025-07-11
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic but fake video that misled people into believing in a non-existent tourist attraction. This directly caused harm to the elderly couple who undertook a long journey based on the false information, resulting in emotional distress and wasted time and resources. The AI system's role in producing the deceptive content is pivotal to the harm experienced, fitting the definition of an AI Incident due to harm to persons and communities through misinformation and deception.

Video: Seeing Isn't Believing! Duped by an AI Video into Visiting a 'Fictional Attraction', Elderly Couple Spends 60,000 on a Chartered Car for Nothing

2025-07-10
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article describes an AI system generating a completely fabricated promotional video for a tourist attraction that does not exist. The AI-generated content directly caused harm by misleading people to spend money and time on a false destination, fulfilling the criteria for an AI Incident due to realized harm (financial loss and emotional distress). The AI system's use in creating deceptive content is central to the incident.

Real or Fake? Spending 60,000 to Visit a 'Non-Existent Attraction': AI-Fabricated Video All Too Realistic

2025-07-11
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate highly realistic but fabricated video content that misled people into believing in a non-existent tourist attraction. This misuse of AI directly caused harm by inducing financial loss and emotional distress to the elderly couple and others similarly affected. The AI-generated content's role is pivotal in causing this harm, fulfilling the criteria for an AI Incident under the definitions provided.

AI Clip Fabricates Tourist Spot: Elderly Malaysian Couple Only Learns of the Deception After Arriving

2025-07-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating realistic but false content that misled individuals, causing them to undertake unnecessary travel. This constitutes harm to individuals through deception and potential emotional distress, which can be considered harm to persons. Although no physical injury or legal violation has been reported, the AI-generated misinformation directly led to the couple being deceived and inconvenienced. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Heartbroken Elderly Couple Duped! After Seeing a Reporter 'Interview' at the Cable Car and Mountain-View Restaurant, They Spent 60,000 on a Chartered Car, Only to Learn It Was AI-Generated

2025-07-11
Nextapple
Why's our monitor labelling this an incident or hazard?
The AI system generated a realistic fake video that caused the elderly couple to be deceived into spending money and traveling unnecessarily, resulting in emotional distress and financial loss. This constitutes harm to persons (emotional and financial harm) and harm to communities (misinformation). The AI system's use directly led to this harm, qualifying the event as an AI Incident rather than a hazard or complementary information.

Reporter 'Interviews' at Cable Car with Stunning Mountain Views, but 'the Video Was AI'! Couple Spends 60,000 on a Chartered Car Before Discovering the Deception

2025-07-12
Nextapple
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic fake video presenting a fabricated tourist attraction. The couple relied on this AI-generated content, resulting in direct harm: wasted money and emotional distress upon discovering the deception. This fits the definition of an AI Incident because the AI system's use directly led to harm to people (emotional and financial), fulfilling criteria (a) injury or harm to persons and (e) other significant harms where AI's role is pivotal.

Malaysian Couple Tricked by AI Video Travel Hours to Fake Tourist Attraction

2025-07-14
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate a realistic but entirely fabricated video that misled viewers into believing in a fake tourist attraction. The couple's decision to travel and spend money was directly influenced by the AI-generated content, causing realized harm (financial loss and emotional upset). This fits the definition of an AI Incident because the AI system's use directly led to harm to people (financial and emotional), fulfilling criteria (a) and (e) of the AI Incident definition.

Couple Saw 'Exciting' Cable Car Ride On Social Media, Travelled Over 300 Km. Then Found Out...

2025-07-12
News18
Why's our monitor labelling this an incident or hazard?
The AI system's use here is the generation of a fake video that misled people. While this caused the couple to travel unnecessarily, no direct or indirect harm as defined (injury, rights violation, disruption, or significant harm) occurred. The event illustrates risks of AI-generated misinformation but does not document an incident or a plausible future harm scenario causing significant damage. The authorities' response urging verification and potential legal action is a governance response, making this Complementary Information rather than an Incident or Hazard.

Malaysian Couple Fooled By AI Generated Video Of Fake Tourist Spot

2025-07-15
Lowyat.NET
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating a fake video directly led to the couple being misled and harmed by wasting time and resources. This fits the definition of an AI Incident as the AI system's use has directly led to harm (emotional and economic) to persons. The event is not merely a potential hazard or complementary information, but a realized harm caused by AI-generated content.

AI travel videos are getting so real, people are falling for fake attractions

2025-07-15
Phone Arena
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Veo 3 model) was used to create a fabricated video that deceived viewers, leading to real-world harm to the Malaysian couple who traveled based on false information. This constitutes an AI Incident because the AI-generated content directly caused harm to people and communities through misinformation and deception.

Shock as Elderly Couple Travel 300km to Visit 'Viral' Tourist Spot -- Only to Learn It Was AI-Generated

2025-07-14
International Business Times UK
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fake video of a tourist attraction, which directly misled people into believing in a false reality. The elderly couple's journey and subsequent shock represent harm to individuals (emotional distress and wasted resources). The police warnings highlight the societal impact of such AI-generated misinformation. The AI system's use in fabricating the video is central to the harm, fulfilling the criteria for an AI Incident involving harm to communities and individuals through misinformation.

Elderly couple travels for hours just to find tourist attraction was AI-generated

2025-07-15
Cybernews
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a highly realistic but fake video depicting a tourist attraction and a reporter, which directly misled people. The elderly couple's travel and subsequent confusion constitute harm to individuals (emotional distress and wasted time/resources). This fits the definition of an AI Incident because the AI system's use directly led to harm to people. The police response and social media attention are complementary but the core event is the realized harm caused by the AI-generated content.

Retired Couple Pays for the Trip of Their Lives but Is Deceived by an AI-Made Video: Why Do They Do This to People?

2025-07-17
EL UNIVERSO
Why's our monitor labelling this an incident or hazard?
An AI system (Google's generative AI engine Veo3) was used to create a realistic but fake video of a tourist attraction that does not exist. The couple relied on this AI-generated content to plan their trip, resulting in them being defrauded and suffering harm (financial loss and emotional distress). The AI system's use directly contributed to the harm experienced by the couple, fulfilling the criteria for an AI Incident involving harm to persons and communities through deception and fraud.

AI Is Now Fooling Even Tourists: This Couple Thought They Were Traveling to an Idyllic Destination That Never Existed

2025-07-18
El Español
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a deepfake video that falsely advertised a tourist attraction, which directly caused harm to the couple who traveled based on this misinformation. The harm includes financial loss (travel expenses) and emotional distress, fulfilling the criteria of harm to persons and communities. The AI system's use directly led to this harm, making it an AI Incident rather than a hazard or complementary information.

Beware of Tourism Videos: A Couple Ends Up at a Fake Destination, Deceived by an AI

2025-07-15
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Google's Veo3 generative AI) creating deepfake video content that directly caused harm to the couple by misleading them to a non-existent tourist attraction, resulting in economic and emotional harm. The AI system's use is central to the incident, as the fabricated video was the cause of the harm. The article also references broader societal harms from AI-generated deepfakes, but the primary event is the couple's experience, which meets the criteria for an AI Incident due to realized harm caused by AI-generated misinformation.

'Deepfake Tourism' Is Here: A Couple Ends Up Visiting a Non-Existent Attraction After Seeing It in an AI-Generated Video

2025-07-17
Genbeta
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating deepfake video content that directly misled people, causing them to take real-world actions based on false information. This constitutes harm through deception and misinformation, which fits within the definition of an AI Incident as it caused harm to individuals (emotional distress, wasted resources) and communities (misinformation). The AI system's use was central to the incident, and the harm is realized, not just potential. Therefore, this is classified as an AI Incident.

The Danger of Tourist Destinations Created by Artificial Intelligence

2025-07-18
El Output
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI video generation model to create a completely fabricated tourist destination video that deceived real people, leading to realized harm (disappointment, wasted travel expenses, potential legal considerations). The AI system's use directly caused the harm by producing false content that was believed to be real. This fits the definition of an AI Incident as the AI system's use directly led to harm to people and communities through misinformation and deception. The article also discusses broader societal impacts and calls for regulation, but the core event is a realized harm caused by AI-generated disinformation.

Couple Travels 3 Hours to a Tourist Destination That Never Existed: Elderly Pair Deceived by AI-Made Video

2025-07-16
Panamericana Televisión
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI for deepfake video creation) whose output directly caused harm by deceiving people, leading to wasted time, emotional distress, and potential economic harm. The harm is realized, not just potential, as the elderly couple was misled and emotionally affected. The article also references other cases of financial fraud caused by AI deepfakes, reinforcing the classification. Hence, this is an AI Incident as per the definitions provided.

Two Retirees Travel 400km for a Cable Car Ride and Discover the AI Deception Too Late: "It Was Exciting"

2025-07-18
as
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate a realistic but fake video that misled viewers. While the couple was deceived and traveled a long distance unnecessarily, there is no evidence of direct or indirect harm as defined by injury, rights violations, or property damage. The event highlights risks of AI-generated misinformation but does not document realized harm or legal breaches. Therefore, it constitutes an AI Hazard, as the AI-generated content could plausibly lead to harm (e.g., fraud or public disorder) if such misinformation spreads or is used maliciously, but no such harm has materialized yet.

They Saw a Video of a Tourist Cable Car, Went to Visit It, and It Turned Out to Be an AI Invention

2025-07-18
La Nacion
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a deepfake video that falsely depicted a tourist attraction, leading to real-world harm in the form of deception and wasted resources by the tourists. The AI-generated content directly caused the harm by misleading people. This meets the criteria for an AI Incident because the AI system's use directly led to harm (emotional distress, wasted time and resources) and misinformation affecting the community. The article also discusses broader societal impacts and legal responses, but the core event is an AI Incident due to realized harm from AI-generated deceptive content.

They Traveled to See a "Tourist" Cable Car They Saw on TikTok, but It Didn't Exist: It Turned Out to Be AI

2025-07-19
Primera Hora
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake video generation) was used to create false content that directly misled people, causing them to travel unnecessarily and experience disappointment. This constitutes harm to individuals (emotional and economic), which fits within the scope of AI Incident as the AI system's use directly led to harm. Although the harm is not physical injury or legal rights violation, the framework includes harm to people or groups, and the misleading nature of AI-generated content causing real-world consequences qualifies as an AI Incident rather than a hazard or complementary information.

Couple Travels 300 km to See a Cable Car and Discovers the Attraction Never Existed

2025-07-23
Correio
Why's our monitor labelling this an incident or hazard?
The AI system was used to create a realistic but fake video of a cable car attraction that does not exist. The couple relied on this AI-generated content and traveled a long distance, suffering inconvenience and emotional harm. This constitutes an AI Incident because the AI-generated misinformation directly led to harm to the people involved. The harm is not hypothetical or potential but has already occurred, fulfilling the criteria for an AI Incident under harm to people and communities.

AI-Made Video Leads Tourists to a Non-Existent Destination; Here's How to Protect Yourself

2025-07-22
Canaltech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic but false video that misled tourists, causing them to travel unnecessarily and suffer a loss (harm to persons). The AI's role is pivotal as the video content was entirely AI-generated, including characters and scenes, leading directly to the harm. Although the harm is non-physical, it is a clear and direct consequence of the AI system's use. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Couple Travels Nearly 400 km and Discovers the Tourist Attraction in the Video Was Made by AI

2025-07-20
TecMundo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a generative AI system creating a deepfake video that misled people, causing them to travel based on false information. This is a direct use of AI leading to harm (deception, wasted resources, emotional distress). The harm is realized, not just potential, as the couple acted on the AI-generated content. The incident also highlights risks of misinformation from AI-generated media. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Couple Deceived by AI After Traveling to See a Tourist Spot That Doesn't Exist

2025-07-23
Jornal Estado de Minas | Notícias Online
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fake video of a non-existent tourist attraction, which directly misled people into traveling unnecessarily. This constitutes harm to the individuals involved (emotional harm, wasted resources) and harm to the community by spreading misinformation. The AI's role was pivotal in creating the false narrative that caused the harm. Therefore, this qualifies as an AI Incident under the definition of harm to communities and individuals caused by AI-generated misinformation.

Couple Travels to See a Tourist Attraction in Malaysia Created by AI

2025-07-22
33giga.com.br
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate a realistic but fictitious video that misled people. However, the article states no financial loss, injury, or other harm occurred. The event highlights the potential for AI-generated misinformation to cause harm, but since no harm has materialized, it is best classified as an AI Hazard. The authorities' warnings about possible future legal actions reinforce the plausible risk of harm from such AI-generated content.