AI-Generated Travel Advice Sends Tourists to Non-Existent and Dangerous Destinations in Peru

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Travelers using AI tools such as ChatGPT for trip planning have been directed to non-existent destinations, including the fabricated 'Sacred Canyon of Humantay' in Peru. This misinformation has left tourists stranded in remote areas, facing financial loss and physical danger from hazardous conditions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (ChatGPT and similar large language models) used for travel planning. The AI's outputs included fabricated locations ('Sacred Canyon of Humantay') that do not exist, causing tourists to end up in unsafe or meaningless places, incurring financial loss and risking physical harm due to environmental dangers like high altitude without proper preparation. This constitutes direct harm to people (harm to health and safety) caused by the AI system's use and its hallucinations. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Safety, Human wellbeing

Industries
Travel, leisure, and hospitality

Affected stakeholders
Consumers

Harm types
Physical (injury), Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

They plan trips via ChatGPT and end up in places that don't exist | Η ΚΑΘΗΜΕΡΙΝΗ

2025-09-29
Η Καθημερινή
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and similar large language models) used for travel planning. The AI's outputs included fabricated locations ('Sacred Canyon of Humantay') that do not exist, causing tourists to end up in unsafe or meaningless places, incurring financial loss and risking physical harm due to environmental dangers like high altitude without proper preparation. This constitutes direct harm to people (harm to health and safety) caused by the AI system's use and its hallucinations. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
When artificial intelligence sends you to the Eiffel Tower... in Beijing

2025-09-30
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like ChatGPT) whose use has directly led to harm or risk of harm to people (travelers) through misinformation and hallucinated content. The incidents described include actual harm or near-harm situations caused by reliance on AI-generated travel advice, fulfilling the criteria for an AI Incident. The discussion of regulatory and societal responses serves as complementary context but does not overshadow the primary focus on realized harm from AI use.
Artificial intelligence sends travelers to places that don't exist

2025-09-30
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Microsoft Copilot, Google Gemini, Layla) being used for travel planning and generating false or misleading information that has directly led to harm or risk of harm to people (e.g., travelers stranded on a mountain without a way down, tourists sent to non-existent places). This constitutes indirect harm to the health and safety of persons, fulfilling the criteria for an AI Incident. The AI systems' use and malfunction (providing inaccurate outputs) are central to the event. Therefore, this is classified as an AI Incident.
Artificial intelligence: useful for travelers, but... it hides dangers

2025-09-30
Sigma Live
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Microsoft Copilot, Google Gemini, Layla) being used for travel planning and providing false information that led to travelers being misled to non-existent or unsafe destinations. This misinformation has directly led to harm or risk of harm to people (e.g., being stranded on a mountain without a way down, traveling to non-existent places). Therefore, the event qualifies as an AI Incident because the AI system's use has directly or indirectly caused harm to people, fulfilling the criteria for harm to health or safety and harm to communities.
The dangers of letting artificial intelligence plan your next trip

2025-09-30
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, AI travel planners) whose use has directly led to harm or risk of harm to people (travelers) through misinformation and hallucinated content. Examples include travelers being misled to non-existent destinations, being stranded due to incorrect timing information, and receiving incoherent or impossible travel routes. These outcomes constitute harm to persons and communities, fulfilling the criteria for an AI Incident. The article also discusses the nature of AI hallucinations and the challenges in verifying AI-generated information, reinforcing the direct link between AI use and harm. Hence, the classification as AI Incident is appropriate.
The dangers of letting artificial intelligence plan your next trip

2025-10-01
KontraNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, AI travel planners) used in travel planning that have produced false information leading to real-world harm, such as travelers being misled to non-existent locations or unsafe conditions. The harms include physical risk to health and safety, financial loss, and misinformation. The AI's role is pivotal as the source of the misleading information. The harms are realized, not just potential, making this an AI Incident rather than a hazard or complementary information. The article also discusses the nature of AI hallucinations and the difficulty in verifying AI outputs, reinforcing the direct link between AI use and harm.
Why you should... not plan trips with the help of artificial intelligence | Protagon.gr

2025-10-02
Protagon.gr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models and generative AI travel planners) whose use has directly led to harm to people (travelers) through misinformation causing financial loss, inconvenience, and potential physical risk. The AI systems' malfunction or limitations (hallucinations) are central to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI's use has directly led to realized harm to persons and communities through misleading travel guidance.
Tourism: the dangers of letting AI organize your next trip - BBC News Brasil

2025-10-04
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like ChatGPT and AI-generated travel planning tools) whose outputs have directly misled users, resulting in physical risk (e.g., being stranded at high altitude without oxygen or proper guidance) and other harms such as wasted resources and frustration. The AI's malfunction in generating false or fabricated information ('hallucinations') is a direct cause of these harms. The presence of realized harm linked to AI use in travel planning meets the criteria for an AI Incident rather than a hazard or complementary information. The article also discusses the broader implications and regulatory responses, but the primary focus is on actual harms caused by AI-generated misinformation in travel contexts.
The dangers of letting AI organize your next trip

2025-10-04
Terra
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Microsoft Copilot, Google Gemini, and AI-based travel planning tools) generating false or hallucinated information that misleads travelers. These AI outputs have directly led to harmful outcomes, including tourists paying for nonexistent destinations, being stranded in unsafe locations, and facing risks related to altitude and accessibility. The harms described include potential injury or harm to health and safety, fulfilling the criteria for an AI Incident. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI systems' erroneous outputs in travel planning contexts.
The dangers of letting AI organize your next trip

2025-10-04
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models and generative AI tools) used in travel planning. It documents concrete cases where AI-generated misinformation led to travelers being misled, stranded, or exposed to unsafe conditions, which constitutes harm to health and safety. The AI's malfunction or inherent limitations (hallucinations) directly caused these harms. The article also discusses the difficulty in distinguishing AI-generated falsehoods from facts, reinforcing the AI system's pivotal role in the incidents. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Traveling with AI? See the risks of letting the technology plan your trip

2025-10-05
Jornal Estado de Minas | Notícias Online
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (large language models like ChatGPT) being used to plan travel, resulting in misinformation that caused tourists to be misled, stranded, or exposed to potential physical harm (e.g., high altitude without preparation). This constitutes direct harm to people (harm to health and safety) caused by the AI system's outputs. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to realized harm or dangerous situations for individuals.
The dangers of letting AI organize your next trip

2025-10-04
agazeta.com.br
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Google Gemini, Layla) generating false travel itineraries and information that tourists relied upon, resulting in real-world negative consequences such as being stranded without transport, traveling to non-existent destinations, and exposure to dangerous conditions. These outcomes constitute harm to persons and communities. The AI systems' malfunction (hallucination and misinformation) is a direct cause of these harms. Hence, this qualifies as an AI Incident under the OECD framework, as the AI system's use and malfunction have directly led to harm.
Tourists paid 160 dollars to visit the "Sacred Canyon of Humantay", a destination created by AI

2025-10-08
BioBioChile
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating false travel destinations and erroneous itineraries directly led to harm to people (tourists) through deception, financial loss, and potential physical danger. This fits the definition of an AI Incident because the AI's outputs caused realized harm to individuals. The event involves the use of AI systems for travel planning, and the harm is direct and materialized, not just potential. Therefore, it is classified as an AI Incident.
AI invents tourist destinations and puts travelers at risk

2025-10-07
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like ChatGPT) used in travel planning that have produced fabricated or inaccurate information leading to real-world harm or risk to travelers. The harms include physical danger from following false itineraries and financial risks from fraudulent AI-generated platforms. The AI's malfunction or limitations in providing accurate, up-to-date information have directly contributed to these harms. Hence, this is a clear case of an AI Incident as per the definitions provided.
Tourism on alert: experts advise using artificial intelligence only as an "initial guide"

2025-10-08
El Litoral
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT and other generative AI) providing false or misleading travel information that caused tourists to pay for nonexistent destinations, become stranded in dangerous locations, or follow unsafe itineraries. These outcomes constitute harm to persons and communities. The AI systems' malfunction or limitations in providing accurate, real-time travel data are central to these incidents. The article also notes fraudulent uses of AI to create fake travel platforms, further contributing to harm. Since actual harm has occurred due to the AI systems' outputs and use, this qualifies as an AI Incident rather than a hazard or complementary information.
Experts warn that planning trips with AI applications is risky for tourists: "The system finds what it wants and starts to believe it"

2025-10-08
Rosario3
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as chatbots and conversational models used for travel planning. The harms described include travelers being misled to nonexistent locations, incorrect schedules causing them to be stranded, and potential physical risks in remote areas. These harms have already occurred as a direct consequence of relying on AI-generated information. The article highlights multiple concrete examples of such incidents, confirming realized harm rather than hypothetical risk. Hence, this is an AI Incident due to direct harm caused by AI system outputs.
AI-created destinations confuse millions of tourists

2025-10-08
Almomento | Noticias, información nacional e internacional
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating false travel destinations and inaccurate travel details, which have caused real harm to tourists, including financial loss and physical risk (e.g., being stranded at high altitude without oxygen or phone signal). This constitutes direct harm caused by the use of AI systems, fulfilling the criteria for an AI Incident under harm to persons and communities. Therefore, the event is classified as an AI Incident.
AI invents tourist destinations and puts travelers at risk

2025-10-08
eldia.com.bo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like ChatGPT) used in travel planning. The AI's outputs have directly caused harm by misleading travelers into dangerous or impossible situations, which constitutes injury or harm to persons. The harm is realized, not just potential, as tourists have been stranded or misled. The article also mentions fraudulent AI-generated travel sites posing financial risks. Hence, this is an AI Incident due to direct harm caused by AI system use.
AI bursts into the world of tourism and puts travelers at risk

2025-10-08
El Observador Mexico
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like ChatGPT) used in travel planning that have directly caused harm by providing false or misleading information leading to dangerous situations for travelers. The harms are realized and documented, including physical risk and financial fraud. The AI's malfunction or limitations in providing accurate, real-time travel data are central to the incidents described. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Tourists deceived by AI: they paid for a destination that never existed in the Peruvian Andes - Diario Cambio 22 - Península Libre

2025-10-08
Diario Cambio 22 - Península Libre
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots like ChatGPT) used for travel planning. The AI's outputs have directly led to harm (tourists being stranded in dangerous locations, misled to non-existent destinations, or sent to closed trails), fulfilling the criteria for an AI Incident. The harms include injury or risk to health (being stranded at high altitude without oxygen or signal), and harm to communities (disruption and potential danger to tourists). Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
They paid to visit a paradise spot, but when they arrived there was only a highway: the AI deception

2025-10-09
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Layla, etc.) being used to plan travel itineraries, which produced false or fabricated information (hallucinations). Tourists followed these AI-generated plans, resulting in harm such as financial loss, being stranded in dangerous locations, and disrupted travel experiences. These harms fall under injury or harm to persons and harm to communities. The AI systems' outputs were a direct contributing factor to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI use.