ChatGPT visa error strands Australian author at airport

Mark Pollard, an Australian marketing strategist, was stranded at a Chilean airport and missed his conference after ChatGPT wrongly informed him he didn’t need a visa. The mishap highlights generative AI’s fallibility when relied on for critical travel advice, showing how inaccurate AI outputs can directly derail users’ plans.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that ChatGPT, an AI system, gave false information about visa requirements, which directly caused the author to be blocked at the airport and miss his conference. This is a clear example of realized harm to a person resulting from the use of an AI system's output, directly linked to the AI's malfunction (providing incorrect information). Therefore, this qualifies as an AI Incident under the framework.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Safety; Human wellbeing

Industries
Travel, leisure, and hospitality; Consumer services

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological; Reputational

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

"Pour les choses très importantes, je n'utiliserai plus ChatGPT": un Australien bloqué à l'aéroport à cause d'une erreur du chatbot

2025-03-28
BFMTV
An Australian stranded at the airport after erroneous advice from ChatGPT | AbidjanTV.net

2025-03-29
AbidjanTV.net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose incorrect response directly led to tangible harm: the individual missed his flight and conference. This constitutes harm to a person, as it disrupted his professional activities and caused inconvenience and potential financial or reputational damage. The AI system's malfunction (providing incorrect information) is a direct contributing factor to the harm. Therefore, this qualifies as an AI Incident under the framework.
ChatGPT ruins a traveler's trip: he found himself stuck at the airport without a visa. What rotten luck!

2025-03-28
Clubic.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) providing incorrect information that directly led to the traveler being blocked at the airport without a visa, causing harm to the individual (missed conference, embarrassment). This fits the definition of an AI Incident as the AI's use directly led to harm to a person.
ChatGPT assures him he doesn't need a visa; he ends up stuck at the airport

2025-03-28
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) providing incorrect information that directly led to a tangible harm: the traveler was unable to enter Chile and attend a conference. This constitutes harm to the person (travel disruption and potential financial or professional loss). The AI system's use and its erroneous output are directly linked to the incident. Therefore, this qualifies as an AI Incident.
ChatGPT gives him false information; he ends up stuck at the airport and misses his conference

2025-03-28
Ouest France
Why's our monitor labelling this an incident or hazard?
The event describes a clear case where the use of an AI system (ChatGPT) provided false information that directly caused harm to a person by preventing entry into a country and missing an important event. The harm is realized and directly linked to the AI's output. Although the harm is not physical injury, it is a significant personal and professional disruption, fitting the definition of harm to a person. Hence, this is an AI Incident rather than a hazard or complementary information.
ChatGPT tells him he doesn't need a visa; he remains stuck at the airport

2025-03-28
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves ChatGPT, an AI system, providing inaccurate visa information that the user trusted, resulting in the user being blocked at the airport. This harm to a person is a direct consequence of the AI system's use. The harm is realized, not merely potential, and the AI system's role is pivotal in the chain of events. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.