Google Maps and Waze Alter Routes After Tourists Harmed in South Africa


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered navigation apps Google Maps and Waze directed tourists through Cape Town’s high-crime Nyanga area, resulting in shootings, robbery, and a fatality. Following these incidents, Google is removing these routes and labeling them as high-risk to prevent further harm, highlighting the risks of AI-driven routing in unsafe regions.[AI generated]

Why's our monitor labelling this an incident or hazard?

Google Maps uses AI algorithms to generate navigation routes. In this case, the AI system's routing led tourists into a violent area where they were attacked, causing injury and death. The AI system's use directly led to harm, fulfilling the criteria for an AI Incident. The removal of the route and addition of alerts are responses to the incident but do not change the classification of the event itself.[AI generated]
AI principles
Safety; Robustness & digital security; Transparency & explainability; Accountability; Respect of human rights; Human wellbeing; Fairness

Industries
Mobility and autonomous vehicles; Travel, leisure, and hospitality; Consumer services

Affected stakeholders
Consumers

Harm types
Physical (death); Physical (injury); Economic/Property; Psychological

Severity
AI incident

Business function
Other

AI system task
Organisation/recommenders; Goal-driven organisation; Forecasting/prediction


Articles about this incident or hazard


Route removed from Google Maps after tourists shot driving through South Africa

2023-11-15
The Independent
Why's our monitor labelling this an incident or hazard?
Google Maps uses AI algorithms to generate navigation routes. In this case, the AI system's routing led tourists into a violent area where they were attacked, causing injury and death. The AI system's use directly led to harm, fulfilling the criteria for an AI Incident. The removal of the route and addition of alerts are responses to the incident but do not change the classification of the event itself.

Google Maps removes crime hotspot Nyanga from its routes | News24

2023-11-14
News24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Maps navigation) whose use previously led to harm (shootings of tourists directed through Nyanga). The removal of Nyanga from routes is a response to these harms, indicating the AI system's role in the incidents. Since harm has already occurred due to the AI system's routing, this qualifies as an AI Incident. The article describes direct harm linked to the AI system's use and the subsequent mitigation action.

Google Maps to erase dangerous routes in South Africa after spate of attacks on tourists

2023-11-14
The Sunday Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Maps) whose routing algorithms directed tourists into dangerous areas, resulting in physical harm (shootings). The harm is directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The company's response to modify the system to remove dangerous routes is a mitigation step but does not change the classification of the original harm caused.

Google Maps will dodge South Africa's crime hotspots

2023-11-14
MyBroadband
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Google Maps and Waze navigation apps) that generate route recommendations. The modification to avoid crime hotspots is a use of AI to influence navigation decisions. While no harm has yet occurred due to this change, the system's use is intended to reduce harm by avoiding dangerous areas. This is a proactive safety feature, indicating a plausible future reduction of harm rather than an incident of harm or a hazard of harm. Therefore, this is best classified as Complementary Information about an AI system's development and governance response to safety concerns, rather than an AI Incident or AI Hazard.

My robbery nightmare in Nyanga, Cape Town, directed by Google Maps

2023-11-16
Daily Maverick
Why's our monitor labelling this an incident or hazard?
Google Maps is an AI system that provides route recommendations based on various data inputs. In this case, the AI system suggested a route through a known dangerous area, which directly led to the user being attacked and injured. The harm is realized and directly linked to the AI system's use, as the victim was following the AI's directions. Therefore, this qualifies as an AI Incident due to the AI system's role in causing physical harm through its routing recommendation.

South Africa: Google Maps Will No Longer Direct Visitors Through Cape Town Township After Attacks On Motorists

2023-11-14
allAfrica
Why's our monitor labelling this an incident or hazard?
Google Maps uses AI to determine optimal routes based on various data inputs. The app's prior routing through high-crime areas directly led to harm (the shooting and robbery of a tourist). This constitutes an AI Incident because the AI system's use led to injury and harm to a person. The event involves the AI system's use and its impact on user safety, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Apple users RISK their safety when visiting Cape Town

2023-11-16
The South African
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is reasonably inferred as both Apple Maps and Google Maps use AI-based routing algorithms to recommend navigation routes. The harm is direct and materialized, as tourists have been harmed due to following unsafe routes recommended by these AI systems. Apple's failure to adjust its AI routing to avoid dangerous areas despite requests constitutes a failure in the use of the AI system that has directly led to injury and harm to persons. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and realized harm to people.

Google to avoid dangerous routes in South Africa

2023-11-14
Africanews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Google Maps and Waze) whose routing algorithms directly influenced users to travel through dangerous neighborhoods, resulting in actual harm (robbery, shooting, and death). The AI system's use indirectly led to injury and death, fulfilling the criteria for an AI Incident. The company's response to modify routing to avoid these areas is a mitigation measure but does not change the fact that harm occurred due to the AI system's prior routing guidance.

Apple ignores CoCT request for Maps routes to avoid unsafe areas

2023-11-15
CapeTown ETC
Why's our monitor labelling this an incident or hazard?
Mapping applications like Apple Maps and Google Maps use AI systems to generate route recommendations based on various data inputs. The routing AI's outputs have indirectly led to harm by directing users through crime hotspots where tourists have been attacked. This constitutes indirect harm to persons (injury and harm to health). The City's request to Apple to change routing algorithms to avoid unsafe areas is a response to this harm. Since harm has already occurred due to AI system routing decisions, this qualifies as an AI Incident. The article describes realized harm linked to AI system use, not just potential harm or general information, so it is not an AI Hazard or Complementary Information.

Google Maps and Waze to stop directing commuters through Nyanga

2023-11-14
CapeTown ETC
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Google Maps and Waze navigation algorithms) whose routing recommendations have indirectly led to harm (tourists being attacked after following suggested routes). The change to stop recommending these routes is a response to realized harm caused indirectly by the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons (tourists).

Private sector steps in due to government incompetence in Tourism

2023-11-17
The Democratic Alliance
Why's our monitor labelling this an incident or hazard?
The AI systems (GPS navigation apps) are involved in the use phase, providing route designations to avoid crime hotspots. There is no indication that the AI systems caused or contributed to harm; rather, they are helping to prevent harm. The article focuses on government incompetence and private sector responses, with AI systems playing a supportive role. This fits the definition of Complementary Information, as it provides context and updates on AI use in societal safety without describing a new AI Incident or Hazard.

Private sector steps in due to govt incompetence in tourism - Manny de Freitas

2023-11-17
Moneyweb
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-powered GPS navigation applications that identify and label certain routes as high-risk due to crime, directly influencing tourist safety. The shooting and robbery of a tourist is a realized harm linked to the AI systems' earlier routing, and the labelling of high-risk routes is a response to that harm, fitting the definition of an AI Incident. The AI systems' involvement is in their use phase, influencing decisions that affect physical safety, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Google Maps and Waze to stop suggesting notorious Nyanga, Borcherds Quarry routes

2023-11-16
Daily Voice
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Google Maps and Waze) whose routing algorithms have indirectly led to harm by recommending routes that exposed users to crime. The harm (physical injury and death) has already occurred due to following these AI-generated routes. The decision to remove these routes from recommendations is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons.