Waymo Self-Driving Car Kills Beloved San Francisco Bodega Cat


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Waymo autonomous vehicle in San Francisco allegedly struck and killed KitKat, a well-known bodega cat at Randa's Market. The incident, which occurred late at night, has sparked community grief and criticism over the AI system's failure to avoid the animal, highlighting concerns about autonomous vehicle safety.[AI generated]

Why's our monitor labelling this an incident or hazard?

Waymo vehicles operate using AI systems for autonomous driving, and the incident involves the AI system's use leading directly to harm (the death of the cat). Although the harm is to an animal rather than a person, the definition of an AI Incident includes harm to property, communities, or the environment; the death of a beloved community animal can be considered harm to both the community and property (the cat as the store's property). Therefore, this qualifies as an AI Incident due to the AI system's use directly causing harm.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security

Industries
Mobility and autonomous vehicles

Affected stakeholders
Other; General public

Harm types
Psychological

Severity
AI incident

Business function:
Logistics

AI system task:
Recognition/object detection; Forecasting/prediction; Goal-driven organisation; Reasoning with knowledge structures/planning

In other databases

Articles about this incident or hazard


San Francisco's Mission District mourns beloved store cat killed by Waymo

2025-10-30
CBS News
Why's our monitor labelling this an incident or hazard?
Waymo vehicles operate using AI systems for autonomous driving, and the incident involves the AI system's use leading directly to harm (the death of the cat). Although the harm is to an animal rather than a person, the definition of an AI Incident includes harm to property, communities, or the environment; the death of a beloved community animal can be considered harm to both the community and property (the cat as the store's property). Therefore, this qualifies as an AI Incident due to the AI system's use directly causing harm.

'Buy Tesla, save cats': Elon Musk dragged into KitKat controversy after San Francisco bodega pet death caused by Waymo car

2025-10-30
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system whose operation directly led to the death of the cat, KitKat. The AI system's failure to detect or appropriately respond to the presence of the cat on the street caused physical harm to the animal, which qualifies as harm to a living being (harm to property, communities, or the environment). The incident is clearly described as having occurred, not just a potential risk, so it meets the criteria for an AI Incident rather than a hazard or complementary information. The involvement of Elon Musk is social commentary and does not affect the classification.

Outrage as Google self-driving car kills beloved cat without stopping

2025-10-30
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a self-driving car, which is an AI system making autonomous driving decisions. The death of the cat is a direct harm caused by the AI system's use. The AI system's failure to avoid the cat constitutes a malfunction or operational failure leading to harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a living being and caused community distress. Therefore, the event is classified as an AI Incident.

Beloved bodega cat killed by driverless Waymo robotaxi: 'People loved him'

2025-10-30
AOL
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. The event reports that the vehicle did not slow down, swerve, or attempt to avoid the cat, indicating a failure or malfunction in the AI system's perception or decision-making. The harm (death of the cat) is a direct consequence of the AI system's operation. The incident has caused community distress and outrage, highlighting harm to the community and property (the cat). Hence, this is an AI Incident as per the definitions provided.

Beloved Bodega Cat in San Francisco Allegedly Killed by Waymo

2025-10-30
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The event involves a Waymo self-driving car, which is an AI system performing autonomous navigation. The incident resulted in the death of a cat, a direct harm caused by the AI system's failure to avoid the collision. This meets the criteria for an AI Incident as the AI system's use directly led to harm to property and community. The complaint explicitly states the vehicle did not attempt to stop, indicating malfunction or failure in the AI system's operation.

SF neighborhood mourns loss of bodega cat allegedly killed by Waymo

2025-10-30
San Francisco Gate
Why's our monitor labelling this an incident or hazard?
The article describes an autonomous vehicle (Waymo) equipped with AI that allegedly hit and killed a cat. The AI system's operation directly led to harm (the cat's death). This fits the definition of an AI Incident because the AI system's use directly caused harm to a living being. The event is not merely a potential hazard or complementary information, but a realized harm linked to AI system use.

Beloved San Francisco Bodega Cat Killed by Waymo Self-Driving Car, Neighbors Say

2025-10-30
TMZ
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system whose autonomous operation caused a fatal incident involving a community cat. The harm is direct and materialized, as the AI system's use led to the cat's death. This fits the definition of an AI Incident because the AI system's use directly led to harm to a community fixture and emotional harm to the community. Therefore, the event is classified as an AI Incident.

Beloved Bodega Cat Reportedly Killed by Driverless Waymo

2025-10-30
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves a Waymo self-driving taxi, which is an AI system operating autonomously. The vehicle's sudden jerk off course and failure to stop led directly to the death of a community animal, causing harm to the community and property (the cat). This fits the definition of an AI Incident because the AI system's use directly led to harm. The harm is materialized and not just potential, so it is not an AI Hazard. It is not merely complementary information or unrelated news.

Bodega Cat Allegedly Killed by Self-Driving Car Gets Second Life via Meme Coins - Decrypt

2025-10-30
Decrypt
Why's our monitor labelling this an incident or hazard?
The self-driving car is an AI system operating autonomously. The alleged incident where the car hit and killed the cat is a direct harm caused by the AI system's use or malfunction (failure to detect or avoid the cat). This harm to a community's beloved animal fits within the harm to communities or property category. Hence, the event meets the criteria for an AI Incident. The meme coin activity is unrelated to AI harm and is a separate social reaction.

Waymo pledges donation after beloved San Francisco corner store cat struck, killed

2025-10-31
KRON4
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a driverless car operated by Waymo, which is an AI system. The autonomous vehicle struck and killed a cat, causing harm to a community member's property (the pet) and emotional harm to the community. The AI system's use directly led to this harm. The incident is not hypothetical or potential but has already occurred, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Beloved store cat killed by Waymo

2025-10-30
Channel 3000
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Waymo's autonomous driving AI) whose use directly caused harm (the death of the cat). Although the harm is to an animal rather than a person, the definition of an AI Incident includes harm to property, communities, or the environment; the death of a community's beloved cat can be considered harm to both the community and property (the cat as the store's property). Therefore, this qualifies as an AI Incident.

Beloved Bodega Cat Allegedly Killed By Waymo In Mission District

2025-10-29
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The incident involves a Waymo autonomous vehicle, which is an AI system controlling the vehicle's navigation and operation. The vehicle allegedly ran over and killed KitKat, a beloved community cat, causing harm to the community and property (the cat). The AI system's use directly led to this harm. Although the harm is to an animal, the framework includes harm to property and communities, which applies here. The event is not speculative or potential harm but an actual incident. Hence, it is classified as an AI Incident.

Outrage as Google-run driverless car flattens beloved tabby cat without stopping

2025-10-30
lunaticoutpost.com
Why's our monitor labelling this an incident or hazard?
The incident involves a driverless car, which is an AI system performing autonomous navigation. The car's failure to stop or avoid the cat caused the cat's death, which is harm to a living being and the community. This qualifies as an AI Incident because the AI system's malfunction directly led to harm.

Waymo allegedly kills cat at San Francisco store

2025-10-28
The San Francisco Standard
Why's our monitor labelling this an incident or hazard?
The incident involves an autonomous vehicle (Waymo robotaxi), which is an AI system making real-time decisions in a physical environment. The vehicle allegedly caused the death of a cat, which is a harm to property and community (the cat was a local fixture). The harm occurred as a direct result of the AI system's use, fulfilling the criteria for an AI Incident. There is no indication that this is merely a potential hazard or complementary information; the harm has already occurred.

'Kill a Waymo, Save a Cat': Internet mourns as crypto launches KitKat coins

2025-10-30
Protos
Why's our monitor labelling this an incident or hazard?
The autonomous Waymo taxi is an AI system operating in a real-world environment. Its failure to stop and the resulting death of the cat is a direct harm caused by the AI system's use. This meets the criteria for an AI Incident as it involves injury or harm (to an animal, which is part of the community/environmental harm category). The article describes the event as having occurred, not just a potential risk, so it is not a hazard. The memecoin activity is unrelated to AI harm classification. Hence, the classification is AI Incident.

Beloved San Francisco Bodega Cat Killed by Waymo Self-Driving Car, Neighbors Say - World Byte News

2025-10-30
World Byte News
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car is an AI system whose autonomous operation caused a fatal incident involving a community cat. The harm is realized and directly linked to the AI system's use and malfunction (failure to stop). This fits the definition of an AI Incident as it caused harm to a community fixture and emotional harm to the community. Therefore, the event is classified as an AI Incident.

Waymo confirms its robotaxi killed beloved bodega cat 'KitKat' in SF: 'It darted under our vehicle'

2025-11-01
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously in a real-world environment. The incident involved the AI system's use leading directly to harm—the death of a cat. This harm affects the community and property (the cat as a living being and community fixture). The event also highlights concerns about the AI system's ability to detect small animals and manage unexpected behaviors, which is part of the AI system's operational domain. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Robot kills cat.

2025-10-31
The Verge
Why's our monitor labelling this an incident or hazard?
The robotaxi is an AI system as it operates autonomously to navigate and transport passengers. The incident describes a direct harm caused by the AI system's failure to avoid hitting the cat, which is a harm to property and community interests. The event is not merely a potential risk but a realized harm, qualifying it as an AI Incident rather than a hazard or complementary information.

Waymo acknowledges its vehicle hit a San Francisco corner store cat

2025-10-31
San Francisco Gate
Why's our monitor labelling this an incident or hazard?
The incident involves a Waymo robotaxi, an AI system, which directly caused the death of a cat by hitting it while pulling away. This is a direct harm caused by the AI system's use. The harm is materialized and not hypothetical. The event fits the definition of an AI Incident because the AI system's use led to harm to a living being, which falls under harm to property, communities, or the environment. The company's acknowledgment confirms the AI system's involvement. Hence, the classification is AI Incident.

Elon Musk speaks out as beloved San Francisco bodega cat is killed by Waymo vehicle

2025-10-31
The Independent
Why's our monitor labelling this an incident or hazard?
The incident involves a Waymo autonomous vehicle, which is an AI system, directly causing the death of a pet cat. This is a direct harm resulting from the use of an AI system. The harm is materialized and not hypothetical, fulfilling the criteria for an AI Incident. The presence of the AI system is explicit, the harm is direct, and the event is not merely a potential risk or a complementary update. Therefore, the classification is AI Incident.

Elon Musk Says Pets Will Be 'Saved' by Autonomous Vehicles

2025-10-31
TMZ
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Waymo's autonomous vehicle) whose operation directly caused the death of a pet, which qualifies as harm to property and communities under the incident framework. The article reports a realized harm caused by the AI system's use, not just a potential risk or general commentary. Therefore, this qualifies as an AI Incident.

Elon Musk Defends Cat-Killing Robotaxis

2025-10-31
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a self-driving car (an AI system) causing the death of a cat, which is a direct harm to a living being and the community. The AI system's malfunction or failure to stop after hitting the cat is central to the incident. The harm is realized and not hypothetical, meeting the criteria for an AI Incident. The subsequent social media reactions and Elon Musk's defense do not change the classification but provide context.

Elon Musk Wades Into the Debate Over Robotaxis Killing Cats. Guess Which Side He's On

2025-10-31
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article describes a concrete event where a Waymo autonomous vehicle, an AI system, ran over and killed a cat. This is a direct harm caused by the AI system's operation. The harm is materialized and specific, not hypothetical or potential. The involvement of the AI system is explicit and central to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Elon Musk reacts to San Francisco cat being killed by Waymo driverless car

2025-10-31
Newsweek
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, the Waymo driverless car, which is an autonomous AI system controlling the vehicle. The death of the cat is a direct harm caused by the AI system's failure to prevent the collision. Although the harm is to an animal, this fits within the definition of harm to property or communities. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction during its use.

Waymo Confirms Vehicle's Role In Death of 16th Street Bodega Cat, as Mourning Continues

2025-10-31
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a Waymo autonomous vehicle, which is an AI system, ran over and killed a cat. This is a direct harm caused by the AI system's use. The harm is to the cat (property/community harm) and has caused public distress. The involvement of the AI system is clear and direct, and the harm has materialized. Therefore, this event meets the criteria for an AI Incident.

Social media mourns beloved cat, KitKat, hit by Waymo car

2025-11-01
Mashable
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system (self-driving car) whose operation directly caused harm (death) to a cat. The incident involves the AI system's use and failure to avoid harm, leading to a fatality. This fits the definition of an AI Incident as the AI system's use directly led to harm to a community member (the cat) and the community mourns the loss. The event is not merely a hazard or complementary information but a realized harm caused by AI.

Social media mourns beloved cat hit by Waymo car

2025-11-01
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system (self-driving car) whose operation directly caused the death of the cat, a harm to a living being and the community. This fits the definition of an AI Incident because the AI system's use directly led to harm. The event is not merely a potential hazard or complementary information, but a realized harm caused by the AI system's malfunction or failure to avoid the cat.

San Francisco Bodega Cat's Death in Waymo Robotaxi Incident Sparks Safety Debate

2025-11-01
Bangla news
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves an AI system (Waymo's self-driving car) whose operation directly led to the death of a cat, which is harm to a community member (animal). The vehicle's sensors or AI perception system apparently failed to detect the cat, resulting in the fatality. This is a direct harm caused by the AI system's use. The event has prompted official investigation and public debate about safety, confirming the significance of the harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

San Francisco's Mission District is mourning an iconic bodega cat after it was run over by a Waymo

2025-11-02
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an autonomous vehicle (Waymo) hitting and killing a cat, which is a direct harm caused by the AI system's operation. The AI system's inability to detect the cat under the vehicle and prevent the accident is a malfunction or limitation leading to harm. The harm is realized and significant to the community and the pet owner. Thus, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Waymo killed my cat: California neighbourhood mourns beloved 'KitKat'

2025-11-03
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly: the Waymo robotaxi's autonomous driving system. The incident resulted in direct harm to a living being (the cat), which is a form of harm to communities and property. The AI system's malfunction or failure to detect and avoid the cat directly caused the harm. The article also references previous similar incidents involving Waymo vehicles, reinforcing the AI system's role in causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

'He brought warmth, smiles, and comfort': California neighborhood in mourning after local cat struck and killed by a Waymo robotaxi

2025-11-03
We Got This Covered
Why's our monitor labelling this an incident or hazard?
The Waymo driverless car is an AI system explicitly mentioned as responsible for the incident. The event involves the use of the AI system (the autonomous vehicle) leading directly to harm (the death of the cat). The harm is material and emotional to the community, fulfilling the harm criteria under (d) harm to property, communities, or the environment. The incident is not merely a potential risk but a realized harm, so it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

Waymo killed KitKat. California neighborhood mourns a corner-store cat

2025-11-03
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. The incident involved the AI system's use leading directly to the death of a cat, which is a harm to a living being and the community. The AI system's failure to detect or avoid the cat caused the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm. The article also references previous similar incidents, reinforcing the pattern of harm caused by the AI system's operation.

Waymo faces flak after robotaxi kills California cat: Report

2025-11-03
The Hindu
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system as it autonomously navigates and makes driving decisions. The incident involved the AI system's use leading directly to the death of a cat, which is a harm to property and communities (animal welfare). The vehicle did not slow down or avoid the cat, indicating a failure or limitation in the AI system's perception or decision-making. This meets the criteria for an AI Incident as the AI system's use directly caused harm. The event is not merely a potential hazard or complementary information but a realized harm caused by the AI system.

Waymo killed KitKat: Neighborhood mourns a corner-store cat

2025-11-03
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously or semi-autonomously. The incident describes the vehicle pulling away and running over the cat, causing its death. This is a direct harm caused by the AI system's use. The harm is realized and specific, involving injury and death of a living being. The involvement of the AI system is explicit and central to the event. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

A Waymo robotaxi killed a beloved S.F. cat. Now a city supervisor wants driverless car reform

2025-11-04
San Francisco Chronicle
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a Waymo robotaxi, which is an AI system operating autonomously, hitting and killing a cat. This is a direct harm to property and community sentiment. The incident has led to political responses and calls for legislation, indicating the harm is materialized and significant. The AI system's malfunction or failure to avoid the cat is central to the event. Hence, this is an AI Incident rather than a hazard or complementary information.

KitKat, San Francisco's internet-famous bodega cat, mourned after being struck by Waymo vehicle: "Shouldn't be on the street"

2025-11-03
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a Waymo autonomous vehicle, which uses AI systems for navigation and decision-making. The vehicle's AI system was operating when it struck the cat, causing its death. This is a direct harm caused by the AI system's malfunction or failure to act appropriately (failure to detect and avoid the cat). The harm is realized and significant to the community, as the cat was a beloved local figure. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Californians Vandalize Waymo Car After Beloved Bodega Cat's Death: 'Because if it Won't Stop for a Cat Why Should We Trust it With Children'

2025-11-03
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously without a human driver. The death of the cat caused by the vehicle is a direct harm to a living being, which falls under harm to property and communities. The AI system's failure to prevent this harm is central to the incident, and the public protests and vandalism demonstrate the broader societal impact. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Waymo killed my cat: California neighbourhood mourns beloved 'KitKat'

2025-11-03
dpa International
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. The incident involved the AI system's use leading directly to harm—the death of a pet cat. This fits the definition of an AI Incident because the AI system's operation directly caused harm to property (the cat) and affected the community emotionally. The article details the event and its consequences, not just potential or future harm, so it is not a hazard or complementary information. Therefore, the classification is AI Incident.

San Francisco Mourns 'Mayor of 16th Street' After Beloved Bodega Cat KitKat Fatally Struck by Waymo Robotaxi

2025-11-04
Santa Monica Observer
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. The vehicle's movement and failure to stop or avoid the cat directly caused the cat's death, which is a harm to property (the cat) and harm to the community's well-being and sentiment. The article explicitly links the AI system's use to the fatality and community impact. Hence, this is an AI Incident as the AI system's use directly led to harm.

Waymo Killed Kitkat. California Neighborhood Mourns A Corner-store Cat

2025-11-03
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system as it operates autonomously without a human driver. The incident describes the AI system's use leading to the death of a cat, a direct harm to a living being and the community that cared for it. Although the harm is to an animal rather than a person, it still qualifies under harm to communities or property. Therefore, this is an AI Incident due to the AI system's use directly causing harm.

Beloved Mission cat's death sparks call for local robotaxi oversight

2025-11-05
Axios
Why's our monitor labelling this an incident or hazard?
The incident involves a self-driving car, which is an AI system operating autonomously. The death of the cat is a direct harm caused by the AI system's operation. Although the harm is to an animal (property/community), it is a clear realized harm linked to the AI system's use. The article focuses on the incident and its consequences, including regulatory responses, making this an AI Incident rather than a hazard or complementary information.

San Francisco supervisor calls for new robotaxi rules after neighborhood cat killed by Waymo

2025-11-04
Yahoo
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously on public streets. The death of the cat due to the robotaxi's operation constitutes harm to a community member (animal) and the community's emotional well-being. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The article focuses on the incident and its consequences rather than potential future harm or general information, so it is not an AI Hazard or Complementary Information.

San Francisco mourns cat killed by Waymo self-driving car

2025-11-05
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a Waymo autonomous vehicle (an AI system) involved in a collision that killed a cat, a direct harm caused by the AI system's use. This meets the criteria for an AI Incident because the AI system's operation directly led to harm (death of the cat). Although the harm is to an animal rather than a human, the framework includes harm to property, communities, or the environment, which covers animals. Therefore, this event is classified as an AI Incident.

San Francisco supervisor calls for new robotaxi rules after neighborhood cat killed by Waymo

2025-11-04
ABC7 News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose use directly caused harm to a living being (the cat). This fits the definition of an AI Incident as it involves harm to property or communities (the cat as part of the community). The incident is not hypothetical or potential but has already occurred, and the AI system's operation is central to the harm. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

SF supervisor calls for change in rules for self-driving cars

2025-11-04
NBC Bay Area
Why's our monitor labelling this an incident or hazard?
The Waymo self-driving car, an AI system, directly caused harm by running over and killing a cat. This is a clear example of harm resulting from the use of an AI system. The article discusses the incident and the resulting calls for regulatory changes, but the primary event is the harm caused by the AI system's operation. Hence, it meets the criteria for an AI Incident.

SF Supervisor Holding Rally for Cat Slain by Waymo, Promises Autonomous Car Legislation

2025-11-04
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a Waymo self-driving car (an AI system) running over and killing a cat, which is a direct harm caused by the AI system's operation. This meets the definition of an AI Incident as the AI system's use directly led to harm to a living being (harm to property, communities, or the environment includes harm to animals). The subsequent rally and proposed legislation are responses to this incident and thus are complementary information but do not negate the incident classification. The incident is not hypothetical or potential; the harm has occurred, so it is not an AI Hazard. Therefore, the correct classification is AI Incident.

Waymo driverless taxi kills beloved bodega cat, KitKat, in San Francisco

2025-11-04
WPTV
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system operating autonomously. The incident describes the vehicle hitting and killing a cat, which is a direct harm caused by the AI system's use. The harm is realized and not hypothetical. The event involves the AI system's malfunction or failure to avoid harm to a living being. This fits the definition of an AI Incident due to harm to property, communities, or the environment (harm to an animal).

Waymo Haunted by Killing of Beloved Neighborhood Cat

2025-11-05
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle) whose use directly caused harm (the death of a cat). The incident has led to community backlash and political efforts to regulate autonomous vehicles, indicating the harm is realized and significant. The AI system's malfunction or failure to avoid the accident is central to the event. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

A Robotaxi Killed a Beloved Bodega Cat in San Francisco. People Are Pissed

2025-11-05
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. The event describes the AI system's use leading directly to the death of a cat, which constitutes harm to property and harm to the community. The incident is not hypothetical or potential but has already occurred, fulfilling the criteria for an AI Incident. The community reaction and calls for regulation further emphasize the significance of the harm caused. Hence, the classification as AI Incident is appropriate.

Death of beloved neighborhood cat sparks outrage against robotaxis in San Francisco

2025-11-05
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a Waymo autonomous vehicle, which is an AI system, struck and killed a cat named KitKat. This is a direct harm caused by the use of an AI system. The harm is to a living being within the community, which fits under harm to communities or property. The incident has sparked political and social responses, but the primary event is the harm caused by the AI system's operation. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

San Francisco leaders call for new checks on Waymo after death of beloved cat KitKat

2025-11-05
The Hill
Why's our monitor labelling this an incident or hazard?
The autonomous vehicle is an AI system whose operation directly led to the death of a pet, which is harm to property and community. The incident is a realized harm caused by the AI system's use, meeting the criteria for an AI Incident. The political calls for legislation are responses to this incident, not the main event. Hence, the classification is AI Incident.

Cat's Death Could Change Robotaxi Rules in SF

2025-11-05
Newser
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system operating autonomously. The death of the cat was directly caused by the vehicle's movement under AI control, constituting harm to property and community (the cat was a local fixture). The incident has led to calls for regulatory changes, indicating the harm is material and recognized. The AI system's involvement is explicit and central to the harm, fulfilling the criteria for an AI Incident.

Cat's death prompts call for driverless car laws in San Francisco - UPI.com

2025-11-05
UPI
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—Waymo's autonomous vehicle—and its use led directly to harm (the death of a cat). Although the harm is to an animal rather than a human, harm to property, communities, or the environment includes harm to animals and communities. The incident has prompted calls for regulation, indicating recognition of the AI system's role in causing harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's operation.

San Francisco Supervisor Calls for Robotaxi Reform After Waymo Kills Neighborhood Cat | KQED

2025-11-05
KQED
Why's our monitor labelling this an incident or hazard?
The autonomous vehicle's AI system directly caused harm by killing a cat, which is a harm to property and community (the cat being part of the community and valued by residents). The event also highlights broader safety concerns related to AI-driven robotaxis. Since the harm has already occurred and is linked to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.

Death of beloved neighborhood cat sparks outrage against robotaxis in San Francisco

2025-11-05
Head Topics
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose operation directly caused the death of a cat, a harm to property and community well-being. The harm is realized and not hypothetical, meeting the criteria for an AI Incident. The incident has also led to societal and political responses, but the primary classification is based on the direct harm caused by the AI system's use. There is no indication that this is merely a potential risk or a complementary update; the harm has occurred.

Death of KitKat, a beloved San Francisco cat, reignites fury over robotaxis

2025-11-06
Washington Post
Why's our monitor labelling this an incident or hazard?
The Waymo robotaxi is an AI system that autonomously navigates and operates a vehicle. The death of KitKat, caused by the robotaxi, is a direct harm resulting from the AI system's use. This fits the definition of an AI Incident as the AI system's use directly led to harm to a living being (harm to a community member's pet).

Cat's death provokes anger against San Francisco's autonomous taxis

2025-11-05
Ouest France
Why's our monitor labelling this an incident or hazard?
The autonomous taxi qualifies as an AI system: it relies on sensors and AI models to navigate and operate without a human driver. The cat's death is a direct harm caused by the system's operation (use). Although the victim is an animal, harm to property, communities, or the environment qualifies under the framework. The incident has prompted public and regulatory responses, but the core event is the AI system causing harm through its failure to avoid the collision. Therefore, this qualifies as an AI Incident.

In San Francisco, the death of a cat run over by an autonomous car stirs a wave of emotion and raises an essential question about the technology

2025-11-03
BFMTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose use directly caused harm (the death of a cat). The harm has been realized and is not hypothetical. The incident also highlights the limitations and risks of autonomous-vehicle AI systems, and its emotional impact on the community further supports the classification. Although the victim is an animal, the definitions include harm to property, communities, or the environment, and emotional harm to communities is recognized. Hence, this is an AI Incident rather than a hazard or complementary information.

KitKat the cat run over by an autonomous car: controversy over driverless vehicles

2025-11-06
Ladepeche.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as a Waymo autonomous taxi, which is an AI system performing autonomous driving. The death of the cat due to collision with this vehicle is a direct harm caused by the AI system's use. Although the victim is an animal, the harm to property, communities, or the environment includes harm to animals and local community well-being. Therefore, this qualifies as an AI Incident. The article also mentions ongoing investigations and public reactions, but the primary focus is the realized harm from the AI system's use, not just potential or complementary information.

The death of KitKat the cat, struck by a Waymo autonomous taxi, moves the United States

2025-11-04
Numerama.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as a Waymo autonomous taxi. The AI system's use directly led to the death of the cat, which constitutes harm to property and communities. The incident is not hypothetical or potential but has already occurred, fulfilling the criteria for an AI Incident. The discussion about sensor blind spots and AI decision-making further supports the AI system's role in causing harm. Hence, this is not merely a hazard or complementary information but a clear AI Incident.

What if a cat's death started the war against robot cars?

2025-11-06
Konbini - All Pop Everything : #1 Media Pop Culture chez les Jeunes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Waymo autonomous taxi) involved in an event where a cat was killed. This is a direct harm caused by the use of the AI system. Although the victim is an animal, harm to animals is considered harm to property, communities, or the environment under the definitions. The event is not speculative or potential but has already happened, so it is not a hazard. It is not merely complementary information or unrelated news. Hence, the classification is AI Incident.

Autonomous taxis: can a cat shake confidence in tomorrow's technology?

2025-11-04
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous driving system) whose use directly caused harm (the death of a cat). The harm affects the community and raises ethical and safety concerns about AI deployment in autonomous vehicles. The AI system's malfunction or failure to act appropriately is central to the incident. Although the victim is an animal, the harm to the community and property is significant and fits the definition of an AI Incident. Hence, the classification is AI Incident.

A robotaxi kills a celebrity San Francisco cat and sparks a political debate: "It could have been a child"

2025-11-07
7sur7
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo's autonomous vehicle) whose use directly caused harm (the death of a cat). The harm has been realized and is not hypothetical. The incident has also prompted political responses, but the primary classification is AI Incident because of the realized harm caused by the AI system's operation: the cat's death is a direct harm to property and to community sentiment.

Waymo prepares its arrival in Canada with official filings

2025-11-07
Leblogauto.com
Why's our monitor labelling this an incident or hazard?
Waymo's autonomous vehicles are AI systems by definition, and their planned expansion involves the use of AI for fully autonomous driving. The article focuses on lobbying and regulatory preparation, indicating that deployment is not yet realized. No harm or incident is reported, but the potential for harm exists if these vehicles operate without adequate regulation or safety measures. Hence, this is an AI Hazard reflecting plausible future harm from the use of AI systems in autonomous vehicles in Canada.

Waymo autonomous taxi: San Francisco in mourning after the death of a cat

2025-11-07
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Waymo autonomous taxi) whose use directly caused harm (death of a cat, a community mascot). The AI system's failure to detect and react to the cat, as described by witnesses and experts, indicates a malfunction or limitation in the AI's perception. This harm to property and the resulting community impact meet the criteria for an AI Incident. The political and social reactions are complementary but do not change the primary classification of the event as an incident.

Death of KitKat, a beloved San Francisco cat, reignites fury over robotaxis

2025-11-06
UnionLeader.com
Why's our monitor labelling this an incident or hazard?
The incident clearly involves an AI system (Waymo's self-driving car) whose operation directly caused harm (the death of the cat). The harm has been realized and is directly linked to the AI system's use. Although the victim is an animal, the death has had significant community impact and raises questions about AI system safety and accountability. Therefore, this qualifies as an AI Incident under the definition of harm to communities and property (the cat as a pet).

Cat killed by driverless taxi sparking neighbourhood fury

2025-11-06
The Telegraph
Why's our monitor labelling this an incident or hazard?
The driverless taxi is an AI system operating autonomously. The event describes the AI system's use leading directly to the death of a cat, which is harm to property, communities, or the environment under the definitions. The incident has already occurred, so it is not a potential hazard. The community reaction and the company's response further confirm the incident's impact. Hence, this is classified as an AI Incident.

Robotaxi runs over and kills popular cat that greeted people in a corner shop

2025-11-06
Metro
Why's our monitor labelling this an incident or hazard?
The event involves a self-driving car, which is an AI system performing autonomous navigation and decision-making. The death of the cat was directly caused by the AI system's operation (the vehicle running over the cat). This is a clear harm to property and community (the cat was a beloved local figure), fulfilling the criteria for an AI Incident. The article also highlights public concern and the potential for more serious harm (e.g., to children), but the realized harm here is the cat's death. Hence, the classification is AI Incident.

How a cat named KitKat became San Francisco's latest symbol of anti-tech rage

2025-11-08
The Star
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle) whose use directly caused harm (the death of a cat). This qualifies as an AI Incident because the AI system's operation led to injury (harm to an animal) and has triggered significant social and political consequences. The involvement of the AI system is explicit, and the harm is realized, not just potential. Therefore, this is classified as an AI Incident.

Cat killed by driverless taxi sparking neighbourhood fury

2025-11-06
AOL.com
Why's our monitor labelling this an incident or hazard?
The driverless taxi is an AI system as it operates autonomously to provide taxi services. The event describes the use of this AI system leading directly to harm—the death of a cat, which is a harm to property and community well-being. The incident has caused community outrage and raises questions about the safety and regulation of autonomous vehicles. The AI system's operation was a necessary factor in the harm, fulfilling the criteria for an AI Incident.

KitKat's Untimely Death vs San Francisco's 500 Overdoses Citywide

2025-11-07
California Globe
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (Waymo's autonomous vehicle) whose operation directly led to the death of a cat, a harm to property and community. Although the article contrasts this with human overdose deaths, those are unrelated to AI systems. The autonomous vehicle's sensors and algorithms failed to prevent the accident, constituting a malfunction or failure in use. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by an AI system's malfunction.

How a cat named KitKat became San Francisco's latest symbol of anti-tech rage

2025-11-09
The Star
Why's our monitor labelling this an incident or hazard?
The Waymo vehicle is an AI system (an autonomous vehicle) whose operation directly caused the death of KitKat the cat. This constitutes harm caused by the AI system's use. The incident has catalyzed public and political responses, indicating the harm is realized and significant. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Waymo expands its US robotaxi presence to Minneapolis, Tampa, and New Orleans

2025-11-20
de.marketscreener.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of AI systems (autonomous driving technology) in public transportation. However, the article does not describe any realized harm or incidents caused by these AI systems, nor does it indicate any immediate or plausible future harm arising from the expansion itself. Instead, it provides information about the development and rollout strategy of AI-powered robotaxi services, which is a significant development in the AI ecosystem but does not constitute an incident or hazard by itself.

Criticism of Waymo over run-over cat

2025-11-17
heise online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, Waymo's autonomous driving technology, which directly caused harm by running over a cat. The harm has been realized (the cat's death) and has led to community protests, indicating harm to the community and to property. The AI system's failure to prevent the collision is central to the incident, and prior related incidents further support classifying this as an AI Incident rather than a hazard or complementary information.

Zoox robotaxis are in service in San Francisco

2025-11-19
heise online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous vehicles) but does not describe any realized harm or incident resulting from their use, nor does it highlight any plausible future harm or risk. It is a factual report on the deployment and testing of AI-powered robotaxis, without any indication of incidents, hazards, or responses to such events. Therefore, it fits best as Complementary Information, providing context and updates on AI system deployment without reporting harm or risk.

Amazon subsidiary Zoox launches free robotaxi rides for early users in parts of San Francisco

2025-11-18
de.marketscreener.com
Why's our monitor labelling this an incident or hazard?
Zoox's robotaxi service clearly involves AI systems for autonomous driving. The article describes the deployment of these systems but reports no injury, damage, rights violation, or other harm caused by them, so no AI Incident has occurred. However, deploying autonomous vehicles inherently carries plausible risks of harm (e.g., accidents), so the event qualifies as an AI Hazard. The article does not focus on responses, legal proceedings, or updates to past incidents, so it is not Complementary Information; and it is not unrelated, because it directly concerns the deployment of AI systems.

Tesla stock in focus: Musk plans major changes for robotaxis

2025-11-20
finanzen.ch
Why's our monitor labelling this an incident or hazard?
Tesla's Robotaxi service uses AI systems for fully autonomous driving. The article reports malfunctions of these vehicles that have triggered safety investigations by authorities, indicating that the AI system's use has put public safety at risk. Although no specific accidents or injuries are described, the combination of malfunctions and official safety investigations meets the threshold for an AI Incident. The article's focus is on the development, deployment, and operational problems of an AI system that has caused safety-related issues, rather than on hypothetical future risks or governance responses. Therefore, the event is best classified as an AI Incident.

Waymo: cat run over in San Francisco - are robotaxis really safe?

2025-11-18
WAZ
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Waymo's autonomous vehicle) whose use directly caused harm (the death of a cat). The harm has been realized and has led to community disruption and public debate. This qualifies as an AI Incident because the AI system's use directly led to harm and societal impact; it is not merely a potential hazard or complementary information.

Tesla stock in focus: Musk announces new robotaxi offensive - criticism of Waymo's sensors

2025-11-21
finanzen.at
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of Tesla's autonomous driving and robotaxi AI. The mention of malfunctions and safety investigations indicates potential safety risks, which could plausibly lead to injury or harm if the AI systems fail. However, no actual harm or accidents are reported, so this is a potential risk rather than a realized incident. The article also discusses sensor conflicts in Waymo's AI system, which is relevant context but does not describe a new incident. Thus, the event fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to harm, but no direct or indirect harm has yet occurred according to the article.

Waymo under fire: cat's death ignites debate over autonomous vehicles

2025-11-16
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
An autonomous vehicle is an AI system by definition, as it infers from input data how to navigate and make driving decisions. The incident involves the use of such an AI system leading directly to harm (death of the cat). Although the harm is to an animal rather than a human, harm to property, communities, or the environment includes harm to animals and communities affected by such incidents. Therefore, this qualifies as an AI Incident. The article also discusses societal and regulatory responses, but the primary focus is the incident itself and the harm caused.

Waymo pushes autonomous mobility forward in the US

2025-11-18
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (autonomous driving AI) whose deployment without safety drivers increases the risk of harm. Although no actual harm or incident is reported, the removal of human oversight in real-world autonomous vehicle operation plausibly increases the risk of accidents or other harms. The article's mention of regulatory investigations further supports the recognition of potential hazards. Hence, this is best classified as an AI Hazard rather than an Incident or Complementary Information.

Zoox launches free robotaxi rides in San Francisco

2025-11-18
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous robotaxis) in active use. There is mention of a past minor injury incident related to the AI system, but it is historical and has been mitigated by software updates. The current launch of free rides is a deployment step and user feedback gathering, with no new harm or plausible imminent harm described. The article also discusses public concerns and previous incidents as context. Since no new AI Incident or AI Hazard is reported, and the main focus is on deployment and safety updates, the classification is Complementary Information.

Tesla plans to expand robotaxi service despite criticism of Waymo

2025-11-19
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned expansion of AI systems for autonomous driving (robotaxis). No actual incident or harm is reported, but the article highlights regulatory investigations and the risk of accidents, which are plausible harms that could arise from malfunction or misuse of the AI system. The situation therefore fits the definition of an AI Hazard: deploying these autonomous AI systems could plausibly lead to injury to people or disruption of infrastructure if accidents occur. Because no harm has been realized, it is not an AI Incident; and because the article focuses on the expansion and its associated risks rather than on responses or ecosystem context, it is not merely Complementary Information. Hence, the classification is AI Hazard.