GM's Cruise Robotaxis Resume Testing After Pedestrian Crash Amid Origin Suspension


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

GM’s Cruise autonomous taxi division has indefinitely shelved its Origin robotaxi, which has no steering wheel or pedals, after a Cruise vehicle struck and dragged a pedestrian, triggering investigations and the revocation of its California operating permits. GM shifted development to a next-generation Bolt EV platform and has resumed testing with safety drivers in Dallas, Houston and Phoenix while awaiting regulatory approvals.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an accident where a Cruise autonomous vehicle struck a pedestrian and dragged her 6 meters, which is a direct injury to a person caused by the AI system's operation. This qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the use of an AI system. The decision to halt production of the Origin vehicle and focus on other models is a response to this incident and regulatory challenges, but the core event is the accident and its consequences. Therefore, the event is classified as an AI Incident.[AI generated]
AI principles
Safety
Robustness & digital security
Accountability
Transparency & explainability
Human wellbeing

Industries
Mobility and autonomous vehicles
Robots, sensors, and IT hardware

Affected stakeholders
General public

Harm types
Physical (injury)
Economic/Property
Reputational

Severity
AI incident

Business function
Research and development
Monitoring and quality control

AI system task
Recognition/object detection
Forecasting/prediction
Goal-driven organisation
Reasoning with knowledge structures/planning


Articles about this incident or hazard


GM kills autonomous car without steering wheel

2024-07-23
BusinessLIVE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an accident where a Cruise autonomous vehicle struck a pedestrian and dragged her 6 meters, which is a direct injury to a person caused by the AI system's operation. This qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the use of an AI system. The decision to halt production of the Origin vehicle and focus on other models is a response to this incident and regulatory challenges, but the core event is the accident and its consequences. Therefore, the event is classified as an AI Incident.

GM puts self-driving car without steering wheel and pedals on hold

2024-07-23
Motor Authority
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Cruise's self-driving technology) used in robotaxis. The accident in San Francisco where a pedestrian was injured due to the robotaxi's actions directly involves the AI system's malfunction or failure to prevent harm, fulfilling the criteria for an AI Incident (harm to a person). The subsequent regulatory and operational decisions are responses to this incident. The event is not merely a potential hazard or complementary information but a realized harm caused by the AI system's use.

GM puts self-driving vehicle without steering wheel on hold

2024-07-23
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of a self-driving vehicle designed to operate without human controls, which implies advanced AI for autonomous driving. However, since the vehicle without controls has not been deployed and no harm or incident has occurred, this situation represents a plausible future risk rather than an actual incident. The regulatory petition and the hold on development indicate potential future hazards if such vehicles were deployed without sufficient safety validation. Therefore, this is best classified as an AI Hazard due to the plausible risk of harm from deploying fully autonomous vehicles without human oversight.

GM Shares Dip 6% After Halting Cruise Self-Driving Cars

2024-07-23
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a pedestrian being struck and dragged by a Cruise autonomous vehicle, which is an AI system. This constitutes direct harm to a person (harm category a). The involvement of regulatory investigations and internal reviews confirms the incident's seriousness. The AI system's malfunction or failure to prevent this harm is central to the event. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

GM's Cruise Ditches Origin Robotaxi for Self-Driving 2025 Bolt EV

2024-07-23
Yahoo Autos
Why's our monitor labelling this an incident or hazard?
The article involves AI systems as it discusses autonomous vehicles and their testing, which rely on AI for self-driving capabilities. However, it does not describe any new or specific AI incident causing harm, nor does it highlight a plausible future harm from the AI systems. The mention of past incidents is background context, and the main content is about strategic and regulatory developments and company updates. Therefore, this is best classified as Complementary Information, providing updates and context on AI system development and deployment without reporting a new incident or hazard.

GM-owned Cruise has lost interest in cars without steering wheels. Its competitors haven't

2024-07-24
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article focuses on strategic and regulatory decisions by GM regarding their autonomous vehicle program, referencing past incidents but not reporting new harm or malfunction caused by AI systems. It discusses potential risks and regulatory hurdles but does not describe an event where AI use has directly or indirectly led to harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides complementary information about the evolving AI ecosystem in autonomous vehicles, including company responses and industry trends.

GM's Cruise looks to start charging for robotaxi rides next year, Bloomberg News reports

2024-07-25
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article references a past AI Incident in which a Cruise autonomous vehicle caused physical harm to a pedestrian, leading to regulatory actions and investigations. The current plans to resume service and charge fares are future intentions and do not themselves constitute a new incident or hazard. Because the article mainly reports on the company's plans and the regulatory context following that incident, rather than on new harm or an imminent hazard, it is classified as Complementary Information updating the aftermath of, and responses to, the prior AI Incident.

G.M. Will Restart Cruise Taxi Service

2024-07-23
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a self-driving AI system (Cruise's autonomous vehicles) that caused harm by hitting and dragging a pedestrian, which is a direct injury to a person. This meets the criteria for an AI Incident as the AI system's malfunction directly led to harm. The regulatory revocation and company layoffs further confirm the seriousness of the incident. The restart of operations with human safety drivers is a mitigation step but does not change the classification of the original event as an AI Incident. Therefore, the overall event is classified as an AI Incident due to the realized harm caused by the AI system's use.

GM slams brakes on self-driving vehicle without steering wheel - ET Auto

2024-07-24
ETAuto.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous driving technology used by Cruise's robotaxis. It references a specific incident where a Cruise robotaxi struck and dragged a pedestrian, causing harm and triggering investigations and permit revocation, which meets the criteria for an AI Incident (harm to a person). The current decision to delay the Origin vehicle deployment is related to regulatory and technological challenges but does not negate the fact that harm has already occurred due to AI system use. The article also discusses ongoing testing and development, but the presence of a past harmful event linked to the AI system's use takes precedence, making this an AI Incident rather than a hazard or complementary information.

GM indefinitely delays Cruise Origin autonomous vehicle

2024-07-23
CNBC
Why's our monitor labelling this an incident or hazard?
The Cruise Origin autonomous vehicle is an AI system designed for autonomous driving. The article reports a specific incident where a pedestrian was harmed due to the robotaxi's operation, which directly involves the AI system's malfunction or failure. The harm to the pedestrian constitutes injury to a person, meeting the definition of an AI Incident. The regulatory and operational responses are consequences of this incident. Therefore, the event is classified as an AI Incident rather than a hazard or complementary information.

GM indefinitely suspends its self-driving vehicle

2024-07-23
Financial Times News
Why's our monitor labelling this an incident or hazard?
The autonomous vehicle Origin is an AI system designed for self-driving ride-sharing without human controls. The reported crash involving a Cruise autonomous vehicle, which is part of GM's self-driving subsidiary, led to a pedestrian being hit and dragged, constituting injury to a person. The regulatory response and internal investigation highlight failures in transparency and accountability related to the AI system's operation. Although the suspension itself is a business decision, it is directly linked to the harm caused by the AI system's malfunction and regulatory fallout. Therefore, this qualifies as an AI Incident due to the realized harm and the AI system's role in it.

GM's Cruise abandons Origin robotaxi, takes $583 million charge

2024-07-23
Yahoo
Why's our monitor labelling this an incident or hazard?
The Origin robotaxi is an AI system (an autonomous vehicle without steering wheel or pedals). The article references a specific incident where a Cruise robotaxi dragged a pedestrian after being initially hit by a human-driven car, leading to suspension of permits by California regulators. This incident constitutes harm to a person (pedestrian injury) and disruption of operations (regulatory suspension). The decision to scrap the Origin project is a direct consequence of this incident and regulatory uncertainty. Hence, the event involves an AI system's malfunction and use leading to realized harm and operational disruption, fitting the definition of an AI Incident.

GM slams brakes on self-driving vehicle without steering wheel

2024-07-25
GMA Network
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (autonomous driving AI) and references a past AI Incident (the pedestrian accident caused by a Cruise robotaxi). The current event is a corporate decision to delay a product due to regulatory and technological challenges following that incident. Since the article focuses on the response and ongoing developments after a known AI Incident, it qualifies as Complementary Information rather than a new AI Incident or AI Hazard. There is no new harm or plausible future harm described beyond the existing incident and its aftermath.

GM indefinitely pauses Cruise Origin autonomous vehicle while it...

2024-07-24
New York Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous driving system of the Cruise Origin robotaxi) whose malfunction directly led to injury of a person, fulfilling the criteria for an AI Incident. The harm is realized (a woman was struck and dragged), and the AI system's malfunction was identified as the cause. The company's response and strategic shift are consequences of this incident but do not negate the classification as an AI Incident. The event is not merely a hazard or complementary information, as the harm has already occurred and is directly linked to the AI system's malfunction.

GM Puts Self-Driving Vehicle Without Steering Wheel on Hold

2024-07-23
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article centers on GM's decision to halt development of a fully driverless vehicle without human controls and focus on a different model, citing regulatory uncertainty and resource optimization. While the article references a prior AI Incident (the pedestrian accident caused by a Cruise robotaxi), this is not the primary focus of the article. The current event does not describe a new harm or a new plausible harm but rather a corporate response to past events and regulatory environment. Therefore, this is Complementary Information providing an update on the AI ecosystem and responses to a prior AI Incident, not a new AI Incident or AI Hazard.

Elon Musk is not answering the most important questions about the Tesla robotaxi

2024-07-23
The Verge
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous vehicles) and their regulatory environment, focusing on exemptions and investigations. There is no direct or indirect harm reported, nor a specific incident of malfunction or misuse causing harm. However, the regulatory challenges and investigations suggest plausible future risks related to safety and compliance. Therefore, this qualifies as an AI Hazard, as the development and deployment of these autonomous AI systems could plausibly lead to incidents if safety standards are not met or if regulatory oversight is insufficient.

GM ditches Cruise's custom-designed driverless car

2024-07-23
The Verge
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Cruise's driverless car was involved in multiple incidents, including one where a pedestrian was hit and dragged, which is a direct injury to a person caused by the AI system's malfunction or failure. This meets the definition of an AI Incident due to harm to health. The regulatory uncertainty and suspension of production are consequences of these incidents. The use of human safety drivers in testing further indicates the AI system's involvement in the incidents. Therefore, this event is classified as an AI Incident.

Cruise Scraps 'Origin' Robotaxi, Will Stick With an Old Favorite Instead

2024-07-23
PCMag UK
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicle technology, but it primarily discusses the discontinuation of a particular vehicle model due to regulatory and cost issues, and references a past pedestrian-related accident without detailing new harm or ongoing risk. There is no direct or indirect new harm caused by AI reported here, nor a clear plausible future harm beyond existing regulatory challenges. The content mainly provides updates on company strategy, regulatory environment, and industry developments, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Cruise Scraps 'Origin' Robotaxi, Will Stick With an Old Favorite Instead

2024-07-23
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicle technology but does not describe a new incident where AI use has directly or indirectly caused harm. It also does not present a new plausible hazard event but rather discusses regulatory and development challenges and company decisions. The mention of past incidents is background context, and the main focus is on strategic shifts and regulatory environment. Therefore, this is best classified as Complementary Information, providing context and updates on AI system deployment and governance challenges in autonomous vehicles.

GM shelves the autonomous Cruise Origin shuttle van

2024-07-23
engadget
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an autonomous vehicle system, which by definition involves AI for navigation and control. The article mentions a pedestrian being dragged by a Cruise vehicle, which is a direct injury linked to the AI system's operation. Additionally, the regulatory uncertainty and operational pauses are consequences of this harm and safety concerns. These factors meet the criteria for an AI Incident, as the AI system's use has directly led to harm to a person and disruption of operations. The shelving of the Origin project is a response to these incidents and regulatory challenges, not merely a future risk or complementary information.

GM's Cruise abandons Origin robotaxi, takes $583 million charge | TechCrunch

2024-07-23
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article references a prior AI Incident where a Cruise robotaxi caused harm to a pedestrian, leading to regulatory suspension and operational halts. The current event is about the company's decision to abandon the Origin robotaxi project and restructure, which is a response to that incident and regulatory challenges. Since the article's main focus is on the company's response and financial write-offs following the incident, and not on a new harm or new hazard, this qualifies as Complementary Information. The prior incident is background context, and the current news enhances understanding of the ongoing impact and responses related to that AI Incident.

GM Slams Brakes on Self-Driving Robotaxi Without Steering Wheel

2024-07-24
Inc.
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous driving technology used in robotaxis. It references a past AI Incident where a Cruise robotaxi struck a pedestrian, causing harm and regulatory investigations, which qualifies as an AI Incident due to direct harm caused by the AI system's use. The current news about pausing the Origin vehicle development and shifting focus to a different platform is primarily an update on the company's response and strategic decisions following the incident and regulatory challenges, without new harm occurring. Therefore, the main content is Complementary Information as it provides context and updates related to a prior AI Incident rather than reporting a new incident or hazard.

GM indefinitely pauses Cruise Origin autonomous vehicle while it refocuses unit

2024-07-23
Fox Business
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an accident where a Cruise robotaxi struck and dragged a pedestrian 20 feet, which is a direct harm to a person caused by the autonomous vehicle's AI system. The event led to a pause in production, executive firings, and a government investigation, confirming the severity and direct link to AI system malfunction. This fits the definition of an AI Incident as the AI system's malfunction directly led to injury and regulatory repercussions.

GM slams brakes on robotaxi dreams, shelves Cruise Origin indefinitely

2024-07-26
TechSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system involved in autonomous driving (Cruise's self-driving vehicles). It references a past AI Incident where a Cruise vehicle struck and dragged a pedestrian, causing harm and regulatory action. However, the main focus of this article is the announcement of shelving the Cruise Origin project and the company's strategic response to prior incidents and regulatory challenges. There is no new harm or new plausible future harm described here; rather, it is an update on the company's response to previous incidents and ongoing challenges. Therefore, this article constitutes Complementary Information, providing context and updates related to a prior AI Incident but not describing a new Incident or Hazard.

GM slams brakes on self-driving vehicle without steering wheel

2024-07-23
CNA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the autonomous driving AI used in Cruise's robotaxis. The accident involving a Cruise robotaxi that struck and dragged a pedestrian constitutes direct harm to a person caused by the AI system's use, fulfilling the criteria for an AI Incident. The article also discusses regulatory and operational responses to this incident. Although the main focus is on GM's strategic shift and regulatory challenges, the underlying cause is the AI system's involvement in a harmful event. Therefore, this qualifies as an AI Incident due to the realized harm (injury to a pedestrian) directly linked to the AI system's operation.

GM's Cruise looks to start charging for robotaxi rides next year, Bloomberg News reports

2024-07-25
CNA
Why's our monitor labelling this an incident or hazard?
The article mentions a previous AI Incident involving a Cruise robotaxi causing injury to a pedestrian, which is a clear AI Incident. The current report focuses on the company's future plans to resume autonomous rides and charge fares, which does not itself describe a new incident or hazard but relates to ongoing developments following the prior incident. Therefore, this article is best classified as Complementary Information, as it provides an update on the AI system's deployment and regulatory context without reporting new harm or plausible future harm.

GM indefinitely delays plans of self-driving vehicle without steering wheel

2024-07-24
Business Standard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Cruise's autonomous driving technology) that directly caused harm to a person (a pedestrian struck and dragged by a robotaxi). The involvement of regulatory investigations and permit revocation further confirms the seriousness of the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and malfunction.

GM's Cruise looks to start charging for robotaxi rides next year, Bloomberg News reports

2024-07-25
ThePrint
Why's our monitor labelling this an incident or hazard?
The article describes Cruise, a GM unit, preparing to operate fully autonomous rides, which involves AI systems for self-driving. While no harm is reported or implied, the deployment of such AI systems in public transportation carries plausible risks of harm (e.g., accidents, safety issues). However, since no incident or harm has occurred yet, and the article focuses on future plans, this qualifies as an AI Hazard due to the plausible future harm from the use of autonomous vehicles.

GM slams brakes on self-driving vehicle without steering wheel

2024-07-23
ThePrint
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an AI system (fully autonomous vehicle) whose malfunction (accident involving a pedestrian) directly caused harm to a person, fulfilling the criteria for an AI Incident. The article details the consequences of this incident, including investigations and regulatory responses. The decision to halt production and switch platforms is a response to this incident. Therefore, this event is classified as an AI Incident due to the realized harm caused by the AI system's malfunction and its direct involvement in the accident.

GM delays Origin self-driving vehicle without a steering wheel, citing regulatory risk

2024-07-24
Fast Company
Why's our monitor labelling this an incident or hazard?
The Origin vehicle is a fully autonomous AI system, and its deployment without human controls could plausibly lead to AI incidents involving safety risks. GM's decision to delay deployment due to regulatory risk indicates awareness of potential harm. Since no harm has occurred yet and the event centers on the potential for future harm from the AI system's use, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it directly involves an AI system and its deployment risks.

The Cruise Origin driverless pod is dead, GM tells investors

2024-07-23
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article references a past AI Incident where a Cruise robotaxi caused injury to a pedestrian, which involved an AI system malfunction or failure. The current news does not report a new incident or hazard but rather a corporate decision to discontinue a specific autonomous vehicle model due to regulatory and operational challenges. This update enhances understanding of the AI ecosystem and responses to prior incidents but does not itself describe a new AI Incident or AI Hazard. Hence, it fits the definition of Complementary Information.

GM puts self-driving vehicle without steering wheel on hold

2024-07-23
Times LIVE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the autonomous driving system in Cruise's robotaxi) involved in an accident causing injury to a pedestrian, which is a direct harm to a person. The incident led to investigations and revocation of permits, indicating recognized harm and malfunction. The event is not merely a potential risk or future hazard but a realized incident with direct consequences. The decision to pause development of the fully autonomous vehicle without human controls is a response to this incident, but the core event is the accident and its aftermath, qualifying it as an AI Incident rather than a hazard or complementary information.

GM's Cruise self-driving vehicle without steering wheel put on hold - Autoblog

2024-07-23
Autoblog
Why's our monitor labelling this an incident or hazard?
The Cruise autonomous vehicle is an AI system performing complex real-time decision-making for driving. The October accident where a Cruise robotaxi struck and dragged a pedestrian is a direct harm to a person caused by the AI system's malfunction or failure. The regulatory investigations and permit revocation further confirm the incident's severity. Although the article also covers the halting of the Origin vehicle and strategic shifts, the core event is the realized harm from the AI system's use. Hence, this qualifies as an AI Incident under the framework.

General Motors is scaling back its Google-rivaling AI product

2024-07-23
Post and Courier
Why's our monitor labelling this an incident or hazard?
The autonomous vehicles operated by Cruise and Waymo use AI systems for driving decisions. The crash involving a Cruise robotaxi hitting a pedestrian and the subsequent permit revocation and fleet removal demonstrate direct harm linked to AI system use. The NHTSA investigation into Waymo's systems for collisions and traffic violations further supports the presence of harm or risk caused by AI malfunction or use. Therefore, these events qualify as AI Incidents due to realized harm and operational disruption caused by AI systems in autonomous vehicles.

GM-owned Cruise has lost interest in cars without steering wheels. Its competitors haven't

2024-07-24
Fortune
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicle technology, but it does not describe any new harm caused by these systems. The mention of a past accident is background context, not a new incident. The focus is on GM's strategic decision to delay production of a fully autonomous vehicle without manual controls due to regulatory and cost concerns, which is a governance and operational update. Competitors' ongoing development of similar vehicles is also noted but without harm occurring. Thus, the event is an update on AI deployment and regulatory challenges, fitting the definition of Complementary Information rather than an Incident or Hazard.

Major brand nixes self-driving cars with no steering wheel despite $600m blow

2024-07-23
The US Sun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically an autonomous vehicle with AI-based driving capabilities. The article references a prior incident where a Cruise self-driving car caused physical harm to a person (hitting and dragging a woman), which led to regulatory action. This prior incident constitutes an AI Incident due to direct harm to a person caused by the AI system's malfunction or failure. Although the current article focuses on the production halt and strategic decisions, the context includes a realized harm caused by the AI system. Therefore, the event is best classified as an AI Incident, as it relates to the consequences of AI system use leading to injury and regulatory response.

GM's Cruise Gives Up On Steering Wheel-less Autonomous Vehicle, Will Use Next-Gen Chevy Bolt Instead

2024-07-23
Jalopnik
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicles and references past AI incidents (the pedestrian accident). However, the main focus is on GM's strategic decision to pause the Origin vehicle and switch to a different platform due to regulatory and cost considerations. There is no new harm or new plausible future harm described. The past incident is background context, and the current event is a corporate decision and regulatory update. Therefore, this is best classified as Complementary Information, as it provides an update on the AI ecosystem and company responses rather than reporting a new AI Incident or AI Hazard.

GM Cruise Ditches Boxy Robotaxis for Chevy Bolt EVs

2024-07-23
The Drive
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicle technology developed and used by Cruise. However, the event focuses on a strategic business decision to discontinue a particular AI-enabled vehicle model due to cost and regulatory issues, rather than describing any new harm or malfunction caused by the AI system. Although past incidents and regulatory actions are mentioned, this article does not report any new AI Incident or AI Hazard. It provides context and updates on the company's response to previous challenges and its future plans, which fits the definition of Complementary Information.

GM CEO Mary Barra said "regulatory uncertainty" led to axing of autonomous cab with no steering wheel

2024-07-24
Carscoops
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a collision involving a Cruise autonomous vehicle that struck and dragged a pedestrian, leading to regulatory suspension of testing licenses. This is a direct harm to a person caused by the AI system's malfunction or failure. Additionally, the regulatory uncertainty and operational pauses are disruptions to critical infrastructure (transportation services). The cancellation of the Origin project is a consequence of these harms and regulatory challenges. Thus, the event meets the criteria for an AI Incident as the AI system's use directly led to injury and operational disruption.

GM delays self-driving vehicle production, shifts focus to new EV

2024-07-25
news.cgtn.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in autonomous vehicles, specifically the Cruise self-driving unit's development efforts. The delay and shift in focus are due to regulatory and engineering challenges, indicating potential future risks in deploying fully autonomous vehicles. However, there is no indication of any harm, malfunction, or incident caused by the AI systems. Therefore, this qualifies as an AI Hazard because the development and deployment of fully autonomous vehicles could plausibly lead to incidents in the future, but no incident has occurred yet. It is not Complementary Information because the article is not about responses to a past incident but about a strategic shift due to regulatory risk. It is not Unrelated because AI systems are central to the event.

General Motors pauses self-driving vehicle development for Chevy Bolt 2.0

2024-07-23
TESLARATI
Why's our monitor labelling this an incident or hazard?
The article describes a strategic pause in the development of an AI system (self-driving vehicle) but does not report any realized harm or direct risk of harm from the AI system. There is no mention of accidents, failures, or misuse. The regulatory uncertainty and production pause do not constitute an AI Incident or AI Hazard. The content is primarily about corporate strategy and regulatory context, which fits the category of Complementary Information as it provides context and updates on AI system development without describing harm or plausible harm.

Former Cruise CEO responds to GM canceling Origin self-driving vehicle

2024-07-23
TESLARATI
Why's our monitor labelling this an incident or hazard?
The article describes a past AI Incident involving a Cruise robotaxi accident that caused harm to a pedestrian, which led to regulatory and corporate consequences. However, the main focus of the article is on GM's strategic decision to cancel the Origin vehicle and shift focus, along with leadership responses and company restructuring. There is no new incident or hazard described; rather, the article provides complementary information about the aftermath and ongoing developments related to the prior incident and the AI system's deployment. Therefore, this is best classified as Complementary Information.

GM Shifts Focus to Next-Gen Bolt Amid Self-Driving Tech Challenges | Technology

2024-07-23
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear as Cruise's autonomous vehicles rely on AI for self-driving capabilities. The article explicitly mentions an incident where a robotaxi struck a pedestrian, which constitutes injury or harm to a person, fulfilling the criteria for an AI Incident. The regulatory hurdles and investigations are consequences of this harm. The shift in focus to the next-gen Bolt is a response to these challenges but does not negate the fact that harm has already occurred due to the AI system's use. Hence, the event is classified as an AI Incident.

GM's Cruise Shifts Focus to Next-Gen Chevrolet Bolt for Autonomous Future | Law-Order

2024-07-23
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Cruise's autonomous vehicles) involved in a prior incident causing harm to a pedestrian, which qualifies as an AI Incident. However, the main focus of this article is on the company's decision to pivot development strategy and regulatory context, not on a new incident or hazard. Therefore, the article itself is best classified as Complementary Information, providing updates and context related to a previously reported AI Incident.

GM Halts Autonomous Cruise Origin, Shifts Focus to Chevrolet Bolt | Technology

2024-07-23
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous driving technology and robotaxis. However, the event focuses on the delay and regulatory challenges rather than any realized harm or malfunction causing injury, rights violations, or property/community/environmental harm. The mention of a prior accident is background and not the main event. The shift to a vehicle with human controls reduces regulatory risk and potential harm. Therefore, this is not an AI Incident or AI Hazard but rather complementary information about industry and regulatory developments and company strategy in AI deployment.

GM's Cruise targets resumption of driverless rides this year

2024-07-25
The Business Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Cruise's autonomous driving technology) involved in a collision with a pedestrian causing serious injury, which led to regulatory sanctions. This is a direct harm to a person caused by the AI system's malfunction or failure. The article also discusses ongoing remediation and safety improvements, but the primary event of harm has already occurred. Hence, the event meets the criteria for an AI Incident due to direct injury caused by the AI system's use and malfunction.

GM's Cruise Robotaxis Very Softly Relaunch, But Not In SF, and With Human Drivers Again

2024-07-23
SFist - San Francisco News, Restaurants, Events, & Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly references a past AI Incident involving Cruise's autonomous vehicles causing serious harm to a pedestrian, which led to regulatory actions and operational suspension. The current relaunch involves human drivers supervising the AI system, no paying passengers, and a much smaller fleet, indicating risk mitigation. There is no indication of new harm or plausible future harm beyond what is already known. The main focus is on the company's operational update and financial performance, which aligns with Complementary Information as it provides context and updates related to a prior AI Incident rather than reporting a new incident or hazard.

GM slams brakes on self-driving vehicle without steering wheel (Reuters)

2024-07-23
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article discusses the postponement of a fully autonomous vehicle without a steering wheel or human controls, which relies on AI systems for self-driving. However, no harm or incident has occurred; rather, the event concerns development and regulatory challenges. There is no indication of direct or indirect harm caused by the AI system, nor a plausible imminent harm event. Therefore, this is not an AI Incident or AI Hazard but rather an update on AI system development and regulatory status, fitting the category of Complementary Information.

GM's Cruise Will Standardize On The Chevy Bolt

2024-07-25
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a past AI Incident involving Cruise's autonomous vehicle causing harm (a pedestrian accident), but the main content is about the company's strategic pivot to the Chevy Bolt platform and leadership changes. No new harm or potential harm is described. The focus is on operational updates and mitigation steps, which fits the definition of Complementary Information rather than a new AI Incident or AI Hazard.

GM Gives Up On Cruise Origin and Pivots to Chevy Bolt EUV for Autonomous Taxi Service

2024-07-23
The Truth About Cars
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in autonomous vehicles (Cruise Origin and Chevy Bolt EUV) used for taxi services. The discontinuation of the Origin due to regulatory issues and testing problems with Bolt EV autonomous taxis suggests challenges in AI system deployment. However, no actual harm (injury, rights violation, property damage, etc.) is described. The event highlights potential future risks and regulatory concerns, fitting the definition of an AI Hazard rather than an Incident. It is not merely complementary information because it focuses on the project's pivot due to regulatory and safety challenges, indicating plausible future harm if issues persist.

Report: GM's Cruise Plans to Resume Offering Rides Before 2025

2024-07-26
The Truth About Cars
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Cruise's autonomous driving technology) and references a past incident (a crash involving a pedestrian) that led to regulatory action. However, the current report focuses on the company's efforts to improve safety, management restructuring, and plans to resume operations. There is no new harm reported, nor a new hazard identified. The content primarily provides complementary information about the AI system's development, safety improvements, and regulatory relations following a prior AI incident.

GM's Cruise Targets Resumption of Driverless Rides This Year

2024-07-25
Transport Topics
Why's our monitor labelling this an incident or hazard?
The article explicitly references a prior AI Incident involving a Cruise autonomous vehicle causing injury to a pedestrian, which led to regulatory consequences. The current news is about the company's efforts to resume operations safely and regain regulatory approval. There is no new harm or plausible future harm described beyond the past incident. Therefore, this article is best classified as Complementary Information, providing an update on responses and remediation following a known AI Incident.

Cruise halts development of no-steering wheel vehicle

2024-07-23
Verdict
Why's our monitor labelling this an incident or hazard?
The article explicitly references a Cruise robotaxi hitting a pedestrian, which is a direct harm to a person caused by an AI system (the autonomous driving system). This meets the criteria for an AI Incident. The halting of development and regulatory delays are contextual but do not negate the fact that an AI Incident has occurred. Therefore, the event is classified as an AI Incident due to the realized harm from the autonomous vehicle's AI system.

General Motors is scaling back its Google-rivaling AI product

2024-07-23
The Wichita Eagle
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of autonomous vehicle technologies developed and deployed by Cruise and Waymo. The pedestrian crash caused by a Cruise robotaxi is a direct harm to a person, fulfilling the criteria for an AI Incident. The regulatory scrutiny and operational halts following the crash further confirm the harm has materialized. The investigation into Waymo's systems for causing collisions or traffic violations also indicates realized harm or risk that has prompted official action. Therefore, the event qualifies as an AI Incident due to the direct involvement of AI systems in causing harm and operational disruptions.

GM's Self-Driving Vehicle Without Steering Wheel on Hold - Carrier Management

2024-07-23
Carrier Management
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of autonomous vehicles developed by GM's Cruise unit. There is mention of a past AI Incident: the October accident where a Cruise robotaxi struck a pedestrian, which led to investigations and permit revocation. However, this article does not report a new AI Incident or a new AI Hazard. Instead, it focuses on GM's strategic shift, regulatory status, and financial impacts related to the autonomous vehicle program. The ongoing investigations and regulatory scrutiny are complementary information about previously reported AI Incidents. The decision to pause the Origin vehicle development is a business and regulatory response, not a new hazard or incident. Therefore, this article is best classified as Complementary Information.

GM Abandons Futuristic Cruise Origin Taxi For Cheaper Self Driving Bolt EV

2024-07-24
AutoSpies.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (autonomous vehicles) and their development and use. However, it does not describe any direct or indirect harm caused by these AI systems, nor does it describe a plausible future harm event. The issues mentioned are regulatory and testing problems, not an incident or hazard. Therefore, this is best classified as Complementary Information, providing context on the AI ecosystem and development challenges rather than reporting an AI Incident or Hazard.


2024-07-23
BruneiDirect
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an autonomous vehicle, which by definition involves AI systems for navigation and decision-making. The article reports a pedestrian being dragged and pinned by a Cruise vehicle after a hit-and-run incident, which is a direct harm linked to the AI system's operation. Additionally, the California DMV suspended Cruise's driverless permits over safety issues, and Cruise paused driverless operations, indicating recognized risks and harms. These facts demonstrate that the AI system's use has directly or indirectly led to harm, fulfilling the criteria for an AI Incident. The shelving of the Origin and the restructuring costs further support the seriousness of the incident.

GM delays self-driving vehicle production, shifts focus to new EV

2024-07-26
industriesnews.net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (self-driving vehicle AI) but does not describe any realized harm or incident caused by these systems. The delay and shift in production focus are responses to regulatory and technical challenges, not an AI incident or hazard causing or plausibly leading to harm. The article provides updates on industry developments, regulatory environment, and investment, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

GM's Cruise Robotaxis Return to Dallas Streets for Testing, Driverless Taxi Service To Resume 'Later'

2024-07-24
Dallas Innovates
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Cruise's autonomous driving system) whose previous malfunction indirectly caused severe injury to a pedestrian, qualifying as an AI Incident. The current resumption of testing with safety drivers is a complementary development related to that incident, focusing on safety validation and regulatory compliance. Since the article mainly reports on the testing resumption and future plans without new harm occurring, it is best classified as Complementary Information updating on a prior AI Incident and ongoing risk management.

G.M. Will Restart Cruise Taxi Operations

2024-07-23
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly references a prior AI Incident where a Cruise autonomous vehicle hit and dragged a pedestrian, causing harm and regulatory consequences. This qualifies as an AI Incident due to direct harm to a person caused by the AI system's malfunction or failure. The current news about resuming test operations with safety drivers and suspending fully driverless service is a response to that incident, not a new incident or hazard. Therefore, this article is best classified as Complementary Information, as it provides an update on the company's response and ongoing testing following a known AI Incident.

GM Pulls the Plug on Driverless, Brake Pedal-less, Steering Wheel-less Robotaxi

2024-07-23
The Daily Upside
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a self-driving taxi failing to brake and hitting a pedestrian, which is a direct harm to a person caused by the AI system's malfunction. This meets the criteria for an AI Incident as the AI system's malfunction directly led to injury. The suspension of the Origin robotaxi production is a consequence of this incident and regulatory issues, but the core harm has already occurred. Therefore, the event is classified as an AI Incident rather than a hazard or complementary information.

GM's Cruise targets resumption of driverless rides this year

2024-07-26
Automotive News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Cruise's autonomous driving technology) whose malfunction or operational failure directly caused injury to a pedestrian, a clear harm to a person. This meets the definition of an AI Incident because the AI system's use led directly to harm (a). The article discusses the aftermath and remediation efforts but centers on the prior incident of harm, not just potential future risks or general updates. Therefore, the classification is AI Incident.

GM will restart Cruise taxi operations

2024-07-24
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Cruise's self-driving cars) that was involved in a serious incident causing injury to a pedestrian, which is a direct AI Incident as per the definitions. The harm has already occurred, and the article focuses on the company's response and resumption of testing under controlled conditions. Since the harm event is past and the article does not describe new harm or plausible future harm beyond the ongoing cautious testing, the classification is AI Incident based on the prior incident and its consequences described here.

General Motors is scaling back its Google-rivaling AI product

2024-07-23
TheStreet
Why's our monitor labelling this an incident or hazard?
The Cruise robotaxi crash that hit a pedestrian in San Francisco in October 2023 is a clear AI Incident as it involved an AI system's use leading directly to harm to a person and regulatory consequences. The article's mention of the NHTSA investigation into Waymo's autonomous driving systems due to collisions and traffic violations suggests ongoing safety concerns but does not confirm new harm, so it is complementary information. The vandalism against a Waymo vehicle is harm caused by humans to property but not linked to AI system malfunction or use, so it is not an AI Incident. The GM decision to discontinue the Cruise Origin vehicle is a strategic business move without direct or plausible harm, thus unrelated to incidents or hazards. Therefore, the main classification is AI Incident due to the Cruise crash and its consequences, with other elements providing complementary context.

GM's Cruise Reportedly Plans to Resume Fully Driverless Service This Year (NetEase Mobile)

2024-07-25
m.163.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Cruise's autonomous driving technology) whose prior use led to safety incidents causing suspension of service, indicating past AI-related harm. The planned resumption and ongoing testing relate to the use and development of the AI system with a focus on safety. Since the article describes past harm caused by the AI system's use and ongoing efforts to mitigate risks, this qualifies as an AI Incident due to the realized safety incidents linked to the AI system's operation.

Autonomous Driving Unicorn Cruise "Revived", Robotaxi to Start Charging Fares Next Year | TMTPost Garage

2024-07-26
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of Cruise's autonomous driving technology (robotaxi) and recounts incidents where the AI-driven vehicles caused direct harm to a pedestrian and a collision with a fire truck, leading to injuries and regulatory actions. These are clear examples of AI Incidents as the AI system's use directly led to injury and regulatory consequences. The ongoing testing with safety drivers and plans to resume paid services are updates but do not negate the fact that the article primarily reports on realized harms caused by AI systems. Therefore, the event is classified as an AI Incident.

Musk Comments on GM Dropping Its Driverless Car: The Technology Isn't Good Enough

2024-07-25
MyDrivers (驱动之家)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems, specifically autonomous driving technology, which is a clear example of AI systems making real-time decisions in physical environments. The article reports on the suspension of a project due to technical and regulatory challenges but does not describe any realized harm or incident caused by the AI system. There is no mention of injury, rights violations, property damage, or other harms resulting from the AI system's development or use. The discussion about potential future deployment of Tesla's Robotaxi is forward-looking but does not indicate any current harm or incident. Therefore, this event is best classified as Complementary Information, providing context and updates on AI system development and industry perspectives without reporting an AI Incident or AI Hazard.

[Photos] Cutting Costs: GM Pauses Cruise Origin Production (Autohome)

2024-07-24
Autohome (Autohome.com.cn)
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an AI system designed for autonomous driving (L4/L5 level). The article explicitly mentions a safety incident where a pedestrian was dragged by a Cruise autonomous vehicle, leading to regulatory suspension and production halt. This is a direct harm to a person caused by the AI system's malfunction or failure, fulfilling the criteria for an AI Incident. The ongoing investigation and regulatory response further confirm the seriousness of the harm. The suspension of production and operations is a response to this incident, not merely a precautionary measure, indicating realized harm rather than potential harm.

GM's Cruise Reportedly Plans to Resume Fully Driverless Service This Year, with Improved Safety Performance as the Key (China.com)

2024-07-26
China.com Tech
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system—Cruise's autonomous driving technology. The previous suspension was due to safety incidents, indicating prior AI-related harm or risk. The planned resumption with enhanced safety measures and ongoing supervised testing suggests a focus on mitigating past harms and preventing future incidents. However, no new harm is reported as occurring yet, and the article mainly discusses future plans and safety improvements. Therefore, this constitutes Complementary Information, providing an update on responses to prior AI incidents and ongoing risk management, rather than a new AI Incident or AI Hazard.

GM's Cruise Plans to Restart Driverless Service, with Paid Rides Expected in 2025 - Auto Channel - Hexun.com

2024-07-26
Hexun.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system, specifically an autonomous driving system, which is a clear example of an AI system. The previous suspension of the service was due to a safety incident, indicating that the AI system's malfunction or use had led to harm or risk. However, this article primarily reports on the planned resumption of the service and the current regulatory status, without reporting any new harm or incident occurring at this time. Therefore, it does not describe a new AI Incident or an immediate AI Hazard but rather provides an update on the status and plans of an AI system deployment. This fits the definition of Complementary Information, as it provides context and updates related to a prior AI Incident and ongoing AI system deployment.

GM Indefinitely Suspends Production of the Cruise Origin Autonomous Vehicle

2024-07-24
China Finance Online
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an AI system with advanced autonomous driving capabilities. The article describes a decision to suspend production due to regulatory uncertainty, which is a development-related event. There is no indication that the AI system has caused any injury, rights violation, or other harm, nor that it poses an imminent risk of harm. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on the AI ecosystem, company strategy, and regulatory environment related to autonomous vehicles.

GM Pauses the Cruise Origin Autonomous Vehicle Project to Focus on Next-Generation Chevrolet Bolt Production

2024-07-24
Huanqiu.com
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an AI system (an autonomous vehicle without human controls) whose development and use have been directly linked to safety concerns and an actual pedestrian accident, which constitutes harm to a person. The suspension of the project and operations is a response to these harms and regulatory scrutiny. The article describes realized harm and the AI system's role in it, meeting the criteria for an AI Incident rather than a hazard or complementary information. The focus is on the harm caused and the operational consequences, not just potential risks or responses.

Autonomous Driving Unicorn Cruise "Revived", Robotaxi to Start Charging Fares Next Year | TMTPost Garage (NetEase Mobile)

2024-07-26
m.163.com
Why's our monitor labelling this an incident or hazard?
Cruise's autonomous driving system is explicitly mentioned and is central to the events described. The incidents include a serious pedestrian injury and a collision with a fire truck, both directly linked to the AI system's operation. The harm to the pedestrian (injury requiring hospitalization) and property (fire truck collision) meets the criteria for harm under the AI Incident definition. The regulatory response and operational suspension further confirm the severity of the incidents. Hence, this is an AI Incident rather than a hazard or complementary information.

GM Indefinitely Shelves the Origin Steering-Wheel-Free Autonomous Vehicle Project

2024-07-25
Sina Auto (新浪车行天下)
Why's our monitor labelling this an incident or hazard?
The Origin vehicle is an AI system (autonomous driving without manual controls). The event involves the development and use of this AI system. However, the article does not report a new harm or incident caused by the AI system but rather the indefinite postponement of the project due to regulatory and technical challenges. It also references past incidents and investigations but does not present new harm or imminent risk. The focus is on strategic decisions, regulatory environment, and industry developments, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Autonomous Driving Unicorn Cruise "Revived", Robotaxi to Start Charging Fares Next Year | TMTPost Garage

2024-07-26
Sina Finance
Why's our monitor labelling this an incident or hazard?
Cruise's autonomous driving system is an AI system as it enables fully autonomous vehicle operation. The article details two accidents: one involving a collision with a fire truck and another where a pedestrian was dragged and injured severely. These are direct harms to persons caused by the AI system's operation or malfunction. The regulatory response and operational suspension further confirm the severity of the incidents. Hence, this event meets the criteria for an AI Incident due to direct harm to health caused by the AI system's use and malfunction.

[Photos] $300,000 to $400,000? GM Pauses Cruise Origin Autonomous Vehicle Production (Autohome)

2024-07-24
m.autohome.com.cn
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an autonomous vehicle with advanced AI driving capabilities (L4/L5). The article reports a pedestrian accident caused by a Cruise autonomous vehicle, leading to a regulatory suspension of its license and production halt. This is a direct harm to a person caused by the AI system's malfunction or failure, fulfilling the criteria for an AI Incident. The ongoing investigation and operational suspensions further confirm the incident's seriousness. The event is not merely a hazard or complementary information but a realized harm linked to AI use.

GM Pauses the Cruise Origin Autonomous Vehicle Project to Focus on Next-Generation Chevrolet Bolt Production - cnBeta.COM Mobile

2024-07-24
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The Cruise Origin is an AI system as it is an autonomous vehicle without a driver, steering wheel, or pedals, relying on AI for navigation and control. The article mentions a pedestrian accident caused by a Cruise vehicle, which is a direct harm to a person resulting from the AI system's use or malfunction. Additionally, regulatory suspension of the autonomous vehicle license and voluntary operational pauses indicate recognized safety issues linked to the AI system. Therefore, this event qualifies as an AI Incident due to realized harm (pedestrian injury) and operational impacts stemming from the AI system's malfunction or failure.

GM Indefinitely Shelves the Origin Steering-Wheel-Free Autonomous Vehicle Project - Tech & Transportation - cnBeta.COM

2024-07-25
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in autonomous vehicles (Cruise's Origin and other autonomous taxis) and reports a past incident where a Cruise autonomous taxi struck and dragged a pedestrian, causing harm and triggering investigations and license suspension. This meets the definition of an AI Incident due to injury to a person caused directly or indirectly by the AI system's use. The article also covers the indefinite postponement of the Origin project due to regulatory and technical challenges, which is complementary information about the AI ecosystem and company strategy. Since the incident is the primary realized harm, the classification prioritizes AI Incident over hazard or complementary information.