Study Finds AI Bias in Autonomous Vehicle Pedestrian Detection Systems


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers from King's College London and Peking University found that AI pedestrian detection systems used in autonomous vehicles are significantly less accurate at identifying children and dark-skinned individuals, especially in low-light conditions. This bias increases safety risks for these groups, highlighting a critical flaw in current AI training data and system design.[AI generated]
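The core finding is a gap in detection rates between demographic groups. As a rough illustration of the kind of metric such a study reports, the sketch below computes per-group miss rates and their disparity. All numbers are invented for demonstration and are not the study's data.

```python
# Hypothetical illustration of a detection-rate disparity calculation.
# The group labels and percentages below are made up, not the study's data.

def miss_rate(detections):
    """Fraction of ground-truth pedestrians the detector failed to find."""
    missed = sum(1 for detected in detections if not detected)
    return missed / len(detections)

# Each boolean marks whether one ground-truth pedestrian was detected.
results = {
    "adults": [True] * 92 + [False] * 8,     # detector misses 8 of 100
    "children": [True] * 72 + [False] * 28,  # detector misses 28 of 100
}

rates = {group: miss_rate(d) for group, d in results.items()}
disparity = rates["children"] - rates["adults"]

print(f"adult miss rate: {rates['adults']:.0%}")   # 8%
print(f"child miss rate: {rates['children']:.0%}") # 28%
print(f"disparity:       {disparity:.0%}")         # 20 percentage points
```

A disparity of this kind, if present in a deployed detector, would translate directly into unequal collision risk, which is why the monitor treats it as a safety issue rather than a purely statistical one.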

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used in driverless cars for pedestrian detection, which is explicitly mentioned. The study shows that these AI systems perform worse for certain demographic groups, which could directly lead to harm (injury) due to failure to detect pedestrians accurately. This constitutes an AI Incident because the AI system's use has directly led to a safety hazard with potential or actual harm to people. The researchers' call for regulation further supports the recognition of this as a significant harm issue rather than a mere potential hazard or complementary information.[AI generated]
AI principles
Fairness; Safety; Robustness & digital security; Respect of human rights; Transparency & explainability; Accountability; Human wellbeing

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Real estate

Affected stakeholders
Children; Other

Harm types
Physical (injury); Physical (death); Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control; Research and development

AI system task
Recognition/object detection


Articles about this incident or hazard


The pedestrian detection systems in self-driving cars are less likely to detect children and people of color, study suggests

2023-08-26
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (pedestrian detection in self-driving cars) and discusses bias in their performance related to race and age. This bias could plausibly lead to harm (injury or death) if the system fails to detect certain pedestrians, which is a direct safety risk. Since the harm is potential and not reported as having occurred yet, this qualifies as an AI Hazard. The study's call for regulation further supports the recognition of plausible future harm.

Driverless Cars Can't Detect Dark-Skinned Pedestrians as Well as Others

2023-08-26
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in driverless cars for pedestrian detection, which is explicitly mentioned. The study shows that these AI systems perform worse for certain demographic groups, which could directly lead to harm (injury) due to failure to detect pedestrians accurately. This constitutes an AI Incident because the AI system's use has directly led to a safety hazard with potential or actual harm to people. The researchers' call for regulation further supports the recognition of this as a significant harm issue rather than a mere potential hazard or complementary information.

Driverless Cars Worse at Seeing Kids and Dark-Skinned People: Study

2023-08-24
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in autonomous vehicles that have biased pedestrian detection capabilities, which can cause harm to children and dark-skinned individuals by failing to detect them reliably. This is a direct link between AI system malfunction (bias in detection) and potential injury or harm to people, fulfilling the criteria for an AI Incident. The mention of accidents and protests related to driverless cars further supports the presence of harm or risk of harm. Although the software tested is not identical to that used by manufacturers, the study reasonably infers that deployed systems share similar issues, making the harm plausible and ongoing. Hence, the event is best classified as an AI Incident.

Driverless cars worse at detecting kids, dark-skinned individuals on street: Study

2023-08-24
Telangana Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in autonomous vehicles for pedestrian detection, explicitly identified as AI-powered detectors. The study documents that these systems perform worse for children and dark-skinned individuals, especially at night, which can directly increase the risk of injury or death, a harm to health and safety. This constitutes a direct link between AI system malfunction (biased detection) and harm to people, meeting the definition of an AI Incident. The article reports realized disparities and risks, not just potential future harm, so it is not merely a hazard or complementary information.

Driverless cars fail to detect dark-skinned people, kids: Study

2023-08-25
english.madhyamam.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in autonomous vehicles for pedestrian detection. The study identifies that these AI systems perform worse for children and dark-skinned individuals, which is a malfunction or bias in the AI system. Since autonomous vehicles rely on these detections to avoid accidents, such disparities can directly lead to injury or harm to these groups, fulfilling the criteria for an AI Incident under harm to persons. The harm is either occurring or highly plausible given the detection failures, and the AI system's role is pivotal in this risk.

Driverless cars worse at detecting kids, dark-skinned individuals on street: Study

2023-08-24
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in autonomous vehicles for pedestrian detection. The study identifies that these AI systems perform worse for children and dark-skinned individuals, which could directly or indirectly lead to physical harm (injury or fatalities) due to detection failures. Although the article reports research findings rather than a specific incident of harm occurring, the described fairness issues and detection disparities represent a credible risk of harm in real-world use. Therefore, this qualifies as an AI Hazard because the AI system's malfunction or bias could plausibly lead to an AI Incident involving injury or harm to people.

Children and darker-skinned pedestrians are harder for autonomous vehicles to detect

2023-08-25
MobileSyrup
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in autonomous vehicles for pedestrian detection, which is explicitly mentioned. The research shows these AI systems perform worse for children and darker-skinned pedestrians, which is a direct safety concern that could lead to injury or harm (harm to health of persons). Although no specific incident of harm is reported, the demonstrated bias and its implications for safety constitute a plausible risk of harm. Given the direct link between AI system bias and potential physical harm in vehicle operation, this qualifies as an AI Incident due to the realized bias and its safety implications. The article also references other AI harms as context but the main focus is on the pedestrian detection bias and its safety impact.

Researchers see bias in self-driving software

2023-08-23
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in autonomous vehicles for pedestrian detection, which is an AI system by definition. The research shows that these systems have a measurable bias that results in lower detection rates for dark-skinned pedestrians and children, increasing their risk of injury or harm. This is a direct harm to health and safety caused by the AI system's malfunction or biased performance. The harm is realized and documented, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm risks to vulnerable groups in public safety contexts.

Are driverless cars less effective at spotting kids and people of color?

2023-08-24
Government Technology
Why's our monitor labelling this an incident or hazard?
The study identifies a bias in AI pedestrian detection systems that could plausibly lead to harm (e.g., accidents involving children or people of color) if these systems are used in autonomous vehicles without mitigation. Since no actual harm or incident is reported, but the AI system's malfunction (biased detection) could plausibly lead to injury or harm, this qualifies as an AI Hazard rather than an AI Incident. The presence of AI systems in autonomous driving and their biased performance is explicit, and the potential for harm is credible and significant.

Autonomous cars may have a person detection bias problem with complexions | Biometric Update

2023-08-25
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deep-learning pedestrian detectors) used in autonomous cars, which are explicitly mentioned. The study shows that these AI systems have a bias problem that affects detection accuracy based on skin tone and age, which can lead to harm by increasing safety risks for certain demographic groups (harm to health and safety of persons). Although no specific incident of harm is reported, the bias in detection is a direct AI system issue that could plausibly lead to harm if unaddressed. Since the article focuses on the study's findings of existing bias and its implications for safety, this constitutes an AI Incident due to realized bias causing discriminatory outcomes and potential safety harm. The mention of company responses does not shift the classification to Complementary Information because the main focus is on the bias findings and their implications for harm.

The pedestrian detection systems in self-driving cars are less likely to detect children and people of color, study suggests

2023-08-26
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (pedestrian detection in self-driving cars) and their biased performance leading to less accurate detection of children and people of color. This bias creates a direct risk of harm (injury or worse) to these groups, fulfilling the criteria for an AI Incident as the AI system's malfunction (biased detection) has directly or indirectly led to potential harm. Although the study used open-source AI rather than proprietary systems, the systems tested are representative of those used in the industry, making the findings relevant. The harm has already materialized as an increased safety risk, which constitutes harm to persons. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Scientists: Autonomous Cars Struggle to Detect Kids, Dark-Skinned Pedestrians

2023-08-23
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (pedestrian detection in autonomous vehicles) whose development and use have led to disparities in detection accuracy based on age and skin color. This bias can directly cause harm to children and dark-skinned pedestrians by increasing the risk of accidents or injury, fulfilling the criteria for an AI Incident under harm to health and safety. The article describes realized issues with these AI systems, not just potential risks, and highlights the direct link between AI bias and pedestrian safety.

Autonomous cars worse at detecting child pedestrians, study finds

2023-08-25
The Next Web
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI pedestrian detection systems in autonomous vehicles, confirming AI system involvement. The study identifies biases due to unrepresentative training data, which is a development and use issue leading to reduced detection accuracy for vulnerable groups (children and dark-skinned pedestrians). This reduced accuracy plausibly increases the risk of accidents and harm, especially at night, thus meeting the criteria for an AI Hazard. Since no actual harm or incident is reported, it is not an AI Incident. The focus is on potential future harm and the need for regulatory responses, not on a realized incident or a response to one, so it is not Complementary Information. Hence, the classification is AI Hazard.