AI Chatbots Exhibit Systematic Bias in Judging Users, Study Finds

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study by the Hebrew University of Jerusalem reveals that AI chatbots like ChatGPT and Gemini systematically judge users, forming psychological profiles and trust assessments. Unlike humans, these AI systems apply rigid, fragmented criteria, leading to amplified and consistent demographic biases in decisions such as lending and hiring, raising concerns about discrimination.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (ChatGPT, Gemini) making decisions that directly impact people in areas like finance and trust, with documented biases leading to differential treatment based on demographics. This constitutes a violation of rights and harm to individuals/groups due to biased AI judgments. Since the harm is realized and linked to AI system use, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.[AI generated]
AI principles
Fairness, Respect of human rights

Industries
Financial and insurance services, Business processes and support services

Affected stakeholders
Consumers

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

Business function:
Human resource management

AI system task:
Interaction support/chatbots


Articles about this incident or hazard

AI chatbots aren't just answering our queries - they're judging us

2026-04-14
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems making judgments that can lead to biased outcomes affecting individuals' access to resources or opportunities, which could violate human rights or harm communities. Since the article discusses the study's findings and the plausible risk of amplified bias without reporting a concrete harm event, this constitutes an AI Hazard. The use of AI systems in decision-making, combined with the identified risk of bias, could plausibly lead to incidents of harm if the systems are deployed without mitigation.

AI chatbots aren't just answering our queries - they're judging us

2026-04-14
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Gemini) making decisions that directly impact people in areas like finance and trust, with documented biases leading to differential treatment based on demographics. This constitutes a violation of rights and harm to individuals/groups due to biased AI judgments. Since the harm is realized and linked to AI system use, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

Research says AI chatbots judge you, and it doesn't always end well

2026-04-14
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (chatbots/large language models) that evaluate users and make decisions affecting real-world outcomes such as lending money and hiring. The research reveals systematic biases in these AI judgments based on protected demographic characteristics, which constitutes a violation of human and labour rights. Since, according to the research findings, these harms are occurring or have occurred, this qualifies as an AI Incident rather than a hazard or complementary information. The AI systems' use is directly linked to discriminatory outcomes, fulfilling the criteria for an AI Incident.

Is AI chatbot secretly judging you? Hidden truth behind your queries

2026-04-14
The News International
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has directly led to systematic biases and unfair judgments, constituting harm to individuals and communities through discriminatory outcomes. The research highlights that AI judgments are more rigid than human ones and can amplify biases, indicating realized harm rather than merely potential risk. This therefore qualifies as an AI Incident due to violations of rights and harm to communities caused by AI system use.

Large Language Models Don't Just Analyze People, They Judge Them

2026-04-14
Sci.News: Breaking Science News
Why's our monitor labelling this an incident or hazard?
The article discusses the development and use of AI systems (LLMs) that exhibit biased judgments, which could plausibly lead to harms such as discrimination or violations of rights in contexts like creditworthiness assessment or hiring. However, it does not document a specific incident where such harm has materialized. Therefore, the event fits the definition of an AI Hazard, as it highlights credible risks of future harm stemming from AI biases in decision-making, but no direct or indirect harm is reported as having occurred yet.

Study finds AI systems judge people, create kind of "trust"

2026-04-14
english.news.cn
Why's our monitor labelling this an incident or hazard?
The article discusses research findings on AI decision-making and bias, which enhances understanding of AI systems and their societal implications. It does not describe any specific AI incident or hazard causing or plausibly leading to harm. Instead, it provides context and insight into AI behavior, which fits the definition of Complementary Information as it supports ongoing assessment and understanding of AI impacts without reporting a new harm or risk event.

The Hidden Logic Behind AI's Judgments of People

2026-04-15
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in real decision-making contexts (e.g., lending, hiring, medical decisions) that systematically produce biased outcomes based on demographic traits such as age, religion, and gender. These biases can lead to unfair treatment and discrimination, violating human rights and harming communities. The AI systems' role is pivotal, as their judgments directly influence these outcomes. Hence, this qualifies as an AI Incident due to realized harm stemming from AI use.