AI Chatbots Provide Lower-Quality Responses to Iranian Users


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

MIT research finds that advanced AI language models, including GPT-4, Claude 3 Opus, and Llama 3, deliver less accurate, lower-quality, and sometimes disparaging responses to users with lower English proficiency or less formal education, and to users outside the US, notably Iranians, pointing to systemic bias and informational inequality.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (large language models) whose use has directly caused harm by providing biased, less accurate, and sometimes offensive responses to certain user groups, including those from Iran. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realized and documented through the research findings, not merely potential. Therefore, the classification is AI Incident.[AI generated]
AI principles
Fairness; Respect of human rights

Industries
Consumer services; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


AI gives weaker responses to Iranian users

2026-02-21
Asr Iran, analytical news site of Iranians worldwide, www.asriran.com

Why does AI give weaker responses to Iranian users?

2026-02-21
Khabar Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI language models whose outputs have directly led to harm in the form of biased, less accurate, and disparaging responses to certain user groups, including Iranians. This is a clear case of harm to communities and violation of rights due to AI system behavior. The AI systems' biased outputs and refusal to answer certain questions for specific demographics demonstrate direct harm caused by the AI's use. Therefore, this qualifies as an AI Incident under the OECD framework.

MIT study: AI chatbots give weaker responses to Iranian users

2026-02-21
Khabar Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models like GPT-4, Claude 3 Opus, Llama 3) whose use has directly led to harm in the form of discriminatory and lower-quality responses to certain user groups, including Iranians. This constitutes a violation of rights and harm to communities through informational inequality and biased treatment. Since the harm is occurring and documented, this qualifies as an AI Incident rather than a hazard or complementary information. The study's findings demonstrate realized harm rather than just potential risk or a response to past incidents.

Researchers: ChatGPT gives lower-quality responses to Iranians

2026-02-21
Alef news and analysis community
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (advanced language models) whose use has directly led to harm in the form of discriminatory and lower-quality responses to specific user groups, notably Iranian users. This results in informational harm and inequality, which falls under harm to communities and violation of rights. The research documents realized harm, not just potential, and highlights systemic bias in AI outputs affecting users' access to knowledge. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Chatbots' hidden discrimination: AI gives weaker responses to Iranian users

2026-02-21
ILNA news agency
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) whose use has directly led to discriminatory and biased responses that harm users by providing them with inferior information and disrespectful treatment. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The article describes actual harm occurring due to the AI systems' outputs, not just potential harm or general commentary, so it is not a hazard or complementary information. Therefore, the classification is AI Incident.

AI's discriminatory and demeaning treatment of Iranian users!

2026-02-21
Qods Online | news and analysis portal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose use has directly led to harm in the form of discriminatory and disparaging treatment of certain user groups, violating principles of equal access to information and potentially human rights related to non-discrimination. The harm is realized and documented through systematic bias and disparaging language, fulfilling the criteria for an AI Incident. The article does not describe potential or future harm but actual harm occurring due to the AI systems' outputs.