ChatGPT Amplifies Regional and Racial Stereotypes in Brazil, Study Finds


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A University of Oxford study, based on 20.3 million queries, found that ChatGPT reproduces and amplifies harmful regional, racial, and cultural stereotypes in Brazil. The system labeled wealthier regions as more intelligent and productive while associating poorer regions, especially the North and Northeast, with negative traits, perpetuating discrimination.[AI generated]
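The study's large-scale probing approach can be illustrated with a minimal, hypothetical sketch (this is not the researchers' actual code; the regions, prompt templates, and trait list below are illustrative assumptions): templated prompts are generated per Brazilian region, and the model's answers are then scanned for negative trait words to see whether they cluster on particular regions.

```python
# Hypothetical sketch of a stereotype audit: build templated prompts per
# region, then tally how often model answers contain negative trait words.
# NOTE: the regions, templates, and trait list are illustrative assumptions.
from collections import Counter

REGIONS = ["Norte", "Nordeste", "Centro-Oeste", "Sudeste", "Sul"]
TEMPLATES = [
    "Describe the people of the {region} region of Brazil in one word.",
    "How productive are workers in the {region} region of Brazil?",
]
NEGATIVE_TRAITS = {"lazy", "ignorant", "unproductive", "poor"}

def build_prompts():
    """Return (region, prompt) pairs for every region/template combination."""
    return [(r, t.format(region=r)) for r in REGIONS for t in TEMPLATES]

def audit(responses):
    """Count answers per region containing a negative trait word.

    responses: list of (region, answer_text) pairs, e.g. as collected
    from a chat model for the prompts above.
    """
    neg = Counter()
    for region, answer in responses:
        words = {w.strip(".,!?").lower() for w in answer.split()}
        if words & NEGATIVE_TRAITS:
            neg[region] += 1
    return neg

# Toy demonstration with mock answers (no API calls made here):
mock = [
    ("Nordeste", "They are often called ignorant."),
    ("Sudeste", "Hardworking and educated."),
]
print(audit(mock))  # Counter({'Nordeste': 1})
```

At the scale reported (millions of queries), the same counting logic would simply be applied to real model responses; the point of the sketch is only the prompt-templating and per-region tallying structure.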

Why's our monitor labelling this an incident or hazard?

The ChatGPT system, an AI language model, is explicitly involved as the source of biased and prejudiced outputs. These outputs have directly led to harm by perpetuating stereotypes and racial discrimination, which constitute harm to communities and violations of human rights. The study documents these harms as occurring through the AI's responses, thus qualifying this as an AI Incident rather than a hazard or complementary information. The harm is realized and documented, not merely potential or speculative.[AI generated]
AI principles
Fairness, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers, General public

Harm types
Psychological, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard


Prejudice against the Northeast and racism: ChatGPT reproduces stereotypes about Brazil

2026-02-04
Olhar Digital

ChatGPT reinforces prejudices against the North and Northeast regions, study says

2026-02-05
TecMundo
Why's our monitor labelling this an incident or hazard?
The study explicitly involves ChatGPT, an AI system, and demonstrates that its outputs have led to harm by reinforcing harmful stereotypes and prejudices against specific regions and populations. This constitutes a violation of human rights and harm to communities as defined in the framework. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's biased behavior.

ChatGPT associates the Southeast with greater intelligence and demotes the North and Northeast, research says

2026-02-04
Diário do Centro do Mundo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs have directly led to harm in the form of reinforcing harmful stereotypes and biases related to race, region, and culture. These biases can be considered violations of human rights and cause harm to communities by perpetuating discrimination and social inequality. Since the harm is realized through the AI system's responses and is documented by the research, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

ChatGPT reproduces regional prejudices and classifies Northeasterners as 'ignorant'

2026-02-04
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use has led to the reproduction and amplification of harmful regional stereotypes and prejudices, which constitutes harm to communities (a form of social harm). The AI's outputs reflect biased training data and lack contextual understanding, resulting in discriminatory classifications. This meets the criteria for an AI Incident because the AI system's use has directly contributed to social harm by spreading misinformation and reinforcing negative stereotypes, which can affect public perception and social cohesion. The article does not merely discuss potential risks or responses but documents realized harm through biased AI outputs.

ChatGPT reproduces regional prejudices and classifies Northeasterners as 'ignorant'

2026-02-04
ESHOJE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use has directly led to harm by reproducing and amplifying prejudices and negative stereotypes about specific regions and populations. This constitutes harm to communities and a violation of human rights (dignity, equality). The article describes realized harm through biased outputs that influence public opinion and social discourse. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is occurring and linked to the AI system's outputs.