
A University of Oxford study, based on 20.3 million queries, found that ChatGPT reproduces and amplifies harmful regional, racial, and cultural stereotypes about Brazil. The AI system characterized wealthier regions as more intelligent and productive, while associating poorer regions, particularly in the North and Northeast, with negative traits, thereby perpetuating discrimination.[AI generated]
Why is our monitor labelling this an incident or hazard?
ChatGPT, an AI language model, is explicitly implicated as the source of the biased and prejudiced outputs. Those outputs have caused documented harm by perpetuating stereotypes and racial discrimination, harming communities and infringing human rights. Because the study documents these harms as actually occurring through the AI's responses, rather than as merely potential or speculative, this qualifies as an AI incident rather than a hazard or complementary information.[AI generated]
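
To make the incident/hazard distinction concrete, the following is a minimal Python sketch of the triage rule described above. The Report fields, enum names, and classify function are illustrative assumptions for exposition, not the monitor's actual schema or code.

    # Minimal sketch of the triage logic described in the rationale above.
    # All names here are illustrative assumptions, not the AIM's real schema.
    from dataclasses import dataclass
    from enum import Enum, auto


    class Classification(Enum):
        AI_INCIDENT = auto()         # harm has occurred and is documented
        AI_HAZARD = auto()           # harm is plausible but not yet realized
        COMPLEMENTARY_INFO = auto()  # related context, no direct AI role


    @dataclass
    class Report:
        ai_system_involved: bool  # is an AI system the source of the outputs?
        harm_realized: bool       # is the harm documented, not speculative?


    def classify(report: Report) -> Classification:
        """Apply the incident-versus-hazard rule sketched in the rationale."""
        if not report.ai_system_involved:
            return Classification.COMPLEMENTARY_INFO
        if report.harm_realized:
            return Classification.AI_INCIDENT
        return Classification.AI_HAZARD


    # The Oxford study documents realized harm from ChatGPT's outputs,
    # so under this rule the event is classified as an AI incident:
    report = Report(ai_system_involved=True, harm_realized=True)
    assert classify(report) is Classification.AI_INCIDENT

Under this rule, the decisive test is whether the harm is documented as having occurred; had the study only argued that such outputs could cause discrimination, the same report would be triaged as a hazard instead.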