AI Image Generator Alters Asian Student's Race in Headshot, Exposing Racial Bias

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Rona Wang, an Asian MIT student, used Playground AI to create a professional headshot, but the AI altered her appearance to make her look white, with lighter skin and blue eyes. The incident highlights persistent racial bias in AI image generators, sparking public concern over discrimination and misrepresentation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (image generator) was used to improve a headshot but produced outputs that lightened the subject's skin and changed her race, demonstrating racial bias. This bias in AI-generated images can perpetuate harmful stereotypes and discrimination, constituting a violation of rights. The harm is realized as the AI's outputs are unusable and offensive to the user, reflecting a direct impact on the individual's dignity and rights. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's biased outputs.[AI generated]
AI principles
Fairness; Respect of human rights; Transparency & explainability; Accountability; Human wellbeing; Safety

Industries
Consumer services; Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

An Asian Woman Asked AI to Improve Her Headshot and It Turned Her White

2023-08-01
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system (image generator) was used to improve a headshot but produced outputs that lightened the subject's skin and changed her race, demonstrating racial bias. This bias in AI-generated images can perpetuate harmful stereotypes and discrimination, constituting a violation of rights. The harm is realized as the AI's outputs are unusable and offensive to the user, reflecting a direct impact on the individual's dignity and rights. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's biased outputs.
Asian MIT grad asks AI to make her photo more 'professional,' gets turned into white woman

2023-07-31
Yahoo News
Why's our monitor labelling this an incident or hazard?
An AI system (Playground AI) was used to generate a professional photo, but it returned an image with altered racial features, indicating racial bias. This constitutes a violation of human rights or a breach of obligations intended to protect fundamental rights, specifically related to racial discrimination. The harm is realized as the AI system's use led to discriminatory and harmful representation of the user. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing harm through racial bias in its outputs.
An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.

2023-08-01
Business Insider
Why's our monitor labelling this an incident or hazard?
The AI image generator (Playground AI) was used to create a professional headshot but altered the subject's race to appear more white, demonstrating racial bias. This bias is a direct consequence of the AI system's training and operation, leading to harm in terms of misrepresentation and reinforcing racial stereotypes. The event involves the use of an AI system and the harm is realized, not just potential. Hence, it meets the criteria for an AI Incident due to violation of rights and harm to communities caused by the AI system's biased outputs.
'Tech racism is real': Asian MIT student's professional headshot turns Caucasian with AI tool

2023-08-03
InqPOP!
Why's our monitor labelling this an incident or hazard?
The AI image generator is an AI system that modifies images based on prompts. The racial bias in the generated images is a direct consequence of the AI's training data and model behavior, which leads to discriminatory outputs. This constitutes a violation of rights and harm to communities by reinforcing racial stereotypes and potentially influencing decisions in professional settings. The article reports realized harm in the form of biased representation and concerns about broader societal impacts, qualifying this as an AI Incident under the framework.
Asian Tells AI To Make Her Photo 'More Professional', Gets Turned Into White Woman

2023-08-03
Mashable SEA
Why's our monitor labelling this an incident or hazard?
An AI system (image generator) was used to modify a photo, resulting in racial bias that changed the subject's ethnicity in a way that is offensive and discriminatory. This constitutes a violation of rights and harm to the individual and community by perpetuating racial stereotyping. The harm has occurred as the AI system's output caused this offensive misrepresentation. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and output.
Even For AI, White Is Right: Photo enhancing bot turns Asian woman white when asked to beautify portrait

2023-08-02
Firstpost
Why's our monitor labelling this an incident or hazard?
The AI system (image generator) is explicitly involved and its use led to outputs that altered racial features, which is a form of racial bias. This constitutes a violation of rights and harm to communities, as it misrepresents and marginalizes racial identity. The harm is realized and not just potential, as the user experienced the biased output and public discussion ensued. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's biased behavior.
AI photo app branded 'racist' after 'turning Asian woman's face white in selfie'

2023-08-02
The US Sun
Why's our monitor labelling this an incident or hazard?
The AI system (Playground AI) is explicitly involved as it generated the altered images. The harm arises from the AI's biased outputs that effectively whiten the user's face, which is a form of racial bias and discrimination. This constitutes a violation of rights related to racial equality and dignity, fitting the definition of an AI Incident. The event reports actual harm experienced by the user and public concern about racial bias in AI, not just a potential or hypothetical risk. Therefore, this is classified as an AI Incident.
Asian MIT Student Asks AI for a Pro Headshot, Gets Turned White

2023-08-03
PetaPixel
Why's our monitor labelling this an incident or hazard?
The AI system (Playground AI) was used to generate a professional headshot but produced a racially biased output that misrepresented the user's ethnicity, reflecting systemic bias in AI models. This bias can cause harm by perpetuating exclusion and misrepresentation of minority groups, which aligns with violations of rights and harm to communities as defined. The harm is realized as the user cannot obtain a usable image and the incident catalyzes broader concerns about AI bias. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
This Asian MIT Graduate Asked AI To Make Her Headshot Better, It Turned Her White

2023-08-02
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The AI system's use directly led to racial bias in the generated images, which is a recognized form of harm affecting individuals and communities by perpetuating unfair stereotypes and discrimination. The event involves realized harm from the AI's outputs, not just potential harm, and the article discusses the implications for fairness and inclusivity, especially in contexts like hiring. Hence, it meets the criteria for an AI Incident due to violation of rights and harm to communities caused by the AI system's biased behavior.
Student asked AI to turn her photo into a professional headshot and it changed her race

2023-08-02
JOE.co.uk
Why's our monitor labelling this an incident or hazard?
The AI system (Playground AI) was used to generate an image, and its output changed the user's race, demonstrating racial bias embedded in the AI model. This constitutes a violation of rights and harm to communities, as it misrepresents the user's identity and perpetuates racial stereotypes. The event involves the use of an AI system leading directly to harm (racial bias and misrepresentation), qualifying it as an AI Incident. The article discusses the incident and its societal implications, not just potential or future harm, so it is not a hazard or complementary information.
AI will open the door to a better future, but it cannot be left unchecked; rules will have to be made

2023-08-07
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The article is a general opinion and analysis piece about AI ethics, risks, and governance. It does not report any concrete AI Incident (harm caused) or AI Hazard (plausible future harm from a specific event). It mainly focuses on the need for ethical standards, transparency, and regulation to prevent potential harms. Therefore, it fits best as Complementary Information, providing context and discussion on AI's societal implications and governance rather than reporting a new incident or hazard.
Asian girl asked AI for a 'professional' photo for LinkedIn, and then...

2023-08-07
News18 India
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a professional photo, but it produced outputs that changed the user's racial appearance, indicating racial bias in the AI model. This has caused harm in terms of violation of rights related to racial discrimination and has sparked a broader conversation about AI bias and fairness. Since the harm (racial bias and its social implications) has already occurred and is directly linked to the AI system's outputs, this qualifies as an AI Incident under violations of human rights and harm to communities.
Zoom updates its policy, will use customers' data for AI training

2023-08-09
Hindustan
Why's our monitor labelling this an incident or hazard?
The article discusses Zoom's updated policy on using customer data for AI training with user consent. While it involves AI system development and use, there is no indication of any harm occurring or plausible harm that could arise from this policy update itself. The focus is on transparency, consent, and privacy assurances, which are governance and societal responses to AI data use concerns. Therefore, this is Complementary Information rather than an AI Incident or AI Hazard.
Zoom is training AI on customer data: know the reason behind it

2023-08-08
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems trained on customer data under terms of service that initially permitted training without explicit consent, raising concerns about privacy and rights violations. Although no direct harm is reported, harm through unauthorized data use is credible. The company's update to require consent and the public reaction are responses to this potential harm. Because developing and using AI systems on customer data without consent could plausibly lead to violations of rights or other harms if not properly managed, this situation constitutes an AI Hazard.
Significant percentage of employees believe AI platforms will gradually replace manual employees: report

2023-08-08
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The article focuses on survey results and expert views about the potential for AI to replace manual employees in the future. It does not describe any realized harm, malfunction, or misuse of an AI system leading to injury, rights violations, or other harms. Although job displacement is a plausible future harm, the article reports only opinions and survey data rather than a specific event or circumstance in which AI use or malfunction caused or could cause harm. It is therefore best classified as Complementary Information providing context on AI's societal impact, not an AI Hazard or Incident.
Zoom changed its terms of service to train AI on customer data

2023-08-09
दैनिक भास्कर हिंदी (Dainik Bhaskar Hindi)
Why's our monitor labelling this an incident or hazard?
The article discusses Zoom's policy change regarding the use of customer data for AI training, emphasizing consent and control by customers. There is no indication of any actual harm, violation of rights, or malfunction caused by AI systems. The event is primarily about a company's response to concerns and clarifying its AI data usage practices, which fits the definition of Complementary Information as it provides governance and societal response context without reporting a new AI Incident or AI Hazard.