Austrian Employment Chatbot Reproduces Gender Bias


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Austrian Public Employment Service’s ChatGPT-powered chatbot, Berufsinfomat, has been criticized for reproducing gender stereotypes by recommending science programs to men and humanities to women. Developed by AMS and AI firm Goodguys to assist job seekers, the tool has sparked user complaints and prompted efforts to correct its discriminatory outputs.[AI generated]

Why's our monitor labelling this an incident or hazard?

The chatbot is explicitly identified as an AI system that uses ChatGPT to generate personalized employment advice. Its use has directly led to biased and discriminatory recommendations based on gender, a violation of fundamental and labor rights. The harm is realized: users receive prejudiced guidance that could affect their career choices and opportunities. The AMS acknowledges the issue and is working to mitigate it, but the biased outputs have already occurred. This event therefore fits the definition of an AI Incident due to realized harm from the AI system's outputs.[AI generated]
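The rationale above amounts to a three-way decision rule: an event involving an AI system is an AI Incident if harm has already been realized, an AI Hazard if harm is plausible but not yet realized, and Complementary Information otherwise. Below is a minimal Python sketch of that rule; the Event fields and classify function are illustrative assumptions, not the monitor's actual implementation.

from dataclasses import dataclass

@dataclass
class Event:
    involves_ai_system: bool  # is an AI system explicitly involved?
    harm_realized: bool       # has harm from its use already occurred?
    harm_plausible: bool      # is specific future harm plausible?

def classify(event: Event) -> str:
    # Hypothetical sketch of the decision rule described above;
    # not the monitor's actual code.
    if not event.involves_ai_system:
        return "Out of scope"
    if event.harm_realized:
        return "AI Incident"               # realized harm from the system's use
    if event.harm_plausible:
        return "AI Hazard"                 # plausible but not yet realized harm
    return "Complementary Information"

# Berufsinfomat: an AI system whose biased outputs already reached users.
print(classify(Event(involves_ai_system=True,
                     harm_realized=True,
                     harm_plausible=True)))  # -> AI Incident

Most of the article classifications below reach "AI Incident" by this rule, while the Kronen Zeitung piece, which describes the bias problem without documenting realized harm, lands in "Complementary Information".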
AI principles
Fairness; Respect of human rights; Accountability; Transparency & explainability; Robustness & digital security; Human wellbeing; Safety

Industries
Government, security, and defence; Education and training

Affected stakeholders
Consumers; Women

Harm types
Human or fundamental rights; Reputational; Psychological; Economic/Property

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots; Organisation/recommenders; Content generation


Articles about this incident or hazard


Austrian employment chatbot reproduces biases: engineering for men, hospitality for women

2024-01-04
infobae
Why's our monitor labelling this an incident or hazard?
The chatbot is explicitly identified as an AI system that uses ChatGPT to generate personalized employment advice. Its use has directly led to biased and discriminatory recommendations based on gender, a violation of fundamental and labor rights. The harm is realized: users receive prejudiced guidance that could affect their career choices and opportunities. The AMS acknowledges the issue and is working to mitigate it, but the biased outputs have already occurred. This event therefore fits the definition of an AI Incident due to realized harm from the AI system's outputs.

An Austrian employment chatbot reproduces biases: engineering for men, hospitality for women

2024-01-04
Minuto30.com
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system using ChatGPT to generate personalized employment advice. Its outputs have directly led to discriminatory recommendations based on gender, which is a violation of fundamental rights related to equality and non-discrimination. This constitutes an AI Incident because the AI system's use has directly caused harm by perpetuating gender bias in employment guidance. The incident is confirmed by user criticisms and official acknowledgment from AMS, indicating the harm is occurring and not just potential.

Austrian employment chatbot draws criticism after producing biased job guidance

2024-01-04
Portafolio.co
Why's our monitor labelling this an incident or hazard?
The chatbot is explicitly identified as an AI system based on ChatGPT. Its use has directly led to biased recommendations based on gender, a form of discrimination violating labor rights and possibly human rights. The harm is realized because users have already received these biased outputs, and the platform has been publicly criticized for them. This therefore qualifies as an AI Incident due to the direct harm caused by the AI system's outputs reproducing harmful biases.

Austria's employment service chatbot criticized for biased recommendations: engineering for men, hospitality for women

2024-01-04
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system using ChatGPT to provide personalized employment and training recommendations. The event reports that the system reproduces gender biases in its outputs, which is a form of discrimination violating human rights and labor rights. The harm is actual and ongoing, as users have received biased recommendations. The AI system's use is directly linked to this harm. Although the AMS is working to improve the system, the current state of the chatbot has caused harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

An Austrian employment chatbot reproduces biases: engineering for men, hospitality for women

2024-01-04
Forbes México
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system using ChatGPT to generate personalized employment advice. The system's outputs have directly led to discriminatory recommendations based on gender, which is a violation of human rights and labor rights protections. The harm is actual and ongoing, as users have reported and criticized the biased responses. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's biased outputs affecting users' rights and opportunities.

Employment chatbot reproduces biases: engineering for men, hospitality for women

2024-01-04
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The chatbot is explicitly identified as an AI system based on ChatGPT. Its use has directly led to discriminatory recommendations based on gender, a violation of human rights and labor rights. The harm is occurring as users receive biased advice that perpetuates gender stereotypes, meeting the criteria for an AI Incident under violations of rights. The article also notes ongoing efforts to mitigate these biases, but the harm is already present.

Austria struggles with its chatbot

2024-01-05
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The chatbot explicitly uses an AI system (ChatGPT) to generate personalized recommendations. The event reports realized harm in the form of gender bias in recommendations, which constitutes a rights violation and a social harm. The AI system's outputs directly led to discriminatory advice, fulfilling the criteria for an AI Incident. The organization's acknowledgment and efforts to improve the system do not negate the occurrence of harm. Hence, the classification is AI Incident.

This is how sexist the new AMS "Berufsinfomat" is

2024-01-06
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AMS uses ChatGPT, an AI system, for its job-information chatbot. The mention of sexism indicates a bias problem, a known issue with AI systems that can lead to harm. However, the article does not describe any actual harm occurring, such as discriminatory outcomes or violations of rights resulting from the chatbot's use, nor does it indicate a plausible future harm scenario beyond the general concern. It thus does not meet the threshold for an AI Incident or AI Hazard. Instead, it provides important complementary information about the AI system's limitations and societal concerns, fitting the Complementary Information category.

Austrian employment agency launches questionable AI chatbot

2024-01-05
heise online
Why's our monitor labelling this an incident or hazard?
An AI system (the chatbot based on ChatGPT) is explicitly involved. Its use has directly led to harm in the form of biased and stereotypical career advice, which can negatively affect individuals' career choices and constitutes a violation of rights related to non-discrimination. The harm is realized, not just potential, as the biased recommendations are actively given to users. The article also highlights data privacy concerns and technical issues, reinforcing the problematic nature of the system's deployment. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Austria: "Berufsinfomat" under fire

2024-01-05
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the system as using ChatGPT together with retrieval-augmented generation (RAG) for career advice. The system's outputs have directly led to discriminatory recommendations based on gender, a violation of labor and fundamental rights. The harm is realized and documented, not merely potential. Although the AMS is attempting to mitigate bias, the biased advice has already occurred. This event therefore meets the criteria for an AI Incident due to the AI system's use causing harm through gender bias in career guidance.
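For context on the architecture this article mentions: retrieval-augmented generation (RAG) retrieves documents relevant to the user's query and prepends them to the language-model prompt, so answers are grounded in a curated corpus. The following is a generic, illustrative sketch; Berufsinfomat's actual corpus, models, and pipeline are not public, and the hash-based embedding below merely stands in for a real embedding model.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in embedding (hashed bag of words); a real pipeline
    # would call an embedding model here.
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

CORPUS = [  # invented stand-ins for AMS career-information pages
    "Apprenticeships in IT and software development",
    "Training courses in hospitality and gastronomy",
    "Retraining programmes for healthcare professions",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(CORPUS, key=lambda doc: float(q @ embed(doc)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Retrieved context is prepended so the model answers from it.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("Which careers suit someone interested in computers?"))

Retrieval grounds answers in AMS content, but it does not by itself remove biases the underlying model brings to how that content is selected and phrased, which is consistent with the behaviour reported here.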

Prejudice and questionable implementation: the AMS AI chatbot meets with mockery and scorn

2024-01-04
DER STANDARD
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AMS chatbot based on ChatGPT) whose use has directly led to harms: biased job recommendations that reinforce gender stereotypes constitute violations of rights and harm to communities, and the security vulnerabilities pose risks to property and data integrity. The chatbot's malfunction and poor implementation have caused these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The article describes realized harms, not just potential risks, and the AI system's role is pivotal in causing these harms.
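Among the implementation weaknesses such reports describe, one recurring class for hastily deployed chatbots is that users can coax out internal configuration such as the system prompt. The output filter below is a deliberately naive sketch of one mitigation; the prompt text is invented, and production guardrails are considerably more involved (input filtering, normalisation, paraphrase detection).

# Invented example; the real Berufsinfomat system prompt is not public.
SYSTEM_PROMPT = "You are Berufsinfomat, a career advisor for the AMS ..."

def guard(model_output: str) -> str:
    # Naive leak check: refuse any reply that echoes a long slice of
    # the system prompt verbatim.
    if SYSTEM_PROMPT[:40].lower() in model_output.lower():
        return "Sorry, I can't share internal configuration."
    return model_output

print(guard("My instructions say: You are Berufsinfomat, a career advisor for the AMS ..."))
# -> Sorry, I can't share internal configuration.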

These are the sexist answers the AMS's AI chatbot delivers

2024-01-04
futurezone.at
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AMS chatbot is an AI system based on ChatGPT. Its use has produced sexist, stereotypical career advice, which harms communities by perpetuating gender bias and discrimination, a violation of fundamental rights to equality and non-discrimination. The harm is realized and has been publicly criticized, indicating the AI system's outputs directly led to it. The event therefore meets the criteria for an AI Incident.
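The sexist answers futurezone.at documented were reportedly elicited by posing comparable questions while varying only the stated gender. The sketch below shows such a paired-prompt probe, written against the OpenAI chat API for illustration; the model name and question wording are assumptions, and the production Berufsinfomat is only reachable through the AMS website rather than this API.

from openai import OpenAI  # assumes the openai package and an API key in the environment

client = OpenAI()

QUESTION = ("I am a {gender} school-leaver interested in working with "
            "computers. Which training programmes would you recommend?")

def ask(gender: str) -> str:
    # Everything except the stated gender is held fixed across calls.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; the deployed model is not public
        messages=[{"role": "user", "content": QUESTION.format(gender=gender)}],
        temperature=0,  # suppress sampling noise so differences reflect the prompt
    )
    return response.choices[0].message.content

# Systematic divergence between the two answers (e.g. IT for one persona,
# hospitality for the other) is the pattern the articles describe.
for gender in ("male", "female"):
    print(f"--- {gender} ---\n{ask(gender)}\n")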

Discrimination: AMS draws scorn with new AI chatbot

2024-01-05
netzpolitik.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a ChatGPT-based large language model chatbot) used by a public institution for career advice. The system's outputs reproduce gender biases and stereotypes, which have been demonstrated by users and experts. This leads to discriminatory treatment of individuals seeking career guidance, a clear violation of labor and equal rights. The harm is direct and realized, as the biased recommendations affect users' career decisions and opportunities. The article also references prior similar issues with AMS algorithms causing discrimination. Given the direct link between the AI system's use and realized harm to individuals' rights, this is classified as an AI Incident.