iTutorGroup Settles Age Discrimination Lawsuit Over AI Hiring Tool


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

iTutorGroup agreed to pay $365,000 to more than 200 job applicants after its AI hiring software automatically rejected female candidates over 55 and male candidates over 60, constituting age discrimination. The U.S. EEOC alleged that the AI system was deliberately programmed to exclude older applicants.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system used in employment decisions that directly led to harm in the form of discrimination against older applicants, which constitutes a violation of labor rights and anti-discrimination laws. The lawsuit and settlement indicate that the AI system's use caused actual harm, qualifying this as an AI Incident under the framework.[AI generated]
AI principles
Fairness; Respect of human rights; Accountability; Transparency & explainability; Human wellbeing

Industries
Education and training; Business processes and support services

Affected stakeholders
Workers; Women

Harm types
Human or fundamental rights; Economic/Property; Reputational

Severity
AI incident

Business function:
Human resource management

AI system task:
Organisation/recommenders; Goal-driven organisation


Articles about this incident or hazard


China tutoring firm settles US agency’s first bias lawsuit involving AI software

2023-08-11
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in employment decisions that directly led to harm in the form of discrimination against older applicants, which constitutes a violation of labor rights and anti-discrimination laws. The lawsuit and settlement indicate that the AI system's use caused actual harm, qualifying this as an AI Incident under the framework.

Tutoring firm settles US agency's first bias lawsuit involving AI software

2023-08-11
bdnews24.com
Why's our monitor labelling this an incident or hazard?
The AI system was used in employment decisions and was claimed to have illegally discriminated against certain age groups, which constitutes a violation of labor rights under applicable law. The lawsuit and settlement indicate that harm (discrimination) occurred due to the AI system's use. Therefore, this qualifies as an AI Incident involving violations of human rights/labor rights.

Tutoring firm settles US agency's first bias lawsuit involving AI software

2023-08-11
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used in hiring decisions that led to discriminatory outcomes against older applicants, which is a violation of labor rights under applicable law. The lawsuit and settlement confirm that harm occurred due to the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use directly caused a breach of fundamental labor rights.

Tutoring firm settles US agency's first bias lawsuit involving AI...

2023-08-10
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI-powered hiring software was used to screen out older applicants, leading to a legal claim of discrimination and a settlement payment to affected individuals. This demonstrates direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident under violations of human rights and labor rights. The involvement of the AI system in the discriminatory hiring practice is clear and central to the event.

NYC Starts Regulating Employer Use of Artificial Intelligence, Indicating a Potential Trend

2023-08-07
Lexology
Why's our monitor labelling this an incident or hazard?
The article centers on the enactment of laws and regulatory guidance addressing AI use in employment, emphasizing the need for bias audits and compliance to prevent discrimination. It does not report a concrete AI Incident where harm has occurred, nor does it describe a specific AI Hazard event where harm was narrowly avoided or is imminent. Instead, it provides complementary information about governance responses, legal frameworks, and enforcement trends related to AI in the workplace. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI's societal and legal implications without reporting a new incident or hazard.

Check Yourself Before You Wreck Yourself: New York and Other States Have Big Plans For Employer Use of AI and Other Workplace Monitoring Tools

2023-08-10
Lexology
Why's our monitor labelling this an incident or hazard?
The article focuses on legislative and regulatory efforts concerning AI use in employment and workplace monitoring. It discusses potential future restrictions and safeguards to prevent harms related to AI-driven automated decision-making and monitoring tools. Since the bill is not yet enacted and no actual harm or incident has been reported, this event does not describe an AI Incident or AI Hazard. Instead, it provides complementary information about governance and societal responses to AI-related risks in the workplace, enhancing understanding of the evolving AI ecosystem and regulatory landscape.

Tutoring firm settles US agency's first bias lawsuit involving AI software | Technology

2023-08-10
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used in hiring decisions that led to discriminatory outcomes against older applicants, which constitutes a violation of labor rights under applicable law. The lawsuit and settlement indicate that harm has occurred due to the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a breach of labor rights and legal obligations.

NY Requires Disclosure Of Use Of AI With Job Seekers - New Technology - United States

2023-08-09
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI systems in hiring and the associated risks of bias and discrimination, which are recognized harms under the framework. However, it does not report a new AI Incident or AI Hazard but rather details regulatory and legislative responses aimed at preventing such harms. The mention of Amazon's past biased AI hiring tool serves as background context. Therefore, this is best classified as Complementary Information, as it provides information on governance and societal responses to AI harms in hiring.

Tutoring firm settles US agency's first bias lawsuit involving AI software

2023-08-10
iTnews
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI hiring software was programmed to screen out older applicants, leading to discriminatory harm against these individuals. This is a clear violation of labor rights and anti-discrimination laws, which fits the definition of an AI Incident. The settlement and lawsuit confirm that harm has occurred due to the AI system's use in employment decisions. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Tutoring firm settles claim alleging its recruiting algorithm screened out applicants over 60

2023-08-10
HR Dive
Why's our monitor labelling this an incident or hazard?
The recruiting software's algorithm, an AI system, was used in hiring decisions and directly caused harm by screening out applicants over certain ages, resulting in discriminatory treatment. The settlement and the EEOC's involvement confirm that the AI system's use led to a violation of legal protections, fulfilling the criteria for an AI Incident involving violations of human rights and labor rights.

iTutorGroup settles AI hiring lawsuit alleging age discrimination

2023-08-11
Verdict
Why's our monitor labelling this an incident or hazard?
The article explicitly states that iTutorGroup's AI hiring software was deliberately trained to reject candidates above certain ages, resulting in age discrimination against over 200 qualified applicants. This constitutes a violation of labor rights and human rights, fulfilling the criteria for an AI Incident. The AI system's use directly led to harm by automating discriminatory hiring practices, and the settlement confirms the harm was realized. Therefore, this event is classified as an AI Incident.

Why employers shouldn't fear NYC's new AI law

2023-08-07
Employee Benefit News
Why's our monitor labelling this an incident or hazard?
The article centers on a new regulation requiring employers to audit AI hiring tools for bias, aiming to prevent discrimination. It does not describe any realized harm or incident caused by AI, nor does it present a credible imminent risk of harm. Rather, it provides information about governance and compliance measures related to AI use in recruitment. Therefore, it fits the definition of Complementary Information as it informs about societal and governance responses to AI-related risks without reporting a new AI Incident or AI Hazard.