Eightfold AI Sued for Secretly Profiling Job Applicants with AI

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Eightfold AI, an AI hiring platform used by major companies, is being sued in California for allegedly generating and using AI-driven reports to screen job applicants without their knowledge or ability to dispute errors, violating the Fair Credit Reporting Act and California consumer protection laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Eightfold AI) used in hiring decisions that directly affects job applicants by scoring and profiling them without their knowledge or ability to challenge the results. This use of AI has led to alleged violations of the Fair Credit Reporting Act and California consumer protection laws, which protect fundamental labor and consumer rights. The harm is realized as job applicants claim they were unfairly evaluated and potentially denied employment opportunities based on AI-generated profiles. Therefore, this qualifies as an AI Incident due to violations of rights caused by the AI system's use.[AI generated]
AI principles
Accountability, Transparency & explainability, Privacy & data governance, Respect of human rights, Fairness

Industries
Business processes and support services

Affected stakeholders
Workers

Harm types
Human or fundamental rights, Economic/Property

Severity
AI incident

Business function
Human resource management

AI system task
Organisation/recommenders, Forecasting/prediction


Articles about this incident or hazard

AI Company Eightfold Sued For Helping Companies Secretly Score Job Seekers | Today Headline

2026-01-21
Today Headline

AI company Eightfold sued for helping companies secretly score job seekers

2026-01-21
1470 & 100.3 WMBD
Why's our monitor labelling this an incident or hazard?
Eightfold AI is explicitly described as an AI system used for hiring decisions by analyzing large datasets to profile and score job applicants. The lawsuit alleges that the system's use directly led to violations of the Fair Credit Reporting Act and California law protecting consumer rights, which are legal protections related to labor and privacy rights. The harm is realized as job applicants were evaluated without notice or chance to dispute, potentially affecting their employment opportunities. This meets the criteria for an AI Incident because the AI system's use has directly led to a breach of labor and consumer rights.

Eightfold sued for helping companies secretly score job seekers

2026-01-21
iTnews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Eightfold AI) used in hiring decisions, which is explicitly described as generating profiles and predictions about job applicants. The alleged harm is a violation of legal rights (Fair Credit Reporting Act and California law) protecting individuals from undisclosed and unchallengeable evaluations, which constitutes a breach of labor and consumer rights. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human and labor rights. The lawsuit and allegations indicate realized harm rather than potential harm, so it is not merely a hazard or complementary information.

AI company Eightfold sued for helping companies secretly score job seekers By Reuters

2026-01-22
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Eightfold AI) used in hiring decisions, which directly impacts individuals' employment opportunities. The lawsuit alleges that the AI system's use led to violations of legal rights protecting job applicants, specifically under the Fair Credit Reporting Act and California consumer protection laws. This is a clear case of harm to human rights and labor rights caused by the AI system's use, meeting the criteria for an AI Incident. The harm is realized (lawsuit filed due to alleged violations), not just potential, so it is not an AI Hazard or Complementary Information.

AI company Eightfold sued for helping companies secretly score job seekers

2026-01-22
The Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Eightfold AI's hiring platform) whose use in screening job applicants is alleged to have caused violations of legal rights under the FCRA and California law. The AI system's development and use in generating talent profiles and predictions about applicants without their knowledge or ability to challenge the data constitutes a breach of labor and consumer rights, which fits the definition of an AI Incident under violations of human rights or breach of legal obligations. The harm is realized and ongoing as per the lawsuit, making this an AI Incident rather than a hazard or complementary information.

Job Seekers Want to Know What the Hell Is Going on With AI-Based Hiring Decisions: Lawsuit

2026-01-22
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating hiring match scores that influence employment decisions. The lawsuit claims that this AI system's use has directly harmed job seekers by denying them fair consideration, which is a violation of labor rights and consumer protection laws. The harm is realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident due to the AI system's use causing direct harm to individuals' employment opportunities and rights.

AI Recruitment Platform Eightfold Sued for Screening Job Applicants Without Consent | AIM

2026-01-22
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates detailed consumer reports and ranks applicants, influencing hiring decisions. The lawsuit claims that Eightfold's use of AI breached legal obligations to obtain consent and provide mechanisms for dispute, constituting a violation of labor and consumer rights. This harm is direct and materialized, as applicants allege adverse employment outcomes linked to the AI system's assessments. Hence, this qualifies as an AI Incident due to violations of human and labor rights caused by the AI system's use.

AI Hiring Firm Eightfold Sued Over Alleged Secret Scoring Of Job Applicants

2026-01-22
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in hiring decisions that directly impacts individuals' employment opportunities, implicating violations of legal rights and consumer protection laws. The AI's secret scoring and lack of transparency have led to alleged harm to the plaintiffs, including denial of interviews and job opportunities. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident due to the realized harm caused by the AI system's use.

Job applicants sue to open 'Black Box' of AI hiring decisions

2026-01-22
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as screening and scoring job applicants, directly impacting their employment prospects. The harm includes violation of rights (lack of transparency, inability to correct errors, potential discrimination), which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The lawsuit targets the AI system's use and its consequences, indicating realized harm rather than potential harm. Hence, the classification is AI Incident.

Lawsuit targets AI hiring systems used by Microsoft and Salesforce

2026-01-22
TechSpot
Why's our monitor labelling this an incident or hazard?
The AI system (Eightfold AI) is explicitly described as using large-scale data analysis and AI models to rank and score job applicants, influencing hiring decisions. The lawsuit alleges that this use violates the Fair Credit Reporting Act by generating undisclosed reports without consent, which is a breach of legal obligations protecting fundamental rights related to employment and privacy. This constitutes an AI Incident because the AI system's use has directly led to alleged violations of rights under applicable law. The event is not merely a potential risk or a complementary update but a concrete legal challenge based on realized harm claims.

Job hunters sue AI hiring firm for ranking them without their knowledge

2026-01-22
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using large language models and extensive data to rank job applicants. The use of this AI system has directly led to harm by potentially denying individuals employment opportunities without their knowledge or ability to dispute the AI's decisions, which is a violation of legal rights and consumer protection laws. The harm is realized and ongoing, as evidenced by the lawsuit and specific examples of applicants being rejected. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Job applicants file lawsuit against technology used to screen potential hires: 'I deserve to know'

2026-01-23
The Cool Down
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it scores applicants and influences hiring decisions. The lawsuit claims that the AI system's use violates labor laws by denying applicants transparency and feedback, which constitutes a violation of rights under applicable law protecting labor rights. This is a direct harm caused by the AI system's use in screening applicants. Therefore, this qualifies as an AI Incident due to violations of human and labor rights caused by the AI system's deployment in hiring.

Workers challenge 'hidden' AI hiring tools in class action with major regulatory stakes.

2026-01-23
Computerworld
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Eightfold AI's hiring tool) whose use in employment decision-making has allegedly caused harm to individuals by barring them from jobs and violating legal rights. The harm is realized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The involvement of a proprietary LLM and deep learning technology analyzing personal data for hiring decisions confirms AI system involvement. The legal claims and class action nature underscore the significance of the harm and its systemic impact on workers' rights and opportunities.

Job Seekers File Lawsuit Challenging AI Hiring Software Under Credit Reporting Laws

2026-01-23
Breitbart
Why's our monitor labelling this an incident or hazard?
The AI system (Eightfold AI's screening software) is explicitly involved in the hiring process, making decisions that impact individuals' employment prospects. The lawsuit claims that the AI system's use leads to harm by denying candidates fair access to information and the ability to correct errors, which is a violation of legal rights under the Fair Credit Reporting Act. This harm is realized and ongoing, as evidenced by the plaintiffs' experiences and the legal challenge. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system causing harm through violation of rights.

Eightfold AI sued for alleged covert candidate ranking

2026-01-23
HR Dive
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system used in hiring decisions that processes personal data and ranks candidates, which is an AI system by definition. The lawsuit alleges that this AI use has directly caused harm by violating legal rights related to consent, disclosure, and dispute of consumer reports, impacting job applicants' employment opportunities. This constitutes a violation of labor and consumer rights, a form of harm under the AI Incident definition. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use in candidate screening.

Historic legal action against Indian-origin founders' startup: Why did US job seekers sue Eightfold AI?

2026-01-24
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system used for hiring decisions, which is alleged to have caused harm by violating legal rights under the FCRA. The AI system's development and use in generating reports that affect job applicants' chances constitute direct involvement leading to harm (violation of labor and consumer rights). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to a breach of legal obligations protecting fundamental labor rights and privacy.

Historic legal action against Indian-origin founders' startup Eightfold AI?

2026-01-24
The Financial Express
Why's our monitor labelling this an incident or hazard?
The lawsuit directly relates to the use of AI systems by Eightfold AI in processing applicant data without proper consent, which constitutes a violation of legal rights and privacy protections. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law intended to protect fundamental rights. The harm is realized as plaintiffs are seeking justice for these violations.

Job Seekers Sue Company Scanning Their Résumés Using AI

2026-01-25
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using AI algorithms to score job applications and make hiring recommendations. The harm is realized and ongoing, as job seekers are adversely affected by the opaque AI decision-making process, lack of transparency, and potential misuse of personal data. This fits the definition of an AI Incident because the AI system's use has directly led to violations of labor rights and informational harm to individuals, which are harms under the framework. The lawsuit highlights these harms and challenges the legal framework, confirming the incident nature rather than a mere hazard or complementary information.

Company whose AI hiring tool is used by Microsoft and Paypal sued for compiling secretive reports termed 'illegal'

2026-01-27
The Times of India
Why's our monitor labelling this an incident or hazard?
The AI system (Eightfold) is explicitly mentioned and used to generate candidate scores influencing hiring decisions. The lawsuit claims that the system's secretive data collection and scoring practices deny candidates their legal rights to access and correct information affecting their employment opportunities, constituting a violation of labor and consumer protection laws. This is a direct harm to individuals' rights caused by the AI system's use, meeting the criteria for an AI Incident under violations of human rights or breach of legal obligations protecting labor rights.

Job seekers are suing an AI hiring tool used by Microsoft and Paypal for allegedly compiling secretive reports that help employers screen candidates | Fortune

2026-01-26
Fortune
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Eightfold) used in hiring decisions, which produces candidate fit scores. The lawsuit alleges violations of laws protecting rights and transparency, indicating potential or ongoing harm related to discrimination and unfair treatment. However, the article does not document a specific incident of harm already realized but rather a legal challenge and concerns about the AI system's use. Therefore, this event is best classified as an AI Hazard, as it plausibly could lead to harm related to rights violations and discrimination if the AI system's use remains opaque and unregulated.

Job applicants sue to open 'black box' of AI hiring decisions

2026-01-28
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used in hiring decisions, which directly affects individuals' employment opportunities, a fundamental labor right. The lawsuit alleges harm through opaque AI decision-making that may unfairly block candidates without explanation or recourse, constituting a violation of rights under applicable law. This meets the definition of an AI Incident because the AI system's use has directly led to harm in the form of potential discrimination and lack of transparency impacting job applicants' rights. The legal challenge and discussion of regulatory frameworks further confirm the realized harm and legal implications.

Lawsuit Claims This AI Tool Misused Job Applicants' Credit Info

2026-01-27
Inc.
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as used for recruitment and candidate assessment, which is alleged to have misused sensitive credit information without proper notice or dispute mechanisms, violating legal rights under the FCRA and California law. This constitutes a breach of obligations intended to protect labor and privacy rights, fitting the definition of an AI Incident. The harm is realized through the alleged misuse and lack of transparency, not merely a potential risk, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.

Lawsuit calls out AI hiring practices that many banks use

2026-01-26
American Banker
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Eightfold AI) used in hiring decisions. The lawsuit claims that the AI system's use has directly led to harm by unfairly rejecting qualified candidates, violating their rights to fair and transparent hiring processes. The harm includes discrimination, lack of transparency, and potential breaches of employment and consumer protection laws. These harms fall under violations of human rights and labor rights, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete legal challenge based on realized harm caused by the AI system's use.

Bay Area lawsuit says AI is discarding your application before a human sees it

2026-01-29
San Francisco Gate
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system used in hiring processes that scores and filters candidates, leading to the direct harm of applicants being discarded without human review and without proper consent or transparency. This constitutes a violation of legal rights and causes harm to individuals' employment opportunities, fitting the definition of an AI Incident. The involvement of AI in the development, use, and alleged misuse of the system is central to the harm described. The lawsuit and the described impacts confirm realized harm rather than just potential risk.