Credo AI Policy Packs: Human Resources Startup compliance with NYC LL-144
In December 2021, the New York City Council passed Local Law No. 144 (LL-144), mandating that AI and algorithm-based technologies used for recruiting, hiring, or promotion be audited for bias before being used. The law also requires employers to conduct independent audits annually and post them publicly, assessing the statistical fairness of their processes across race and gender.
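To make the audit requirement concrete, the core calculation in an LL-144-style bias audit compares selection rates across demographic categories and reports an impact ratio for each one (its selection rate divided by that of the most-selected category). The sketch below is illustrative only and is not Credo AI's implementation; the column names `sex`, `race_ethnicity`, and `selected` are hypothetical.

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str = "selected") -> pd.DataFrame:
    """Selection rate and impact ratio per demographic category.

    impact ratio = category selection rate / selection rate of the
    most-selected category; values well below 1.0 flag potential bias.
    """
    rates = df.groupby(group_col)[outcome_col].agg(
        applicants="count", selection_rate="mean"
    )
    rates["impact_ratio"] = rates["selection_rate"] / rates["selection_rate"].max()
    return rates.sort_values("impact_ratio")

# Hypothetical audit data: one row per candidate scored by the hiring tool.
candidates = pd.DataFrame({
    "sex":            ["F", "M", "F", "M", "F", "M", "F", "M"],
    "race_ethnicity": ["A", "B", "A", "A", "B", "B", "A", "B"],
    "selected":       [1, 1, 0, 1, 0, 1, 1, 0],
})

print(impact_ratios(candidates, "sex"))
print(impact_ratios(candidates, "race_ethnicity"))
```

A real audit would be conducted by an independent party and published, as the law requires; the calculation above only shows the kind of statistical comparison involved.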
Credo AI created a Policy Pack for NYC LL-144 that encodes the law’s principles into actionable requirements and adds a layer of reliability and credibility to compliance efforts. An AI-powered HR talent-matching startup (“HR Startup”) used Credo AI’s Responsible AI Governance Platform and the LL-144 Policy Pack to address this and other emerging AI regulations.
This approach allows organizations to govern automated decision-making tools used in hiring beyond NYC’s LL-144. Organizations using the Platform can map and measure bias in their systems and apply different policy packs, including custom policy packs that allow them to align with internal policies and meet regulatory requirements in different jurisdictions.
Using Credo AI’s Platform and the LL-144 Policy Pack, the HR Startup produced a bias audit in compliance with New York City’s algorithmic hiring law. Performing bias assessments and engaging in third-party reviews through the Platform allowed the HR Startup to meet NYC LL-144’s requirements and improve customer trust.
In addition to assessing organizations’ systems for LL-144 compliance, Credo AI’s human review of the assessment report identifies gaps and opportunities that increase the report’s reliability and provide additional assurance to stakeholders. This third-party review provided the HR Startup with insights and recommendations for bias mitigation and improved compliance.
Beyond NYC’s LL-144, this approach can be applied to other regulatory regimes that aim to prevent discrimination by algorithm-based or automated decision tools. For example, enterprises looking to map and measure bias across characteristics protected under the UK’s Equality Act, or to produce bias audits as part of the risk management system required under the EU AI Act, can leverage Credo AI’s Platform with custom policy packs or the EU AI Act high-risk AI system policy pack.
Benefits of using the tool in this use case
Utilizing Credo AI's Platform and the NYC LL-144 Policy Pack allowed the HR Startup to streamline the implementation of technical evaluations of its data and models, while also facilitating the creation of compliance reports with human-in-the-loop review. This process also enabled the HR Startup to showcase its commitment to responsible AI practices to both clients and regulatory bodies, achieving full compliance with LL-144 within two months. Furthermore, by establishing an AI governance process, the HR Startup is able to apply additional Policy Packs to comply with other emerging regulations.
Shortcomings of using the tool in this use case
Demographic data on attributes such as gender, race, and disability is necessary for assessing and mitigating algorithmic bias. It helps discover potential biases, identify their sources, develop strategies to address them, and evaluate the effectiveness of those strategies. However, “ground-truth” demographic data is not always available, for a variety of reasons.
While many organizations lack access to such data, which results in only partial fairness evaluations, the HR Startup had access to self-reported demographic data. Self-reported data directly reflects an individual's own perspective and self-identification, has high accuracy, is explainable, and does not require proxy data, but it also has limitations. These include incomplete or unrepresentative datasets due to privacy concerns or fear of discrimination, latency in availability, and potential errors arising from social desirability bias and misinterpretation.
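As a rough illustration of how incomplete self-reported data leads to a partial fairness evaluation, the sketch below reports how much of the candidate pool the per-group selection rates actually cover. It is not Credo AI's method; the `gender` and `selected` columns and the `prefer_not_to_say` value are assumptions for the example.

```python
import pandas as pd

def audited_coverage(df: pd.DataFrame, group_col: str, outcome_col: str = "selected"):
    """Compute per-group selection rates on records with a usable
    self-reported label, plus the share of records those labels cover."""
    known = df[df[group_col].notna() & (df[group_col] != "prefer_not_to_say")]
    coverage = len(known) / len(df)
    rates = known.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    return coverage, rates

# Hypothetical records: some candidates decline to self-report their gender.
records = pd.DataFrame({
    "gender":   ["F", "M", None, "prefer_not_to_say", "F", "M"],
    "selected": [1, 1, 0, 1, 0, 1],
})

coverage, rates = audited_coverage(records, "gender")
print(f"Fairness evaluation covers {coverage:.0%} of candidates")
print(rates)
```

Reporting coverage alongside the fairness metrics makes explicit how representative the evaluation is of the full applicant pool.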
It is important to remember that other demographic data collection approaches, such as human annotation and algorithmic inference, also have limitations. Human-annotated demographic data relies on an annotator's best perception of an individual's demographic attributes and is subject to observer bias, while algorithmically inferred (machine-inferred) demographic data can further propagate biases in training data and models and has limited explainability. Bias and fairness assessments of algorithm-based technologies used for recruiting, hiring, or promotion can only be as good as the data that is available.
Link to the full use case.
This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.