Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

FairNow: NYC Bias Audit With Synthetic Data (NYC Local Law 144)

Oct 2, 2024

New York City's Local Law 144, in effect since July 2023, was the first law in the US to require bias audits of the AI tools that employers and employment agencies use in hiring or promotion decisions. Under the law, in-scope employers and employment agencies must enlist an independent auditor to conduct a disparate impact analysis by race, gender, and their intersectional categories. This type of analysis typically requires historical data, but when sufficient historical data is not available (for example, because an AI tool has not yet launched or because the data is otherwise unavailable), the law allows test data to be used.
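
To make the analysis concrete, the core of an LL144-style disparate impact check is a comparison of selection rates across demographic categories. The snippet below is a minimal illustrative sketch rather than FairNow's actual tooling; the column names (`race`, `gender`, `selected`) and the toy data are assumptions. It computes the selection rate for each intersectional category and the impact ratio relative to the category with the highest selection rate.

```python
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_cols: list[str],
                  selected_col: str = "selected") -> pd.DataFrame:
    """Selection rate and impact ratio per demographic category.

    Impact ratio = category selection rate / highest category selection rate.
    """
    rates = (
        df.groupby(group_cols)[selected_col]
          .agg(selection_rate="mean", n="size")
          .reset_index()
    )
    rates["impact_ratio"] = rates["selection_rate"] / rates["selection_rate"].max()
    return rates.sort_values("impact_ratio")

# Toy candidate-level data: one row per applicant, `selected` = 1 if the AI
# tool advanced the candidate. A real audit would use actual or synthetic
# applicant records instead.
candidates = pd.DataFrame({
    "race":     ["White"] * 4 + ["Black"] * 4 + ["Asian"] * 4,
    "gender":   ["F", "M"] * 6,
    "selected": [1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0],
})

print(impact_ratios(candidates, group_cols=["race", "gender"]))
```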

FairNow's synthetic bias evaluation technique generates synthetic job resumes spanning a wide range of jobs, specialisations and job levels, so that organisations can conduct a bias assessment with data that reflects their candidate pool. The synthetic resumes are built from templates to which attributes signalling a given race and gender are added; in every attribute related to the candidate's ability to do the job successfully, the resumes are otherwise identical. Because of this construction, differences in model scores can be attributed to the candidate's demographic attributes.
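
A simplified sketch of this matched-template idea is shown below. It is illustrative only: the demographic profiles, the resume template, and the `score_resume` stand-in (which in a real audit would call the AI screening tool being assessed) are all assumptions, not FairNow's actual implementation. Each template is rendered once per demographic profile with every job-relevant field held constant, so differences in mean score across profiles can be traced to the demographic signal alone.

```python
from itertools import product
from statistics import mean

# Hypothetical demographic profiles, signalled here only through the name on
# the resume. These profiles and the template are illustrative assumptions.
PROFILES = {
    ("Black", "Female"): "Lakisha Washington",
    ("Black", "Male"):   "Jamal Robinson",
    ("White", "Female"): "Emily Walsh",
    ("White", "Male"):   "Greg Baker",
}

TEMPLATE = (
    "Name: {name}\n"
    "Role applied for: {role}\n"
    "Experience: {years} years as {role}\n"
    "Skills: {skills}\n"
)

# Job-relevant attributes are identical across every rendered resume.
JOB_SPECS = [
    {"role": "Data Analyst",    "years": 5, "skills": "SQL, Python, dashboards"},
    {"role": "Account Manager", "years": 7, "skills": "CRM, negotiation, forecasting"},
]

def score_resume(text: str) -> float:
    """Stand-in for the AI screening tool under audit (hypothetical)."""
    return float(len(text))  # dummy score so the sketch runs end to end

def mean_scores_by_group() -> dict:
    scores = {group: [] for group in PROFILES}
    for (group, name), spec in product(PROFILES.items(), JOB_SPECS):
        resume = TEMPLATE.format(name=name, **spec)
        scores[group].append(score_resume(resume))
    # Resumes within a job spec differ only in the demographic signal, so
    # score gaps between groups reflect the model's treatment of that signal.
    return {group: mean(vals) for group, vals in scores.items()}

print(mean_scores_by_group())
```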

This approach can also be extended to bias testing beyond the NYC LL144 audit requirements. FairNow has leveraged this method to evaluate a leading HR recruitment software provider's AI for bias by disability status and gender identity. 

This approach addresses several of the most significant pain points companies face when conducting bias audits on their data.

  • First, companies often lack the historical data needed to conduct a bias audit. This may be because they have not yet launched the AI and so have collected no data, because they have some data but not enough for a statistically significant sample size, or because their demographic data collection is sparse.
  • Second, companies may have thin data on a particular segment or subtype of customers that they would like to understand better. This approach enables organisations to test for potential bias even where actual data is not available.

Benefits of using the tool in this use case

This approach solves many of the data-related problems that companies face when they look to test for bias. Another benefit is privacy: because the data used for the bias audit is synthetic, the organisation does not need to share confidential applicant data with a third party, which saves significant time and effort in procurement and privacy workflows and reduces privacy risk.

Shortcomings of using the tool in this use case

Because the data used for this audit is synthetically constructed, it may lack some of the nuance and variability seen in real-world job application data. Additionally, while the synthetic data can be customised to the organisation's applicant pool, it may lag real-world shifts in applicant distributions or types.

This case study was published in collaboration with the UK Department for Science, Innovation and Technology's Portfolio of AI Assurance Techniques, where you can read more about the Portfolio and how to upload your own use case.

About the use case


Developing organisation(s): FairNow





Country of origin: