Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Warden AI: Continuous Bias Auditing for HR Tech

Warden AI provides independent, tech-led AI bias auditing, designed for both HR Tech platforms and enterprises deploying AI solutions in HR. As the adoption of AI in recruitment and HR processes grows, concerns around fairness have intensified. With the advent of regulations such as NYC Local Law 144 and the EU AI Act, organisations are under increasing pressure to demonstrate compliance and fairness. 

Warden’s platform continuously audits AI systems for bias across multiple categories and provides transparency through real-time dashboards and reports. This enables HR Tech platforms and enterprises to demonstrate AI fairness, build trust, and comply with regulatory requirements with minimal effort.
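
Warden does not publish its audit methodology here, but the headline metric required in NYC Local Law 144 bias audits, the impact ratio (each group's selection rate divided by the most-favoured group's rate), illustrates the kind of check such a platform performs. The sketch below is a minimal illustration; the column names and sample data are hypothetical, not Warden AI's actual API.

```python
# Minimal sketch of an LL 144-style impact-ratio check.
# Column names and data are hypothetical, not Warden AI's actual interface.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical audit sample: one row per candidate scored by the AI tool.
audit = pd.DataFrame({
    "sex":      ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [1,   0,   1,   0,   1,   1,   0,   1],
})

print(impact_ratios(audit, "sex", "selected"))
# A ratio below ~0.8 is a common adverse-impact flag (the "four-fifths rule").
```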

AI systems are constantly evolving, and even minor updates can introduce bias, so annual audits are not sufficient to keep pace with the rate of change. In addition to these technical concerns, there is a growing lack of trust among enterprises adopting HR technology and the end-users affected by it. Continuous auditing addresses both issues by allowing organisations to audit their AI systems over time, ensuring that they can manage bias-related risks while also adapting to regulatory change. Regularly assessing AI systems for fairness also builds trust with stakeholders by demonstrating that the AI is both fair and compliant.
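
Concretely, a continuous audit can be pictured as a recurring check that recomputes the disparity metric after every model update and flags regressions, rather than waiting for an annual review. The loop below reuses the hypothetical impact_ratios helper and audit frame from the sketch above; the 0.8 threshold and the version label are likewise assumptions for illustration.

```python
# Hypothetical continuous-audit step: re-run the fairness check on each release,
# reusing impact_ratios() and the audit frame from the sketch above.
FOUR_FIFTHS = 0.8  # common adverse-impact heuristic, not a legal bright line

def audit_release(version: str, audit_df) -> bool:
    """Return True only if every group's impact ratio clears the threshold."""
    ratios = impact_ratios(audit_df, "sex", "selected")
    failing = ratios[ratios < FOUR_FIFTHS]
    if not failing.empty:
        print(f"{version}: adverse impact flagged for groups {list(failing.index)}")
        return False
    print(f"{version}: all impact ratios >= {FOUR_FIFTHS}")
    return True

# Triggered on every model update (or on a schedule) instead of once a year:
audit_release("model-v2.1", audit)
```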

Benefits:

  • Deploy updates confidently: Get assurance that AI updates are fair and compliant, with a third-party audit trail to prove it.
  • Accelerate growth: Boost buyer confidence and win more deals, faster, by demonstrating fair and compliant AI. 
  • For companies, staying compliant: Demonstrate compliance with AI regulations such as NYC LL 144 and stay ahead of upcoming requirements from other jurisdictions.
  • For companies, minimising discrimination risk: Protect your organisation from accidental discrimination by identifying and mitigating AI bias issues early.
  • For companies, protecting brand reputation: Avoid negative publicity and legal repercussions by ensuring AI systems are free from bias, with an audit trail to prove it.

Limitations:

While the platform provides comprehensive bias auditing, the quality of results depends on the data used in evaluations. We mitigate this by combining independent data with live and historical usage data, but the available data is sometimes insufficient for certain demographic groups or categories of bias.
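
One common mitigation for sparse groups, sketched below under the same hypothetical setup as above, is to exclude categories whose sample share is too small to yield a meaningful rate; NYC's final LL 144 rules, for instance, permit excluding categories that make up less than 2% of the data. The floor and helper here are illustrative assumptions, not Warden AI's actual rule.

```python
# Illustrative guard for sparse demographic groups: drop categories whose share
# of the sample falls below a floor before computing ratios. The 2% default
# mirrors the exclusion NYC's final LL 144 rules allow; it is not Warden AI's rule.
def impact_ratios_guarded(df, group_col: str, selected_col: str,
                          min_share: float = 0.02):
    shares = df[group_col].value_counts(normalize=True)
    keep = shares[shares >= min_share].index
    return impact_ratios(df[df[group_col].isin(keep)], group_col, selected_col)
```
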
Related links: 

  • Link to the full use case.
  • Link to AI Assurance Dashboard.
  • Link to new release.

This tool was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.

About the tool

Developing organisation(s): Warden AI

Country of origin:

Type of approach:

Usage rights:

Stakeholder group:

Geographical scope:

People involved:
Tags:

  • responsible ai
  • bias
  • ai compliance

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.