Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

FairNow: Regulatory Compliance Implementation and the NIST AI RMF / ISO Readiness

Oct 2, 2024

FairNow's platform simplifies the process of managing compliance for the NIST AI Risk Management Framework, ISO 42001, ISO 23894, and other AI laws and regulations worldwide. Organisations can use the FairNow platform to identify which standards, laws, and regulations apply based on their AI adoption and manage the set of activities necessary to ensure compliance.  

FairNow’s platform translates complex laws and standards into actionable controls that can be executed and evidenced to track compliance. FairNow’s comprehensive library of controls covers requirements for individual AI applications – including inventorying, risk reviews, bias assessments, transparency obligations, and others – as well as requirements for an organisation’s AI governance program – including Board oversight, accountabilities, training, and culture. Organisations report on compliance through FairNow’s dashboards and set alerts for any high-impact compliance gaps.  

Wherever possible, FairNow’s platform automates control evidencing – including for risk assessments, ongoing monitoring, and documentation. All evidence is centrally stored, and approvals are tracked to ensure a robust audit trail. Automation and centralisation on the FairNow platform enable organisations to simplify and streamline their AI compliance activities so that they can focus their efforts on managing their AI risks.

Organisations can use FairNow's platform to access its existing control library and convert their internal policies into controls. After creating these controls, they can define the scope, set deadlines, and directly notify AI owners of new expectations through the platform.  
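The workflow described above can be illustrated with a minimal sketch. All class names, fields, and function names below are hypothetical, chosen for illustration; they are not FairNow's actual data model or API:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical representation of a compliance control derived from an
# internal policy (illustrative only, not FairNow's actual schema).
@dataclass
class Control:
    name: str
    description: str
    scope: list[str]          # AI applications the control applies to
    deadline: date
    owner: str                # AI owner accountable for the control
    evidence: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A control counts as evidenced once at least one
        # piece of evidence has been attached.
        return len(self.evidence) > 0

def notify_owner(control: Control) -> str:
    # Stand-in for the platform notifying an AI owner of a new expectation.
    return (f"To {control.owner}: new control '{control.name}' "
            f"due {control.deadline.isoformat()} "
            f"(scope: {', '.join(control.scope)})")

bias_review = Control(
    name="Annual bias assessment",
    description="Run a bias assessment on each in-scope model.",
    scope=["resume-screening-model"],
    deadline=date(2025, 1, 31),
    owner="hr-ai-team",
)
print(notify_owner(bias_review))
```

The sketch mirrors the sequence in the text: a control is created from a policy, given a scope and deadline, and its owner is notified; evidence attached later marks it complete.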

Benefits of using the tool in this use case

This approach makes it easier to break down complex laws and standards into actionable steps that organisations can follow to demonstrate compliance. By automating key parts of governance (model evaluation, document generation, evidence tracking, and more), FairNow’s platform simplifies the task of complying with the many existing and forthcoming AI laws.

FairNow helps organisations understand which laws and regulations apply to their AI. 

  • The platform breaks down laws, regulations, and standards into individual controls that the organisation follows to demonstrate compliance.  
  • The FairNow platform is the single command centre for the organisation's AI governance programme and AI regulatory tracker. It automates as much as possible to reduce the time and effort needed to become compliant.  
  • The platform lets the organisation define and customise roles and responsibilities related to AI governance, ensuring that sound accountability and ownership can be established.  

Importantly, FairNow's framework-agnostic controls enhance reusability by addressing overlapping requirements across various laws and standards. Organisations can complete a task once, and it will be applied across all relevant frameworks, eliminating the need for redundant efforts and ensuring efficient compliance management.  
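The reusability described above amounts to a many-to-many mapping between controls and frameworks: completing a control once evidences it under every framework that requires it. The mapping and names below are hypothetical, for illustration only, not FairNow's actual control library:

```python
# Illustrative mapping: one framework-agnostic control can satisfy
# requirements in several frameworks at once (mapping is hypothetical).
CONTROL_TO_FRAMEWORKS = {
    "maintain-ai-inventory": ["NIST AI RMF", "ISO 42001"],
    "run-bias-assessment": ["NIST AI RMF", "NYC Local Law 144"],
    "document-model-risks": ["NIST AI RMF", "ISO 23894", "ISO 42001"],
}

def frameworks_satisfied(completed_controls: set[str]) -> dict[str, set[str]]:
    """Group the completed controls by the frameworks they evidence."""
    satisfied: dict[str, set[str]] = {}
    for control in completed_controls:
        for framework in CONTROL_TO_FRAMEWORKS.get(control, []):
            satisfied.setdefault(framework, set()).add(control)
    return satisfied

# Completing one control once evidences it for every mapped framework,
# so no task is repeated per framework.
done = {"run-bias-assessment", "maintain-ai-inventory"}
print(frameworks_satisfied(done))
```

Running the sketch shows a single completed bias assessment appearing under both the NIST AI RMF and NYC Local Law 144 entries, which is the de-duplication the text describes.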

Shortcomings of using the tool in this use case

The FairNow platform is designed to complement, not replace, human oversight in the risk review process. Organisations may still need to consult with legal and risk experts, depending on the specific design and use of each AI application, to make final decisions about applicable laws, regulations, and necessary actions. 

Related links: 

  • Link to the full use case.
  • FairNow and NIST AI Risk Management Framework.
  • FairNow and ISO 42001:2023.
  • Link to NIST AI Risk Management Framework.
  • Full text of ISO/IEC 42001:2023.
  • Information on FairNow AI regulatory compliance tools.

This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.
