Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

FairNow: AI Governance Platform



Many organisations today deploy AI in a distributed way across different teams and departments. FairNow's AI governance platform serves as an organisation's AI governance command centre and single source of truth for managing AI governance, risk, and compliance. Risk, legal, data, technology, and business leaders can review their AI risks, track usage, monitor dependencies, and ensure compliance, all within a single platform.

Core features of FairNow's platform include AI inventory management; management of governance workflows, roles, and accountabilities; risk assessments; testing and ongoing monitoring; documentation and audit trails; vendor AI risk management; and regulatory compliance tracking.
Organisations integrate the platform into their day-to-day governance activities, leveraging FairNow's built-in functionality to automate governance tasks, simplify their compliance tracking and reporting, and centralise their oversight of AI risk. Organisations using the platform today combine off-the-shelf capabilities with configurable features. For example, many organisations adopt FairNow's risk, regulatory, and compliance intelligence offerings to stay informed of potential risks for each of their AI applications, as well as in-scope laws and regulations; they also add their own incremental risk assessment questions, policies, and controls to ensure that AI usage adheres to requirements specific to their own organisations.

FairNow uses five main mechanisms:

  • Centralised AI governance: ensuring organisation-wide oversight, meticulous organisation, and unwavering accountability. FairNow ensures complete transparency into every aspect of an AI inventory, making it easier to track, manage, and report on compliance across all operations.
  • Audit-ready compliance: ensuring that AI projects are aligned with compliance requirements and ethical standards. FairNow dynamically assesses risk levels based on the model's use, type, function, and jurisdiction, ensuring continuous protection and compliance. It maintains detailed logs of AI system behaviours and decisions for accountability, troubleshooting, and audit purposes.
  • Automated bias audits: advanced bias detection tools continuously monitor and evaluate companies' models for signs of bias, with a notification system that alerts the appropriate contact point when action is needed.
  • Customisable AI governance tools: customisable governance logic to create team accountability based on risk-tolerance and regulations. Ensure the company's AI deployment is lawful, up-to-date, and in harmony with both current and upcoming regulations.
  • Seamless data integration options: flexible options that allow organisations to run bias audits without requiring any direct data integration.
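The dynamic risk-assessment mechanism above assigns risk levels based on an application's use, type, function, and jurisdiction. FairNow's actual scoring logic is proprietary; the following is a minimal, hypothetical sketch of how such a tiering rule might look, with invented categories and weights:

```python
from dataclasses import dataclass

# Hypothetical illustration only: the categories, weights, and thresholds
# below are invented for this sketch and are not FairNow's actual logic.
HIGH_RISK_USES = {"hiring", "credit", "healthcare"}
REGULATED_JURISDICTIONS = {"EU", "NYC", "Colorado"}

@dataclass
class AIApplication:
    name: str
    use_case: str            # e.g. "hiring", "marketing"
    model_type: str          # e.g. "llm", "classifier"
    jurisdictions: set[str]  # where the application is deployed

def assess_risk_level(app: AIApplication) -> str:
    """Assign a coarse risk tier from use, model type, and jurisdiction."""
    score = 0
    if app.use_case in HIGH_RISK_USES:
        score += 2  # consequential decisions about individuals
    if app.model_type == "llm":
        score += 1  # generative models add output unpredictability
    if app.jurisdictions & REGULATED_JURISDICTIONS:
        score += 1  # in scope of at least one AI-specific regulation
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"

app = AIApplication("resume-screener", "hiring", "classifier", {"NYC"})
print(assess_risk_level(app))  # hiring (+2) + NYC (+1) = 3 -> "high"
```

A real platform would derive the inputs from its AI inventory and re-run the assessment whenever an application's use, deployment footprint, or the regulatory landscape changes.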

As a company with decades of combined experience in model risk management and AI governance, FairNow understands how challenging AI governance can be, and how cumbersome it is without the right tools and automation. Establishing workflows based on industry best practice and aligned with standards like the NIST AI RMF and ISO 42001, while allowing organisations to configure key parts of their AI governance program, ensures the platform can serve organisations of all sizes, industries, and maturity levels.

Benefits:

With the many different ways that AI can be used across a wide organisation, tracking risk in a centralised manner can be cumbersome and time-consuming. Organisations without the right tools in place can end up tracking their AI inventory and risk across spreadsheets and shared drives. This results in hours spent manually managing processes, increasing the risk of human error and the possibility that something important falls through the cracks.

FairNow's AI governance platform is designed to help organisations track their AI systems and associated risks, whether managing an inventory of five or 500 applications. AI risk assessments allow organisations to evaluate each application's specific components and risk factors, assigning appropriate risk levels. The platform's testing and monitoring features support compliance with global AI laws and regulations, ensuring that AI systems are effective, safe, and fair. Documentation and audit trails provide transparency for internal stakeholders and impacted populations. Vendor AI risk management helps companies ensure that the AI they procure meets international standards. Additionally, regulatory and compliance tracking is essential for those who fall under the jurisdiction of any of the growing number of AI regulations in effect across the globe.

Limitations:

AI governance platforms and technologies are designed to simplify, streamline, and automate various aspects of the governance process. However, human oversight, both of individual AI applications and of the overall governance program, is essential for effectively managing existing and emerging AI risks within an organisation. Platforms like FairNow should be used to facilitate and enhance, rather than replace, human oversight in identifying, managing, and mitigating AI risks in line with internal and external requirements.

Related links: 

This tool was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.

About the tool


Tags:

  • ai auditing
  • bias
  • ai risk management
  • ai compliance
  • regulation compliance

Use Cases

FairNow: NYC Bias Audit With Synthetic Data (NYC Local Law 144)

New York City's Local Law 144 has been in effect since July 2023 and was the first law in the US to require bias audits of employers and employment agencies who use AI in hiring or promotion. Under the law, in-scope employers and employment agencies ...
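Local Law 144 audits report an "impact ratio" for each demographic category: the category's selection (or scoring) rate divided by the rate of the most-selected category. A minimal sketch of that calculation, using invented numbers rather than data from any real audit:

```python
def impact_ratios(selected: dict[str, int], totals: dict[str, int]) -> dict[str, float]:
    """Compute LL144-style impact ratios: each group's selection rate
    divided by the highest group's selection rate."""
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative numbers only (not from any real audit).
selected = {"group_a": 40, "group_b": 24}   # candidates advanced per group
totals = {"group_a": 100, "group_b": 100}   # candidates assessed per group
print(impact_ratios(selected, totals))
# group_a: 40/100 = 0.40 (reference group, ratio 1.0)
# group_b: 24/100 = 0.24 -> 0.24 / 0.40 = 0.6
```

Tools like FairNow can run this style of calculation on synthetic data when an employer cannot share real candidate records, which is the approach this use case describes.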
Oct 2, 2024

FairNow: Regulatory Compliance Implementation and the NIST AI RMF / ISO Readiness

FairNow's platform simplifies the process of managing compliance for the NIST AI Risk Management Framework, ISO 42001, ISO 23894, and other AI laws and regulations worldwide. Organisations can use the FairNow platform to identify which standards, law...
Oct 2, 2024

FairNow: Conversational AI And Chatbot Bias Assessment

More organisations are starting to use chatbots for many purposes, including interacting with individuals in ways that could result in harm from differential treatment in terms of the user's demographic status. FairNow's chatbot bias assessment provi...
Oct 2, 2024


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.