Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Fairly AI: FAIRLY End-to-End AI Governance Platform

FAIRLY provides an AI governance platform focussed on accelerating the broad use of fair and responsible AI by helping organisations bring safer AI models to market. The system provides end-to-end AI governance, risk and compliance solutions for automating model risk management, applying policies and controls throughout the entire model lifecycle.

The growing use of AI models increases the need for model risk mitigation. This is a particularly daunting challenge for audit and validation teams working with AI-based models. Common challenges include a lack of transparency, which makes it difficult to explain the relationship between model inputs and outputs; a lack of model stability under changing conditions; and difficulty ensuring that training datasets are fair, unbiased and trustworthy. AI models are also growing increasingly complex, which further raises model risk. High-stakes AI use cases in particular require end-to-end AI oversight.
FAIRLY bridges the gap in AI oversight by making it easy to apply policies and controls early in the development process and adhere to them throughout the entire model lifecycle. The automation platform decreases subjectivity, giving technical and non-technical users the tools they need to meet and audit policy requirements while providing all stakeholders with confidence in model performance.

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Add use case

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.