Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

QuantPi Platform for AI Risk Management

QuantPi’s platform unites AI testing with AI governance. It serves as a cockpit where AI-first organizations collaborate to efficiently and responsibly understand, enhance, and steer both their individual AI models and their complete AI landscape.

A modular mapping of requirements to risk measures allows organizations to easily integrate corporate risk management frameworks into the platform, or to choose from rule sets already available in the platform, such as the ‘EU AI Act Package.’

Using these requirements, QuantPi’s proprietary mathematical framework enables organizations to automatically estimate the likelihood of AI risks and opportunities with high computational efficiency. It measures every ML model in the same consistent way on dimensions such as bias/fairness, robustness, and performance.

Regardless of whether organizations are dealing with newly built, purchased, or existing ML models of different architectures or from different providers, all assessments are directly comparable. And rather than adding another black box on top of a black box, the outcomes of the assessments are transparent and explainable.

Furthermore, to guarantee the privacy of sensitive data and IP protection of the models, QuantPi’s AI testing approach does not require access to customer data and can be implemented on-premise.

The platform decreases time-to-value from months to days, saving organizations (1) costly processes to define which aspects of AI behavior to assess and monitor, and (2) time-intensive engineering work to understand the behavior of AI black boxes. Holistic, stakeholder-appropriate oversight of all AI initiatives in one location enables companies to scale their AI-first strategies in a responsible and ROI-focused manner.

About the tool
Tags:

  • ai ethics
  • ai assessment
  • ai governance
  • ai auditing
  • transparency
  • ai risk management
  • ai compliance
  • model validation
  • model monitoring
  • ai quality
  • performance
  • ai roi
  • data quality
  • robustness
  • explainability

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.