Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Calvin Risk

Calvin Risk develops comprehensive, quantitative solutions to assess and manage the risks of AI algorithms in commercial use. The tool helps companies create a framework for transparency, governance and standardization to manage their AI portfolios while ensuring that AI remains safe and compliant with the highest ethical standards and upcoming regulatory requirements.

The Calvin software accordingly targets a range of governance objectives to facilitate the smooth integration of AI within a firm. The rising number of AI incidents has led Calvin to focus primarily on second-line-of-defense measures, collaborating with both technical professionals and executives to foster proactive, holistic approaches to governing AI internally and externally. The decision-making structure is centered on in-depth analyses of technical, ethical, and regulatory risks that identify the strong and weak points of a firm's AI portfolio.

Calvin prides itself on its ability to quantify both a firm's ROI on its risk-weighted AI portfolio and the Value at Risk (VaR) the firm faces, so that sound project decisions and budgeting can be made; at the same time, Calvin helps firms address external, ethics-related risks to avoid inherent biases and improve overall turnover. The software's economic assessment determines the viability of AI projects, conducting a financial risk assessment to ensure a use case does not pose excessive financial impact. This is supported by a dual-analysis approach consisting of a cost-benefit analysis and a financial impact assessment.
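
This catalogue entry does not describe how these quantities are computed, and Calvin's models are proprietary. Purely as an illustrative sketch of the underlying concepts, the Python snippet below estimates the expected annual loss and a 95% VaR for a hypothetical AI project by Monte Carlo simulation, then derives a risk-adjusted ROI. All function names, parameter values (incident rate, loss severity, costs), and distributional choices are invented assumptions, not Calvin's methodology.

    # Illustrative sketch only -- not Calvin Risk's proprietary models.
    # Simulates annual losses of a hypothetical AI project to estimate
    # the expected loss, a 95% Value at Risk, and a risk-adjusted ROI.
    import math
    import random
    import statistics

    def sample_poisson(lam: float) -> int:
        """Sample an incident count from Poisson(lam) via Knuth's algorithm."""
        threshold = math.exp(-lam)
        count, product = 0, 1.0
        while True:
            product *= random.random()
            if product <= threshold:
                return count
            count += 1

    def simulate_annual_loss(incident_rate: float, median_severity: float) -> float:
        """One simulated year: Poisson incident count x lognormal severities."""
        n_incidents = sample_poisson(incident_rate)
        mu = math.log(median_severity)  # lognormal median = exp(mu)
        return sum(random.lognormvariate(mu, 0.8) for _ in range(n_incidents))

    def value_at_risk(losses: list, confidence: float = 0.95) -> float:
        """Loss level exceeded in only (1 - confidence) of simulated scenarios."""
        return sorted(losses)[int(confidence * len(losses)) - 1]

    if __name__ == "__main__":
        random.seed(42)
        # Hypothetical project: ~1.5 incidents/year, median incident cost 50k
        losses = [simulate_annual_loss(1.5, 50_000) for _ in range(10_000)]
        expected_loss = statistics.mean(losses)
        var_95 = value_at_risk(losses, 0.95)

        benefit, cost = 600_000, 400_000  # hypothetical annual benefit and cost
        risk_adjusted_roi = (benefit - cost - expected_loss) / cost

        print(f"Expected annual loss: {expected_loss:,.0f}")
        print(f"95% VaR:              {var_95:,.0f}")
        print(f"Risk-adjusted ROI:    {risk_adjusted_roi:.1%}")

Under these invented inputs, the script reports the simulated expected loss, the 95% VaR (the annual loss exceeded in only 5% of scenarios), and an ROI net of expected losses; a production assessment of the kind Calvin describes would replace this toy loss model with calibrated estimates of technical, ethical, and regulatory risk.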

Responsible AI stands at the forefront of Calvin's mission: to create both ethical and fiscal benefits for society at large through the use of trustworthy models.

About the tool

Developing organisation(s): Calvin Risk

Country of origin:

Type of approach:

Usage rights:

License:

Stakeholder group:

Geographical scope:

Technology platforms:

Tags:

  • ai ethics
  • ai responsible
  • ai risks
  • biases testing
  • collaborative governance
  • data governance
  • digital ethics
  • evaluate
  • evaluation
  • incident database
  • quality
  • responsible ai collaborative
  • trustworthy ai
  • validation of ai model
  • ai assessment
  • ai governance
  • ai reliability
  • fairness
  • bias
  • transparency
  • trustworthiness
  • auditability
  • ai risk management
  • ai compliance
  • model validation
  • risk register
  • performance
  • regulation compliance
  • accountability
  • ai oversight
  • business analysis
  • validation
  • ai roi
  • robustness
  • explainability
  • ethical risk

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.