These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Calvin Risk
Calvin Risk develops comprehensive, quantitative solutions to assess and manage the risks of AI algorithms in commercial use. The tool helps companies create a framework for transparency, governance and standardization to manage their AI portfolios while ensuring that AI remains safe and compliant with the highest ethical standards and upcoming regulatory requirements.
The Calvin software therefore targets a range of governance objectives to help AI operate smoothly within a firm. The rising number of AI incidents has led Calvin to focus primarily on second-line-of-defense measures, collaborating with both technical professionals and executives to foster proactive, holistic governance of AI both internally and externally. Decision-making is centered on in-depth analyses of technical, ethical, and regulatory risks, identifying the strong and weak points of a firm's AI portfolio.
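Calvin does not publish its scoring methodology, but the general idea of rolling per-model technical, ethical, and regulatory risk scores up into a portfolio view can be illustrated with a minimal sketch. Everything below (dimension weights, model names, scores) is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Per-model risk scores on a 0-1 scale (1 = highest risk)."""
    technical: float    # e.g. robustness, performance drift
    ethical: float      # e.g. bias and fairness concerns
    regulatory: float   # e.g. exposure under upcoming AI regulation

# Hypothetical dimension weights; a real framework would calibrate these.
WEIGHTS = {"technical": 0.4, "ethical": 0.3, "regulatory": 0.3}

def overall_score(r: RiskAssessment) -> float:
    """Collapse one model's dimension scores into a single weighted score."""
    return (WEIGHTS["technical"] * r.technical
            + WEIGHTS["ethical"] * r.ethical
            + WEIGHTS["regulatory"] * r.regulatory)

# Toy two-model portfolio: identify its weakest point.
portfolio = {
    "credit_scoring": RiskAssessment(technical=0.2, ethical=0.7, regulatory=0.8),
    "churn_forecast": RiskAssessment(technical=0.4, ethical=0.1, regulatory=0.2),
}
scores = {name: overall_score(r) for name, r in portfolio.items()}
print(max(scores, key=scores.get))  # -> credit_scoring, the portfolio's weak point
```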
Calvin prides itself on quantifying both a firm's ROI with respect to its risk-weighted AI portfolio and the Value at Risk (VaR) the firm faces, so that effective project decisions and budgets can be made. At the same time, Calvin helps firms manage external, ethics-related risks to avoid inherent biases and improve overall turnover. The software's economic assessment determines the viability of AI projects, conducting a financial risk assessment to ensure a use case does not pose excessive financial impact. This is supported by a dual-analysis approach consisting of a cost-benefit analysis and a financial impact assessment.
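The page does not specify how the VaR and risk-adjusted ROI figures are derived; one standard way to obtain such numbers is Monte Carlo loss simulation, sketched below. All monetary figures and distribution parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def value_at_risk(losses: np.ndarray, confidence: float = 0.95) -> float:
    """VaR: the loss threshold exceeded in only (1 - confidence) of scenarios."""
    return float(np.quantile(losses, confidence))

# Hypothetical loss scenarios for one AI use case (in EUR): frequent small
# incidents plus a heavy tail of rare, costly failures.
losses = rng.lognormal(mean=10.0, sigma=1.2, size=100_000)

var_95 = value_at_risk(losses, 0.95)
expected_loss = losses.mean()

# Risk-adjusted ROI: net benefit after expected losses, relative to cost.
annual_benefit, annual_cost = 1_200_000.0, 400_000.0
risk_adjusted_roi = (annual_benefit - annual_cost - expected_loss) / annual_cost

print(f"95% VaR: EUR {var_95:,.0f}")
print(f"Risk-adjusted ROI: {risk_adjusted_roi:.1%}")
```

A cost-benefit analysis in this style compares the risk-adjusted return against a hurdle rate, while the financial impact assessment checks whether the tail loss (the VaR) stays within the firm's risk appetite.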
Responsible AI stands at the forefront of Calvin's mission: creating both ethical and fiscal benefits for society at large through the use of trustworthy models.
About the tool
Developing organisation(s):
Tool type(s):
Impacted stakeholders:
Target sector(s):
Country of origin:
Lifecycle stage(s):
Type of approach:
Maturity:
Usage rights:
License:
Target groups:
Target users:
Validity:
Enforcement:
Geographical scope:
People involved:
Required skills:
Technology platforms:
Tags:
- ai ethics
- ai responsible
- ai risks
- biases testing
- collaborative governance
- data governance
- digital ethics
- evaluate
- evaluation
- incident database
- quality
- responsible ai collaborative
- trustworthy ai
- validation of ai model
- ai assessment
- ai governance
- ai reliability
- fairness
- bias
- transparency
- trustworthiness
- auditability
- ai risk management
- ai compliance
- model validation
- risk register
- performance
- regulation compliance
- accountability
- ai oversight
- business analysis
- validation
- ai roi
- robustness
- explainability
- ethical risk