These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Resaro
Resaro offers independent, third-party assurance of mission-critical AI systems with broad technical coverage. It promotes responsible, safe and robust AI adoption for enterprises through technical advisory and the evaluation of AI systems against emerging regulatory requirements. Resaro combines expert insights, a data science and engineering team, and proprietary testing tools and protocols to deliver independent, tailored AI assurance and help ensure compliance.
Resaro's products aim to ensure and assure that AI is:
- Responsible: verifying that there is human oversight and control so that the AI system performs as intended under all circumstances.
- Safe: ensuring that the AI is aligned with human values and does not cause harm.
- Robust: testing whether the AI system can withstand unforeseen failures and adversarial attacks by malicious third-party actors.
Four main solutions:
- Technical AI evaluation: technical validation and performance benchmarking of AI models, before and after companies procure or develop them, to allow standardised comparisons.
- AI stress-testing: anticipating what could go wrong and using Resaro's advanced test protocols and tools to understand the performance limits of an AI model.
- AI assurance advisory: expert advice on how to govern and manage the risks of AI in a selected business context, and on aligning with global guidelines and standards.
- AI assurance training: training company executives and technology teams in good AI/ML practices, innovation-friendly procurement practices and ways to mitigate risks.
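To make the first solution concrete: benchmarking candidate models on the same held-out test set with the same metrics is what makes the resulting numbers directly comparable. The sketch below is purely illustrative (the model names, data and metric choices are hypothetical, not Resaro's actual protocols or tooling).

```python
# Illustrative sketch of a standardised benchmark harness: every candidate
# model is scored on the SAME test set with the SAME metrics, so results
# are directly comparable. All names and data here are hypothetical.

def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def false_positive_rate(preds, labels, positive=1):
    """Share of negative examples incorrectly flagged as positive."""
    negatives = [p for p, y in zip(preds, labels) if y != positive]
    if not negatives:
        return 0.0
    return sum(p == positive for p in negatives) / len(negatives)

def benchmark(models, test_inputs, test_labels):
    """Score each model on the shared held-out set and report per-model metrics."""
    report = {}
    for name, predict in models.items():
        preds = [predict(x) for x in test_inputs]
        report[name] = {
            "accuracy": accuracy(preds, test_labels),
            "false_positive_rate": false_positive_rate(preds, test_labels),
        }
    return report

# Two toy threshold "models" standing in for a procured vs. an in-house system.
models = {
    "vendor_model": lambda x: 1 if x >= 0.5 else 0,
    "in_house_model": lambda x: 1 if x >= 0.7 else 0,
}
inputs = [0.1, 0.4, 0.6, 0.8, 0.9]
labels = [0, 0, 1, 1, 1]
print(benchmark(models, inputs, labels))
```

Because both models see identical inputs and labels, any difference in the reported metrics reflects the models themselves rather than the evaluation setup.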
About the tool
Developing organisation(s):
Objective(s):
Target sector(s):
Country of origin:
Type of approach:
Maturity:
Usage rights:
Target users:
Stakeholder group:
Tags:
- responsible ai
- ai risks
- safety
- benchmarking
Use Cases
Resaro’s Performance and Robustness Evaluation: Facial Recognition System on the Edge
Resaro’s Bias Audit: Evaluating Fairness of LLM-Generated Testimonials