Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Resaro

Resaro offers independent, third-party assurance of mission-critical AI systems. It promotes responsible, safe and robust AI adoption for enterprises through technical advisory and the evaluation of AI systems against emerging regulatory requirements. Resaro combines expert insights, its data science and engineering team, and proprietary testing tools and protocols to deliver independent, tailored AI assurance that supports compliance.

Resaro's products aim to assure that AI is:

  • Responsible: verifying that human oversight and control are in place so the AI system performs as intended under all circumstances.
  • Safe: ensuring that the AI is aligned with the values that define us as humans and does not cause harm.
  • Robust: testing whether the AI system can withstand unforeseen failures and adversarial attacks by malicious third-party actors.

Resaro offers four main solutions:

  • Technical AI evaluation: technical validation and performance benchmarking of AI models, before and after companies procure or develop them, to allow standardised comparisons.
  • AI stress-testing: anticipating what could go wrong and using Resaro's advanced test protocols and tools to understand the performance limits of an AI model.
  • AI assurance advisory: expert advice on governing and managing the risks of AI in a given business context, and on aligning with global guidelines and standards.
  • AI assurance training: training for a company's executives and technology teams on good AI/ML practices, innovation-friendly procurement practices and ways to mitigate risk.

About the tool


Tags:

  • ai responsible
  • ai risks
  • safety
  • benchmarking

Use Cases

Resaro’s Performance and Robustness Evaluation: Facial Recognition System on the Edge

Resaro evaluated the performance of third-party facial recognition (FR) systems that run on the edge, in the context of assessing vendors’ claims about the performance and robustness of the system in highly dynamic operational conditions. As par...
Oct 2, 2024

Resaro’s Bias Audit: Evaluating Fairness of LLM-Generated Testimonials

A government agency based in Singapore engaged Resaro’s assurance services to ensure that an LLM-based testimonial generation tool is unbiased with respect to gender and race and in line with the agency’s requirements. The tool uses LLMs to help teach...
Oct 3, 2024


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.