Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SAIMPLE

SAIMPLE® is developed by Numalis, a deep-tech company that helps critical industries confidently adopt Artificial Intelligence by providing state-of-the-art methods and tools for developing trustworthy AI systems through robustness validation and explainable decision-making. SAIMPLE is also referenced in the CONFIANCE.AI catalogue of tools for the development of trustworthy AI systems.

SAIMPLE® is a static analyzer based on abstract interpretation, designed specifically for AI algorithm validation. It leverages the state-of-the-art techniques described in the ISO/IEC 24029-2:2023 standard to assess and validate the robustness of Machine Learning (ML) models against real-world perturbations within the domain of use, using formal methods.
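SAIMPLE's internals are proprietary, but the general idea behind abstract-interpretation-based robustness assessment can be illustrated with interval bound propagation: every input within a perturbation ball is soundly over-approximated by an interval, and the model is certified robust if the predicted class wins for the entire propagated interval. The tiny two-layer network and its weights below are purely illustrative assumptions, not part of SAIMPLE.

```python
import numpy as np

# Hypothetical toy classifier (weights illustrative only): 2 inputs -> 2 logits
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -0.5])
W2 = np.array([[2.0, -1.0], [-1.0, 1.0]]); b2 = np.array([0.1, 0.0])

def forward(x):
    """Concrete forward pass: affine -> ReLU -> affine."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def interval_affine(lo, hi, W, b):
    """Soundly propagate the box [lo, hi] through y = W @ x + b."""
    center = (lo + hi) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case growth of the box
    return new_center - new_radius, new_center + new_radius

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def verify_robust(x, eps):
    """True if every input in the L-infinity ball of radius eps around x
    provably keeps the same predicted class as x itself."""
    pred = int(np.argmax(forward(x)))
    lo, hi = x - eps, x + eps
    lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
    lo, hi = interval_affine(lo, hi, W2, b2)
    # Robust iff the predicted logit's lower bound beats every other
    # logit's upper bound over the whole perturbation ball.
    others = [hi[j] for j in range(len(hi)) if j != pred]
    return bool(lo[pred] > max(others))

x = np.array([1.0, 0.0])
print(verify_robust(x, 0.01))  # small perturbation: certified robust -> True
print(verify_robust(x, 2.0))   # large perturbation: cannot certify -> False
```

Because intervals over-approximate, a `False` result means "not certified", not necessarily "not robust"; tighter abstract domains (zonotopes, polyhedra) reduce this gap at higher cost.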

Additionally, it delivers human-understandable visualizations of model decisions across the input space, allowing users to extract relevant explainability components.
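The source does not describe SAIMPLE's explainability method, but one generic way such per-feature relevance components can be extracted is occlusion analysis: replace each input feature with a neutral baseline and measure how much the model's score changes. The toy linear model below is an illustrative assumption only.

```python
import numpy as np

# Hypothetical toy scoring model (weights illustrative only): 4 inputs -> score
W = np.array([0.8, -0.2, 1.5, 0.0])

def score(x):
    return float(W @ x)

def occlusion_relevance(x, baseline=0.0):
    """Relevance of each feature: the drop in the model's score when that
    feature is replaced by a neutral baseline value."""
    base = score(x)
    rel = np.empty_like(x)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = baseline  # mask out one feature at a time
        rel[i] = base - score(occluded)
    return rel

x = np.ones(4)
print(occlusion_relevance(x))  # features with larger |weights| matter more
```

Visualizing such relevance scores over the input space (e.g. as a heatmap over image pixels) yields the kind of human-understandable decision view the passage describes.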

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.