Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

NayaOne’s AI Sandbox

NayaOne is a Sandbox-as-a-Service provider to tier-1 financial services institutions, world-leading regulators, and governments. Its sandbox gives government departments, regulators, and regulated entities a secure environment in which to test data and models, supporting the ethical and responsible use of artificial intelligence (AI). The sandbox addresses key concerns in AI deployment by providing a single environment where AI can be evaluated and procured, while also enabling collaboration and access to world-leading tools.

  • The sandbox provides a controlled setting where entities can securely test data and AI models, simulating real-world conditions without the risk of affecting live operations or exposing sensitive information. 
  • The sandbox is a secure environment that does not connect to the regulator's, regulated entity's, or government's networks, ensuring they can safely and rapidly procure and evaluate AI models that they wish to use in the organisation.
  • The setup comes preinstalled with performance and stress testing capabilities, as well as advanced AI tools including, but not limited to, the following (minimal illustrative sketches of each appear after this list):
    • Bias detection: detecting bias within datasets and models, helping entities identify and mitigate unfair biases that could lead to discriminatory practices.
    • Explainability: understanding how models make decisions, ensuring transparency and accountability in AI operations.
    • Drift monitoring: built-in functionality to monitor for model drift helps ensure that AI models remain accurate and relevant over time, adjusting to new data and changing environments.
    • Label leakage prevention: safeguarding against the inadvertent use of information in training data that unfairly inflates the model's measured performance.
    • Hallucination detection: for generative AI models, specific tools detect and analyse hallucinations, helping ensure that generated outputs are accurate and reliable.
  • The environment supports a wide range of AI models and data types, catering to the diverse needs of government sectors, regulatory bodies, and regulated entities.
  • Collaboration tools within the sandbox enable teams and third parties to work together on testing and refining their AI systems, fostering an environment of shared learning and improvement. This can be done privately or through wider TechSprint and Hackathon models. 
  • Comprehensive reporting tools provide detailed insights into test results, highlighting areas of concern and recommending improvements. 
  • It allows for the testing of models under a variety of conditions and scenarios, including stress tests that evaluate performance on extreme or unexpected data (sketched below). 
  • Regular updates and support ensure that the sandbox remains at the cutting edge of AI testing technology, incorporating the latest research and tools.
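
To make these capabilities concrete, the sketches below illustrate in Python the kind of check each one performs. They are minimal illustrations under stated assumptions, not NayaOne's implementations: all data, feature names, and thresholds are hypothetical.

The first sketch illustrates bias testing by computing the demographic parity difference, i.e. the gap in positive-outcome rates between groups; the loan-approval columns and the 0.1 review threshold are illustrative assumptions.

```python
# Minimal sketch of a dataset/model bias check (illustrative only; not
# NayaOne's implementation). Computes the demographic parity difference:
# the gap in positive-outcome rates across groups.
import pandas as pd

def demographic_parity_difference(df, group_col, outcome_col):
    """Largest gap in positive-outcome rate across groups (0 = parity)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical loan-approval data; column names are assumptions.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
gap = demographic_parity_difference(data, "group", "approved")
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:  # illustrative threshold; real reviews tune this per context
    print("Potential bias: review the model and training data.")
```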
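
Explainability checks can be sketched with permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. This is one generic technique, not necessarily the one the sandbox uses; the synthetic data below is an assumption.

```python
# Minimal sketch of a model-explanation check (illustrative only) using
# permutation importance: shuffling an important feature hurts accuracy,
# while shuffling an irrelevant one does not.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=2)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={importance:.3f}")
```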
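
Model drift monitoring can be sketched with a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution against recent live data; the p-value threshold below is an illustrative assumption.

```python
# Minimal sketch of feature drift detection (illustrative only): a small
# p-value from a two-sample KS test suggests the live distribution has
# drifted away from the training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # reference
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)      # shifted

statistic, p_value = ks_2samp(training_feature, live_feature)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")
if p_value < 0.01:  # illustrative threshold; tuned per feature in practice
    print("Drift detected: consider retraining or recalibrating the model.")
```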
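
Label leakage, where a training feature encodes information unavailable at prediction time, can be screened for by checking whether any single feature predicts the label near-perfectly. The per-feature AUC screen below is one common heuristic; the feature names and the 0.99 threshold are assumptions.

```python
# Minimal sketch of a label-leakage screen (illustrative only): a single
# feature with near-perfect AUC against the label is a classic leakage sign.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=1)
y = rng.integers(0, 2, size=1_000)
features = {
    "income": rng.normal(size=1_000),                        # unrelated noise
    "days_overdue": y + rng.normal(scale=0.01, size=1_000),  # leaks the label
}

for name, values in features.items():
    auc = roc_auc_score(y, values)
    auc = max(auc, 1 - auc)  # direction-agnostic
    flag = "  <-- possible leakage" if auc > 0.99 else ""
    print(f"{name}: AUC={auc:.3f}{flag}")
```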
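
Hallucination testing for generative models is an open research area; one simple proxy is to flag answer sentences that have little lexical support in the source text the model was given. Production tools typically use entailment models instead; the word-overlap score and 0.5 threshold below are purely illustrative.

```python
# Minimal sketch of a groundedness check for generative output (illustrative
# only): flags answer sentences with little word overlap with the source.
import re

def support_score(sentence, source):
    """Fraction of the sentence's words that also appear in the source."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    return len(words & source_words) / max(len(words), 1)

source = "The bank approved 120 loans in March, up from 90 in February."
answer = ("The bank approved 120 loans in March. "
          "The CEO resigned over the results.")

for sentence in re.split(r"(?<=\.)\s+", answer):
    score = support_score(sentence, source)
    flag = "  <-- possible hallucination" if score < 0.5 else ""
    print(f"{score:.2f}  {sentence}{flag}")
```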
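
Finally, the stress tests mentioned above can be sketched by injecting increasing noise into a model's inputs and tracking how accuracy degrades; the model, data, and noise levels are illustrative assumptions.

```python
# Minimal sketch of a data stress test (illustrative only): measure how a
# model's accuracy degrades as increasing noise is injected into its inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=3)
X = rng.normal(size=(1_000, 4))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0]) > 0).astype(int)
model = LogisticRegression(max_iter=1_000).fit(X, y)

for noise_scale in [0.0, 0.5, 1.0, 2.0]:  # illustrative stress levels
    X_stressed = X + rng.normal(scale=noise_scale, size=X.shape)
    accuracy = model.score(X_stressed, y)
    print(f"noise={noise_scale:.1f}: accuracy={accuracy:.3f}")
```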

The sandbox approach means that all regulated entities, regulators, and governments can test and validate AI in a single secure environment that is disconnected from their production and internal systems. 

Benefits include:

  • Technical benefits:
    • Rapid procurement of AI models.
    • Secure access to deploy AI models and data in order to test and evaluate them.
    • Out-of-the-box access to leaders in the AI and GenAI space – ready to test and procure.
    • Select from the NayaOne AI testing suite or bring your own evaluation criteria.
  • Educational Benefits:
    • The tool educates developers on AI best practices, encouraging responsible and ethical design.
    • The tool enables executives to better understand AI and its benefits while giving them hands-on experience.
  • Procedural benefits:
    • Consistent, world-leading product development and risk evaluation for all AI tools tested or procured through the sandbox. 

As a sandbox, any technique that the organisation wishes to add to the AI sandbox's suite of tests can be employed. However, the sandbox requires access to a version of the AI model, whether it is developed in the sandbox, by the regulator or regulated organisation, or by a third party. 

This case study was published in collaboration with the UK Department for Science, Innovation and Technology as part of its Portfolio of AI Assurance Techniques. You can read more about the Portfolio, and how to upload your own use case, on the Portfolio's webpage.

About the tool

Tags:

  • biases testing
  • evaluation
  • transparency
  • accountability
  • sandbox

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.