Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

OpenEnv



OpenEnv is a framework for evaluating AI agents against real systems rather than simulations. It provides a standardised way to connect agents to real tools and workflows while preserving the structure needed for consistent and reliable evaluation.

OpenEnv is an open-source framework from Meta’s PyTorch team for defining, deploying, and interacting with environments in reinforcement learning (RL) and agentic workflows. It offers Gymnasium-style APIs (e.g., reset() and step()) to interface with environments in a standard manner, and supports running these environments as backend servers (for example, via HTTP or containerised execution). 
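The reset()/step() contract mentioned above can be illustrated with a toy environment. The class and method bodies below are a minimal sketch of the Gymnasium-style interaction pattern, not OpenEnv's actual classes or API.

```python
from dataclasses import dataclass


@dataclass
class StepResult:
    """Bundle returned by step(): observation, reward, and episode-done flag."""
    observation: str
    reward: float
    done: bool


class CountdownEnv:
    """Toy environment following the Gymnasium-style reset()/step() contract.

    The agent must count a value down to zero; the episode ends when it
    gets there. This mirrors the interaction loop OpenEnv standardises
    (and could run behind an HTTP server), but is purely illustrative.
    """

    def __init__(self, start: int = 3):
        self.start = start
        self.state = start

    def reset(self) -> str:
        """Begin a new episode and return the initial observation."""
        self.state = self.start
        return f"current value: {self.state}"

    def step(self, action: int) -> StepResult:
        """Apply the agent's action and return the next observation."""
        self.state -= action
        done = self.state <= 0
        reward = 1.0 if done else 0.0
        return StepResult(f"current value: {self.state}", reward, done)


# The standard agent loop: reset once, then step until the episode ends.
env = CountdownEnv(start=3)
obs = env.reset()
total_reward = 0.0
while True:
    result = env.step(1)  # trivial policy: always decrement by 1
    total_reward += result.reward
    if result.done:
        break
```

Because every environment exposes the same two calls, the same agent loop works unchanged whether the environment is a local object, a container, or a remote server.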

The framework measures agent performance across dimensions critical to trustworthy deployment: multi-step reasoning over long horizons, correct tool selection and argument formation, permission and access control handling, partial observability, and graceful error recovery. A reference environment called the Calendar Gym, contributed by Turing, exposes agents to realistic scheduling constraints, including access control lists, temporal reasoning, and multi-agent coordination. Benchmarking in this environment has surfaced consistent failure patterns: success rates drop from roughly 90% on explicit tasks to around 40% on ambiguous ones, and over half of failures stem from malformed tool arguments rather than incorrect tool selection.
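The finding that most failures come from malformed arguments, not wrong tool choice, points to the value of checking proposed tool calls against a declared schema before executing them. The sketch below shows one simple way a harness might do this; the tool name, schema, and helper function are hypothetical, not part of OpenEnv.

```python
def validate_tool_call(schema: dict, args: dict) -> list[str]:
    """Return a list of problems with a proposed tool call (empty if valid).

    `schema` maps each required argument name to its expected Python type.
    This is a deliberately minimal check: real harnesses would also
    validate value ranges, formats, and permissions.
    """
    errors = []
    for name, expected_type in schema.items():
        if name not in args:
            errors.append(f"missing required argument: {name}")
        elif not isinstance(args[name], expected_type):
            errors.append(
                f"{name}: expected {expected_type.__name__}, "
                f"got {type(args[name]).__name__}"
            )
    for name in args:
        if name not in schema:
            errors.append(f"unexpected argument: {name}")
    return errors


# Hypothetical schema for a calendar-style "create event" tool.
create_event_schema = {"title": str, "start": str, "duration_minutes": int}

# A malformed call of the kind benchmarks surface: the duration is passed
# as a string, and an argument the tool does not accept is included.
bad_call = {
    "title": "Sync",
    "start": "2025-01-10T09:00",
    "duration_minutes": "30",
    "attendee": "ana",
}
print(validate_tool_call(create_event_schema, bad_call))
```

Catching these errors at the environment boundary makes the failure mode measurable and attributable, which is what turns a vague "the agent failed" into a repeatable diagnostic.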

By surfacing measurable, repeatable failure modes in realistic settings, OpenEnv helps the AI community move beyond benchmark performance toward production reliability. It supports accountability by making agent limitations transparent and comparable, and provides a foundation for improving robustness and safety before deployment. 

Use Cases

There are no use cases for this tool yet.

Partnership on AI

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.