Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

ISACA: Digital Trust Ecosystem Framework (DTEF) application to assure AI environments

ISACA’s Digital Trust Ecosystem Framework (DTEF) offers enterprises a holistic framework that applies systems thinking – the notion that a change in one area can have an impact on another – across an entire organisation.

As such, the Framework encompasses six domains: Culture, Emergence, Human Factors, Direct and Monitor, Architecture, Enabling and Support.

The Framework is suitable for mid- to senior-level executives and practitioners who are either developing a strategy for AI or implementing AI tools, and who are seeking guidance on techniques to establish trust and trustworthiness in AI. The Framework is not prescriptive or narrow; it includes detailed practices, activities, outcomes, controls, KPIs and KRIs that a practitioner can use to implement and assess progress against. Additionally, it is aligned with many existing frameworks, so an enterprise that has already adopted a framework such as ISO 27001 or the NIST CSF is likely already performing many of the tasks outlined in the DTEF.
The Framework is designed to build assurance for a range of emerging technology systems and is particularly pertinent to AI, which is likely to be applied beyond an organisation’s technology or security departments and will therefore have implications that cut across departments and business units.
Much in the mode of the principles-based approach to AI safety set out in the UK’s AI White Paper, ISACA’s Framework reflects the fluidity of AI systems and encourages organisations to examine proposals across a broad range of different perspectives. The Framework’s breadth means organisations can assess security questions that are technical, practical, and ethical, as well as manage and review the business and financial case for their AI use. The DTEF encourages organisations to revisit the metrics and outputs produced in the process of its application, and to continually review their assurance using compatible maturity assessment frameworks.

The DTEF enables organisations to take a strategic view of a potential AI deployment. It encourages consideration of the target culture for the use and deployment of AI (and thus has the potential to illuminate cultural inhibitors). It also helps organisations set the expected boundaries for AI actors, control input variables, and define the controls that will support the user experience, ultimately helping them determine the resources required to run, control and manage an AI system. Using the Framework enables organisations to think holistically about the business and financial case for AI use, and then decide whether it is appropriate to embed AI within their service value chain. This overtly strategic approach is more likely to surface risks that might not otherwise be identified by solely tactical or technical teams, and it increases the likelihood of realising the expected benefits of the implementation.

While the DTEF provides a foundational starting point, to get the most value from this approach organisations will need to tailor the activities, outcomes and controls to their specific business and industry. This approach also encourages organisations to maintain appropriate skillsets – not only technical skills, but also skills in risk, security, business change management and project management.

Link to the full use case.

This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.



Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.