From principles to practice: tools for implementing trustworthy AI
The OECD.AI Network of Experts presents a framework for evaluating approaches to trustworthy AI. The framework is a major step toward helping AI actors move “from principles to practice” in the global effort to implement the OECD AI Principles.
Visit the OECD iLibrary to download the report
The framework helps AI practitioners determine which tool fits their use case and how well it supports the OECD AI Principles for trustworthy AI. As users apply the framework, they can submit tools via a form to a live database that offers interactive features and information on the latest tools.
This report is the product of a year’s work by the OECD.AI Network’s Working Group on Implementing Trustworthy AI.
5 takeaways from the Tools for trustworthy AI report
1. AI policy discussions have moved from principles to implementation
Technical, business, academic and policy stakeholder communities are actively exploring how to encourage all actors to focus on making AI human-centred and trustworthy. In part, this means maximising AI’s benefits while minimising the risks.
However, it is still a challenge to ensure that the outcomes of AI systems promote shared wellbeing and prosperity while protecting individual rights and democratic values.
2. Efforts to implement trustworthy AI are scattered
Many tools exist, and more are being developed, to help AI actors navigate the challenges involved in building and deploying trustworthy AI. They include instruments and structured methods that facilitate the implementation of the OECD AI Principles. However, information about these tools is often sparse, hard to find and detached from broader international policy discussions.
3. AI actors need a common framework to compare tools for trustworthy AI
If actors who have already implemented the OECD AI Principles share their experiences and lessons learned, adoption of the Principles will accelerate. Actors should do so by collecting and disseminating concrete tools, practices and approaches under a common framework that is accessible and allows for comparability.
Blog post: Trustworthy AI working group chairs explain the group’s approach to building the framework
4. The framework identifies relevant tools for developing, using and deploying trustworthy AI systems
Tools are classified according to AI systems’ specific needs and contexts. While the framework is not designed to assess the quality or completeness of an individual tool, it does provide the means to compare and analyse tools across different use contexts.
5. Available soon: a database of tools for trustworthy AI on OECD.AI
The interactive database will provide AI actors and policy makers with information on the latest tools that help ensure that AI systems in different contexts abide by the principles of human rights and fairness; transparency and explainability; robustness, security and safety; and accountability.