Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Google 2022 AI Principles Progress Update

Because AI is core to Google products, we remain committed to sharing our lessons learned and emerging responsible innovation practices, drawing on more than 20 years of using machine learning and more than a decade of AI research. Rooted in our nearly 25-year-old mission to organize the world’s information and make it universally accessible and useful, Google’s innovation strategy is to iterate on the process of innovation itself. This means we create projects that not only exemplify engineering excellence but also, from their earliest moments, embody the human-centered values manifested in Google’s AI Principles.

We do so by incorporating responsible practices for fairness, safety, privacy, and transparency early in developers’ machine learning workflows and throughout the product development lifecycle. This principled approach to AI research and development is also practical: it helps avoid engineering cycles wasted on retrofitting technology when an issue emerges after launch, or even much later. It also aligns with our product excellence mantra to put the user first and with our focus on building for everyone.

Since we launched our AI Principles in 2018, we’ve built and tested an industry-leading governance process to align AI projects across the company with those Principles. We center our governance on three pillars:

1. AI Principles, which serve as our ethical charter and inform our policies.

2. Education and resources, such as ethics training and technical tools to test, evaluate, and monitor the application of the Principles to all of Google’s products and services.

3. Structures and processes, which include risk assessment frameworks, ethics reviews, and executive accountability.

As our CEO has said, AI is too important not to regulate, and too important not to regulate well. AI legislation and related principles and standards should help lower risks to people without unduly stifling innovation or undermining AI’s promise for social benefit at the global level. And of course AI frameworks overlap with other important regulatory issues, including content safety, child safety, privacy, and consumer protection. A holistic approach will help keep new rules from impeding innovation and competition in AI and related emerging technologies. 

We hope that sharing metrics on our progress and lessons learned on issues such as responsible AI, algorithmic transparency, privacy-enhancing technologies, and AI R&D supports the important progress being made across the global AI community.

About the tool


Developing organisation(s): Google


Objective(s):



Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.