Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Proposed Model Governance Framework for Generative AI

Generative AI has captured the world’s imagination. While it holds significant transformative potential, it also comes with risks. Building a trusted ecosystem is therefore critical – it helps people embrace AI with confidence, gives maximal space for innovation, and serves as a core foundation to harnessing AI for the Public Good. 

AI is a technology that has been developing over many years. Prior development and deployment is sometimes termed traditional AI. To lay the groundwork for the responsible use of traditional AI, Singapore released the first version of the Model AI Governance Framework in 2019 and updated it in 2020. The recent advent of generative AI has reinforced some of the same AI risks (e.g. bias, misuse, lack of explainability) and introduced new ones (e.g. hallucination, copyright infringement, value alignment). These concerns were highlighted in our earlier Discussion Paper on Generative AI: Implications for Trust and Governance, issued in June 2023. The discussions and feedback have been instructive.

Existing governance frameworks need to be reviewed to foster a broader trusted ecosystem. A careful balance needs to be struck between protecting users and driving innovation. Various international discussions have also taken up the related and pertinent topics of accountability, copyright, and misinformation, among others. These issues are interconnected and need to be viewed in a practical and holistic manner. No single intervention will be a silver bullet.

This Model AI Governance Framework for Generative AI therefore sets forth a systematic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation. It requires all key stakeholders, including policymakers, industry, the research community, and the broader public, to collectively do their part. The Framework proposes nine dimensions, to be looked at in totality, to foster a trusted ecosystem.


Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.