Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models

Increasingly multi-purpose AI systems, such as state-of-the-art large language models and other “general-purpose AI” systems (GPAI or GPAIS), “foundation models,” generative AI, and “frontier models” (hereafter referred to collectively by the umbrella term GPAIS), can provide many beneficial capabilities. However, they also pose risks of adverse events, such as large-scale manipulation of people through GPAIS-generated misinformation or disinformation, or other events with harmful impacts at societal scale.

This document provides an AI risk-management standards Profile: a targeted set of risk-management practices and controls for identifying, analyzing, and mitigating risks of GPAIS. The Profile is designed to complement the broadly applicable guidance in the NIST AI Risk Management Framework (AI RMF) or a related AI risk-management standard such as ISO/IEC 23894.

We intend this Profile primarily for use by developers of large-scale, state-of-the-art GPAIS. For those developers, the Profile facilitates conformity with leading AI risk-management standards and aims to support compliance with relevant regulations, such as the forthcoming EU AI Act, especially for aspects related to GPAIS. (However, this Profile does not provide all the guidance that may be needed for GPAIS applications in particular industry sectors or use cases.)

Others who can benefit from this guidance include downstream developers of end-use applications built on a GPAIS platform, evaluators of GPAIS, and the regulatory community. The document provides GPAIS deployers, evaluators, and regulators with information useful for assessing the extent to which developers of such AI systems have followed relevant best practices. Widespread adoption of best practices such as those in this Profile can help ensure that GPAIS developers remain competitive without compromising on AI safety, security, accountability, and related practices.

Ultimately, this Profile aims to help key actors in the value chains of increasingly general-purpose AI systems maximize benefits, and minimize negative impacts, for individuals, communities, organizations, society, and the planet. That includes protecting human rights, minimizing negative environmental impacts, and preventing adverse events with systemic or catastrophic consequences at societal scale.

Use Cases

There are no use cases for this tool yet.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.