The OECD.AI Policy Navigator

Our Policy Navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations. Use the filters to browse initiatives and find what you are looking for.

NIST AI Risk Management Framework

Added by: National contact point
Added on: 06 May 2026
Updated by: OECD analyst
Updated on: 06 May 2026

The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a voluntary framework released by the U.S. National Institute of Standards and Technology (NIST) to help organisations manage risks associated with the design, development, deployment and evaluation of AI systems, with a focus on promoting trustworthy and responsible AI.

Initiative overview

The NIST AI Risk Management Framework is a non‑binding, voluntary framework developed by NIST’s Information Technology Laboratory in collaboration with public and private stakeholders. Released in January 2023, it is intended to support organisations in systematically identifying, assessing and managing risks related to AI systems throughout their lifecycle. The framework is explicitly designed to integrate trustworthiness considerations—such as reliability, safety, security, resilience, accountability and transparency—into AI development and use.

The AI RMF provides a structured approach that organisations can adapt to different AI use cases, sectors and risk profiles. It is applicable to a wide range of AI systems, including high‑impact and emerging applications, and is meant to align with and complement other AI governance, standards and risk‑management efforts. To support practical implementation, NIST has published accompanying resources such as the AI RMF Playbook, profiles addressing specific AI contexts including generative AI, and a roadmap to support ongoing development and alignment.

In terms of AI relevance, the framework addresses AI directly as a socio‑technical system, recognising that risks arise not only from technical performance but also from data, human interaction, organisational context and societal impacts. By providing a common language and structure for AI risk management, the NIST AI RMF aims to support responsible AI innovation, improve trust in AI systems and enable organisations to deploy AI in ways that consider both benefits and potential harms.

About the policy initiative