Initiative overview
The NIST AI Risk Management Framework (AI RMF) is a voluntary framework developed by NIST's Information Technology Laboratory in collaboration with public- and private-sector stakeholders. Released in January 2023, it helps organisations systematically identify, assess and manage risks related to AI systems throughout their lifecycle. The framework is explicitly designed to integrate trustworthiness considerations, such as reliability, safety, security, resilience, accountability and transparency, into AI development and use.
The AI RMF provides a structured approach, organised around four core functions (Govern, Map, Measure and Manage), that organisations can adapt to different AI use cases, sectors and risk profiles. It is applicable to a wide range of AI systems, including high‑impact and emerging applications, and is meant to align with and complement other AI governance, standards and risk‑management efforts. To support practical implementation, NIST has published accompanying resources such as the AI RMF Playbook, profiles addressing specific AI contexts including generative AI, and a roadmap to support ongoing development and alignment.
The framework addresses AI directly as a socio‑technical system, recognising that risks arise not only from technical performance but also from data, human interaction, organisational context and societal impacts. By providing a common language and structure for AI risk management, the NIST AI RMF aims to support responsible AI innovation, improve trust in AI systems and enable organisations to deploy AI in ways that consider both benefits and potential harms.