The OECD.AI Policy Navigator

Our policy navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations. Use the filters to browse initiatives and find what you are looking for.

NIST Principles for Explainable AI

The National Institute of Standards and Technology (NIST) developed a draft report intended to stimulate a conversation about what NIST should expect of decision-making devices, as part of a broader NIST effort to help develop trustworthy AI systems.

Name in original language

NIST Principles for Explainable AI

Initiative overview

The initiative has the following objectives:

  • To provide guidance that AI systems should deliver accompanying evidence or reasons for all their outputs.
  • To provide guidance that AI systems should provide explanations that are meaningful or understandable to individual users.
  • To provide guidance that such explanations correctly reflect the system's process for generating its output.
  • To provide guidance that AI systems should only operate under conditions for which they were designed, or when they reach sufficient confidence in their output.

Name of responsible organisation (in English)

National Institute of Standards and Technology (NIST)

About the policy initiative


Organisation:

  • National Institute of Standards and Technology (NIST)

Category:

  • Regulations, guidelines and standards

Initiative type:

  • Principles/guidelines/frameworks for trustworthy AI

Status:

  • Inactive – initiative complete

Start Year:

  • 2020

End Year:

  • 2021

Binding:

  • Non-binding
