Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Judgment Assurance

As AI increasingly informs consequential decisions, a persistent gap has emerged: the governance of human judgment itself. Judgment Assurance closes this gap by treating judgment not as individual intuition but as a deliberate institutional asset.

This tool suite provides a technology-agnostic, scalable governance layer that complements existing technical AI frameworks. It includes:

The Core Framework: A methodology for defining and preserving "Institutional Inheritance" and judgment maturity.

JA-UQ (Underwriting Questionnaire): An evidence-weighted instrument for evaluating accountability and oversight maturity in audit and regulatory contexts.

JAMM-PS (Maturity Model): A verifiable tiering system (Levels 0–4) that defines the assurance floor for decision reconstructibility.

Designed for high-stakes environments, Judgment Assurance ensures that when AI influences an outcome, the human "why" is captured contemporaneously rather than post-hoc. It provides a common vocabulary for boards, insurers, and regulators to measure decision-governance maturity independently of AI architecture.
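The notion of capturing the human "why" contemporaneously, tiered against a Levels 0–4 maturity scale, can be sketched as a minimal decision record. Everything below (class names, fields, level labels, the `capture_decision` helper) is a hypothetical illustration under assumptions, not part of the Judgment Assurance specification:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import IntEnum

# Hypothetical level names: JAMM-PS defines Levels 0-4 as the assurance
# floor for decision reconstructibility; these labels are assumptions.
class MaturityLevel(IntEnum):
    UNGOVERNED = 0
    AD_HOC = 1
    DOCUMENTED = 2
    VERIFIABLE = 3
    INSTITUTIONALISED = 4

@dataclass
class DecisionRecord:
    decision_id: str
    ai_recommendation: str
    human_decision: str
    rationale: str  # the human "why", recorded at decision time
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def capture_decision(decision_id, ai_recommendation, human_decision, rationale):
    """Capture the rationale contemporaneously rather than post-hoc."""
    if not rationale.strip():
        raise ValueError("rationale must be recorded at decision time")
    return DecisionRecord(decision_id, ai_recommendation, human_decision, rationale)

# Example: a human overrides an AI recommendation and the reason is
# timestamped alongside the decision, making it reconstructible later.
record = capture_decision(
    "loan-1042", "decline", "approve",
    "Applicant's income documentation superseded model features",
)
print(record.rationale)
```

The design choice worth noting is that the rationale is a required argument of the capture function itself, so a decision cannot be recorded without its "why" attached at the same moment.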

Boundary Conditions: Judgment Assurance is a discipline for governing human oversight and does not assess technical model performance, data bias, or algorithmic accuracy. It is designed to complement existing technical AI controls by addressing the specific "Accountability Gap" in human-in-the-loop decision-making.

About the tool


Developing organisation(s): Partnership on AI
Tags:

  • transparency
  • trustworthiness
  • auditability
  • accountability
  • ai guardrails
  • model risk management
  • responsible ai
  • self-assessment
  • AI Governance & Policy
  • artificial intelligence governance
  • human-in-the-loop ai governance
  • ai accountability platform
  • ai assurance


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Partnership on AI

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.