Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Automated Decision-Making Implications Tool (ADMIT)


ADMIT is a research tool within a broader methodological framework combining quantitative and qualitative strategies to identify, analyse, and mitigate social implications associated with automated decision-making systems while enhancing their potential benefits. It supports comprehensive assessments of sociotechnical impacts to inform responsible design, deployment, and governance of automation technologies.

ADMIT is a research tool developed as part of a doctoral research project in the social sciences, jointly conducted at the University of Naples Federico II and the University of Bristol. Its development is strongly interdisciplinary, emerging from academic-industrial collaboration with developers and technology companies. ADMIT offers a structured methodological pathway for assessing the social implications of Automated Decision-Making Systems (ADMS), addressing the often conflicting interests of the stakeholders affected by the system under analysis.

The tool consists of two main parts, each designed to evaluate and manage social implications:

  • risk assessment;
  • mitigation/support mechanisms.

It integrates qualitative and quantitative criteria; the latter are operationalised through a seven-level scale that classifies risk severity and the strength of mitigation or protective measures.

The tool is currently undergoing continuous development and testing in industrial contexts. During this production phase, the involvement of key informants and domain experts is central to validating the scoring criteria and reinforcing the tool’s structure. This process also includes developing software applications to support the assessment workflow and ensure operational robustness.

About the tool


Tags:

  • evaluation
  • auditability
  • ai risk management
  • accountability
  • social impact
  • impact assessment
  • risk
  • process automation
  • adms assessment


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.