Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Assessment framework for non-discriminatory AI systems

The assessment framework for non-discriminatory AI systems was developed in 2022 by Demos Helsinki, the University of Turku and the University of Tampere as part of the Avoiding AI Biases project for the Finnish Government's analysis, assessment and research activities. The framework helps identify and manage risks of discrimination, especially in public-sector AI systems, and promotes equality in the use of AI.

The framework's main intended audience is the public sector: civil servants can use it to assess the possible discriminatory effects of AI systems when planning, procuring or deploying them. Indirectly, it also serves as a tool for AI developers to assess their own systems and processes, especially those intended for public use. The framework takes into account the Finnish Non-Discrimination Act, which obligates authorities, education providers and employers not only to prevent discrimination but also to promote equality.

In line with the AI lifecycle model, the assessment framework emphasises that the discriminatory impacts of AI systems can arise at different stages of development. It therefore serves as an algorithmic impact assessment process for addressing risks of discrimination and promoting equality throughout the lifecycle of an AI system:

1) Design, i.e. the initial assessment and definition of the AI system's objectives, motivations for use, necessity and equality impacts.

2) Development, covering three areas: data and its preparation, training of the algorithmic model and validation of the model.

3) Deployment, including issues such as human oversight, transparency and monitoring the system’s equality impacts in practice.

For each lifecycle stage, the assessment framework poses a set of questions that describe the discrimination risks arising at that stage. It acts as a self-assessment tool that produces a risk score based on the answers provided. Because risks of discrimination are always contextual, users are encouraged to tailor the framework to their specific use case by modifying it.
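To make the question-and-score mechanism concrete, below is a minimal Python sketch of how such a stage-by-stage self-assessment could be encoded. The question texts, weights and scoring rule are hypothetical placeholders for illustration only; the actual framework is distributed as an Excel/PDF questionnaire, and its real questions and scoring differ.

```python
# Illustrative sketch of a questionnaire-based risk score, assuming a
# simple weighted-sum scoring rule. All questions and weights here are
# hypothetical, not taken from the framework itself.
from dataclasses import dataclass


@dataclass
class Question:
    stage: str    # "design", "development" or "deployment"
    text: str
    weight: int   # hypothetical: how strongly a "yes" raises the risk score


QUESTIONS = [
    Question("design", "Were objectives defined without an equality impact analysis?", 3),
    Question("development", "Does the training data under-represent protected groups?", 3),
    Question("development", "Was the model validated without group-level error analysis?", 2),
    Question("deployment", "Is the system used without ongoing human oversight?", 2),
]


def risk_score(answers: dict[str, bool]) -> dict[str, int]:
    """Sum the weights of risk-indicating ("yes") answers, per lifecycle stage."""
    scores: dict[str, int] = {}
    for q in QUESTIONS:
        if answers.get(q.text, False):
            scores[q.stage] = scores.get(q.stage, 0) + q.weight
    return scores


# Example: two risk-indicating answers in the development stage.
answers = {QUESTIONS[1].text: True, QUESTIONS[2].text: True}
print(risk_score(answers))  # {'development': 5}
```

Tailoring the framework to a specific use case, as the authors recommend, would amount to editing the question list and weights for the context at hand.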

The framework is available both in Excel and PDF format.

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.