Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Fundamental Rights and Algorithms Impact Assessment (FRAIA)


This Fundamental Rights and Algorithm Impact Assessment (‘FRAIA’) is a discussion and decision-making tool for government organisations. It facilitates an interdisciplinary dialogue among those responsible for developing and/or using an algorithmic system. The commissioning client is primarily responsible for the (delegated) implementation of the FRAIA.

The FRAIA comprises a large number of questions covering the topics that must be discussed and answered whenever a government organisation considers developing, delegating the development of, buying, adjusting and/or using an algorithm (hereinafter, for brevity: the use of or using an algorithm). Even when an algorithm is already in use, the FRAIA may serve as a tool for reflection. The discussion of the various questions should take place in a multidisciplinary team of people with a wide range of specialisations and backgrounds. For each question, the FRAIA indicates who should be involved in the discussion. The tool addresses all roles within a multidisciplinary team, as shown in the diagram below; however, the list is not exhaustive, and role or function names may differ from one organisation to another.

The discussion based on the FRAIA aims to ensure that all relevant focus areas regarding the use of algorithms are addressed at an early stage and in a structured manner. This prevents the premature use of an algorithm whose consequences have not been properly assessed, with attendant risks such as inaccuracy, ineffectiveness, or violation of fundamental rights. To achieve this aim, it is important to follow all the relevant steps involved in using an algorithm and to think through the possible consequences, whether any mitigating measures may be taken, et cetera. For each question, the answers and the key considerations and choices made should be noted down. The completed FRAIA can then serve as reference material and be used to account for the decision-making process surrounding the development and implementation of an algorithm.

About the tool


Tags:

  • ai ethics
  • ai responsible
  • build trust
  • data governance
  • documentation
  • transparent
  • ai assessment
  • ai governance
  • transparency
  • data ethics


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.