Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Fujitsu AI Ethics Impact Assessment

Fujitsu AI Ethics Impact Assessment can be used to future-proof AI systems so that they comply with ethics principles and legal requirements. It can also be used to create evidence for engaging with auditors, approvers, and stakeholders.

This technology is used to assess the ethical impact and risks of AI systems, drawing on international AI ethics guidelines and records of previous AI incidents.

The methodological approach is freely available through Fujitsu’s website. Fujitsu AI Ethics Impact Assessment includes the following steps:

1) Based on the list of stakeholders relevant to an AI system, a system model is generated that maps the interactions among the components of the AI system (e.g. training dataset, AI model, output), the stakeholders directly involved in the system (e.g. business users), and the stakeholders indirectly involved in the system (e.g. citizens).

2) Fujitsu AI Ethics Impact Assessment automatically assesses the ethical requirements and potential risks emerging from each interaction.

3) Fujitsu AI Ethics Impact Assessment automatically develops a risk analysis table containing a comprehensive list of ethical risks. The risk analysis is based on the seven requirements for Trustworthy AI published by the European Commission's High-Level Expert Group on Artificial Intelligence. Fujitsu has further developed these requirements into over a hundred sub-categories of ethical risks, which are consistent with the Assessment List for Trustworthy Artificial Intelligence (ALTAI) developed by the European Union, as well as with other assessment lists for software quality.

4) Fujitsu AI Ethics Impact Assessment, through its AI Ethics Risk Comprehension technology, automatically contextualises the identified risks for the AI use case under examination and provides prompts that help the user define effective countermeasures.
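The four steps above can be pictured as a simple data pipeline: a system model made of interactions (step 1) is matched against risk rules keyed to the HLEG requirements (step 2) to produce a risk analysis table (step 3). The sketch below is purely illustrative; the class names, rule format, and example risks are our assumptions, not part of Fujitsu's actual implementation, and only the seven HLEG requirement names come from the source.

```python
from dataclasses import dataclass

# The seven requirements for Trustworthy AI (EU High-Level Expert Group).
HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass(frozen=True)
class Interaction:
    """One edge in the AI system model (step 1), e.g. dataset -> model."""
    source: str
    target: str

@dataclass
class RiskEntry:
    """One row of the risk analysis table (step 3)."""
    interaction: Interaction
    requirement: str  # one of the seven HLEG requirements
    risk: str

def assess(interactions, risk_rules):
    """Steps 2-3: for each interaction, look up potential ethical risks
    and collect them into a flat risk analysis table."""
    table = []
    for it in interactions:
        for requirement, risk in risk_rules.get((it.source, it.target), []):
            table.append(RiskEntry(it, requirement, risk))
    return table

# Toy example: a training dataset feeding an AI model.
interactions = [Interaction("training dataset", "AI model")]
risk_rules = {  # hypothetical rules for illustration only
    ("training dataset", "AI model"): [
        ("Diversity, non-discrimination and fairness",
         "Unrepresentative training data may bias model outputs."),
        ("Privacy and data governance",
         "Personal data in the training set may lack a legal basis."),
    ],
}

table = assess(interactions, risk_rules)
for row in table:
    print(f"{row.interaction.source} -> {row.interaction.target}: "
          f"[{row.requirement}] {row.risk}")
```

Step 4, the contextualisation of each risk for a concrete use case, would then operate on rows of this table, which is why a flat per-interaction structure is a natural intermediate representation.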

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Add use case

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.