Assessment for Responsible Artificial Intelligence
Z-Inspection® is a process to assess Trustworthy AI.
During this six-month pilot, the practical application of a deep learning algorithm from the province of Fryslân has been investigated and assessed.
The algorithm maps heathland grassland by means of satellite images for monitoring nature reserves. The testing of this algorithm was done in collaboration with an international interdisciplinary team, using Z-Inspection®: a process to assess Trustworthy AI.
An international team discussed critical issues from different disciplines, such as the purpose of the algorithm, the development process, ethical dilemmas and conflicts of interest.
The pilot project is a cooperation between Rijks ICT Gilde (Ministry of the Interior and Kingdom Relations, The Netherlands), the Provincie Fryslân (The Netherlands) and the Z-Inspection® Initiative.
Quoting our report:
“This report is made public. The results of this pilot are of great importance for the Dutch government, serving as a best practice with which public administrators can get started, and incorporate ethical and human rights values when considering the use of an AI system and/or algorithms. It also sends a strong message to encourage public administrators to make the results of AI assessments like this one, transparent and available to the public.”
Benefits of using the tool in this use case
The chief value of this pilot lies in the lessons learned for other AI projects and for the application of the Z-Inspection® process.
The results of this pilot are useful for the entire government, as they will allow us to get started with trustworthy AI. We share the results and lessons learned from the pilot here in the hope of stimulating digital awareness and dialogue about AI within government, and of enabling confident use of the technology for tomorrow's questions.
The main lessons learned and the results of the assessment have been published:
Web page for the pilot project:
Learnings or advice for using the tool in a similar context
“Responsible use of AI” Pilot Project with the Province of Fryslân, Rijks ICT Gilde & the Z-Inspection® Initiative.
The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the province of Fryslân was assessed. The AI maps heathland grassland by means of satellite images for monitoring nature reserves.
Comparison with other tools
During the pilot, the assessment team used two different approaches: the Fundamental Rights and Algorithms Impact Assessment (FRAIA) and the Ethics Guidelines for Trustworthy AI. The two go hand in hand, and both provide critical insights regarding the AI system. Both ethics and human rights concern norms and fundamental values in society. Since ethical reflection and ethical guidelines influence law, experts from both fields must work together when considering the design of AI systems and their societal implications. Ethics, a branch of philosophy, considers what is right and wrong. It seeks answers to questions such as "What should we do?" or "What is the right action?" In the context of AI systems, an ethics-based approach focuses on questions such as "What is the right way to design, develop, deploy and use this type of technology so that it benefits individuals and society?"
About the use case
Objective(s):
Impacted stakeholders:
Target sector(s):
Country of origin:
Target users: