Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Algorithm Impact Assessment Toolkit

Algorithms play an essential role in the work of New Zealand’s public sector by helping to streamline processes, improve efficiency and productivity, enable the faster delivery of more effective government services and support innovation. Algorithms can also help to deliver new, innovative, and well-targeted policies to achieve government aims. 

However, it is well established that the opportunities afforded by new and evolving technologies can also introduce potential risk and harm. This includes challenges associated with accuracy, bias, and a lack of transparency, explainability, reliability and accountability. At a societal level, poorly governed algorithmic and AI systems can amplify inequality, undermine democracy, and threaten both privacy and security. 

The Algorithm Charter for Aotearoa New Zealand, launched in 2020, is a commitment by government agency signatories to carefully manage how algorithms are used, and to improve consistency in transparency and accountability of algorithm use. 

The Algorithm Impact Assessment Toolkit helps government agencies meet the commitments within the Algorithm Charter. The tools help agencies identify, assess, and document any potential impacts of the algorithms they create or use, to support informed decision-making about the benefits and risks of government use of algorithms. The process takes a risk-based approach, intended to strike the right balance between ensuring agencies can use algorithms to provide better services while maintaining public trust and confidence. 

There are four tools in the Algorithm Impact Assessment Toolkit: 

  • The Algorithm Threshold Assessment Questionnaire helps those designing or using an algorithm to determine whether it presents a higher risk and therefore requires a more in-depth assessment. 
  • The Algorithm Impact Assessment Questionnaire explores how the algorithm works, what governance is in place, possible impacts, and how risks might be mitigated. 
  • The User Guide provides explanations, clarifications, and case studies to support people in completing the impact assessment. 
  • The Report Template summarises the key risks and controls for decision-makers. 

The Algorithm Threshold Assessment Questionnaire should be completed at the planning stage of a new or different algorithm. If the threshold assessment finds the algorithm presents a higher risk, the Algorithm Impact Assessment Questionnaire should also be initiated at an early stage, noting this questionnaire may need to be continually worked on and updated throughout the development process. 

The process of using the Algorithm Impact Assessment Questionnaire and summarising findings in the Report Template is designed to apply to higher risk algorithms only. 
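The triage logic described above — a threshold screen at the planning stage, escalating higher-risk algorithms to the full impact assessment — can be sketched as a simple decision flow. This is an illustrative sketch only; the function names, screening questions, and the "any flag escalates" rule are assumptions for illustration and are not part of the toolkit itself.

```python
# Illustrative sketch of the toolkit's risk-based triage flow.
# All names and the escalation rule are hypothetical assumptions,
# not the toolkit's actual scoring method.

def threshold_assessment(answers: dict) -> bool:
    """Return True if the algorithm presents a higher risk.

    `answers` maps screening questions (e.g. "does the algorithm
    inform decisions about individuals?") to yes/no responses.
    Here, any affirmative answer escalates to a full assessment.
    """
    return any(answers.values())


def run_assessment_process(answers: dict) -> str:
    """Apply the threshold screen and report the next step."""
    if not threshold_assessment(answers):
        return "No further assessment required"
    # Higher-risk algorithms proceed to the full Algorithm Impact
    # Assessment Questionnaire, revisited throughout development,
    # with findings summarised in the Report Template.
    return "Complete Algorithm Impact Assessment and Report Template"


print(run_assessment_process(
    {"affects_individuals": True, "fully_automated_decision": False}
))
```

In practice each agency would tailor both the screening questions and the escalation rule to its own context and risk profile, as the toolkit itself notes.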

The Algorithm Impact Assessment Toolkit adopts a best practice approach to satisfying the Charter commitments and recognises that each agency will need to tailor the process and the ultimate risk assessments in a way that is appropriate for its own context, risk profile and role in society. 

About the tool





Tags:

  • ai ethics
  • collaborative governance
  • ethical charter
  • ai assessment
  • ai governance
  • decision support tool
  • ai risk management
  • data ethics
  • ai oversight
  • ai guardrails
  • impact assessment

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.