Algorithmic Accountability for the Public Sector

The Ada Lovelace Institute (Ada), AI Now Institute (AI Now), and Open Government Partnership (OGP) have partnered to launch the first global study to analyse the initial wave of algorithmic accountability policy for the public sector.
As governments increasingly turn to algorithms to support decision-making for public services, growing evidence suggests that these systems can cause harm and frequently lack transparency in their implementation. Reformers inside and outside government are turning to regulatory and policy tools in the hope of ensuring algorithmic accountability across countries and contexts. These responses are emergent and shifting rapidly, and they vary widely in form and substance, ranging from legally binding commitments to high-level principles and voluntary guidelines.
This report presents evidence on the use of algorithmic accountability policies in different contexts from the perspective of those implementing these tools, and explores the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.
Read the executive summary for key findings and the full report for further detail on these findings and practical case studies of implemented policies.