A governance framework for algorithmic accountability and transparency
Transparency and accountability are tools to promote fair algorithmic decisions: they provide the foundations for obtaining recourse to meaningful explanation and correction, and for ascertaining faults that could trigger compensatory processes. The study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Drawing on an extensive review and analysis of existing proposals for the governance of algorithmic systems, the authors propose four policy options, each addressing a different aspect of algorithmic transparency and accountability:
1. Awareness raising: education, watchdogs and whistleblowers.
2. Accountability in public-sector use of algorithmic decision-making.
3. Regulatory oversight and legal liability.
4. Global coordination for algorithmic governance.