AI Algorithmic Transparency Tool
To maximize the positive impact of AI innovation, it is essential to design and operate technical, organizational, and social systems that enable stakeholders to recognize the risks of AI and to adjust their interests appropriately and flexibly: in other words, an agile governance framework for AI. This paper attempts a comprehensive and systematic treatment of the most contested governance approach, transparency of AI algorithms, and proposes a practically applicable toolkit. The social implementation of innovative AI should not be hampered by misunderstandings caused by a lack of communication, and transparency has a significant role to play in preventing this. However, if regulations and norms are formed ad hoc each time a problem arises, without systematic organization, rules will accumulate without a blueprint: they will lack predictability, and "transparency" will become an end in itself, a formality detached from its original purpose. As a result, innovation will be stifled. Such regulations must therefore be systematic and grounded in a unified concept, while pursuing generality, clarity, and flexibility, with room for discretion so that business entities and government agencies can actually apply them.
In this paper, we construct the toolkit as a systematic collection of disclosure items and examples, drawing not only on regulations proposed by national authorities but also on a wide range of emerging risk events involving various AI systems and on prior research on AI algorithms. We also incorporate the view that the degree of transparency a business entity or government agency achieves can have a positive impact not only on its social credibility but also on other indicators such as user satisfaction. The disclosure items are presented in list format, and their selection is left to the adopter's discretion according to the AI algorithm's providers, users, risks, and other factors, so that business entities and government agencies responding to self- and co-regulation can apply the toolkit conveniently. We would be delighted if business entities and government agencies used this toolkit for communication with users, society, authorities, and experts, or for internal risk management. We believe this is one of the best agile governance practices for maximizing the impact of AI innovation.
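The list-format, selective structure described above can be illustrated with a minimal sketch. Everything in the following Python snippet is hypothetical: DisclosureItem, the sample entries, and select_disclosures are only one possible way an adopter might encode disclosure items and filter them by actor role and risk level; they are not part of the toolkit itself.

```python
from dataclasses import dataclass

@dataclass
class DisclosureItem:
    """One hypothetical disclosure item in the toolkit's list format."""
    topic: str            # what is disclosed, e.g. "training data provenance"
    audience: str         # who it addresses: "user", "society", "authority", "expert"
    applies_to: set[str]  # which actors it concerns: {"provider", "user"}
    risk_level: str       # "low", "medium", or "high"

# Illustrative entries only; the real toolkit defines its own disclosure items.
TOOLKIT = [
    DisclosureItem("purpose of the AI system", "user", {"provider"}, "low"),
    DisclosureItem("training data provenance", "expert", {"provider"}, "medium"),
    DisclosureItem("known failure modes", "authority", {"provider", "user"}, "high"),
]

def select_disclosures(role: str, max_risk: str) -> list[DisclosureItem]:
    """Pick the items a given actor might disclose, up to a chosen risk level."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [item for item in TOOLKIT
            if role in item.applies_to and order[item.risk_level] <= order[max_risk]]

# A provider preparing low- and medium-risk disclosures:
for item in select_disclosures("provider", "medium"):
    print(f"{item.topic} -> {item.audience}")
```

In practice, an adopter would replace the sample entries with the toolkit's actual disclosure items and choose selection criteria matching their own self- or co-regulatory context.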
This toolkit is version 1.0; we will continue to update it in light of future discussions, and we would be grateful if readers could point out any excesses or omissions.
About the tool
Tags:
- collaborative governance
- data governance
- open access
- ai governance
- transparency
- accountability