These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
KIDD process
Digitization has a steadily growing impact on the world of work. In particular, the introduction of AI applications and algorithmic decision-making systems (AES) that automatically record and analyze personal data and derive a decision or recommendation from the results holds both opportunities and risks for companies. On the one hand, such applications can be used to optimize work processes or to open up new business models. On the other hand, their introduction can also lead to undesirable consequences, such as unintended discrimination against certain groups of people. In both rule-based and learning software applications, social assumptions and structures may be explicitly encoded or implicitly reproduced, producing potentially problematic and discriminatory recommendations and outcomes. In other words, conscious or unconscious biases may be inherent in the rules imposed on the algorithm or in the data used to train it, and these biases are subsequently adopted and perpetuated by the software systems.
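To make this mechanism concrete, the following is a minimal, hypothetical sketch (not part of the KIDD project itself): a classifier is trained on synthetic historical hiring data in which one group was systematically disadvantaged, and the learned model reproduces that disparity in its recommendations. All variable names and numbers are illustrative assumptions.

```python
# Hypothetical illustration: a model trained on biased historical data
# perpetuates that bias. Synthetic data; not code from the KIDD project.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 2000
group = rng.integers(0, 2, size=n)       # protected attribute: 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, size=n)     # genuinely job-relevant feature

# Historical decisions applied the same skill threshold to everyone,
# but group B was additionally penalized (the bias encoded in the data).
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, size=n) > 0).astype(int)

# The protected attribute enters the model directly (or via proxies).
X = np.column_stack([skill, group])
pred = LogisticRegression().fit(X, hired).predict(X)

# Demographic parity gap: difference in positive-recommendation rates.
rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"recommendation rate, group A: {rate_a:.2f}")
print(f"recommendation rate, group B: {rate_b:.2f}")
print(f"demographic parity gap:       {rate_a - rate_b:.2f}")
```

Checks of this kind, comparing outcome rates across groups before deployment, are one concrete audit step that a participatory review process can call for.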
To minimize structural disadvantages for groups of people, the potential impacts of such applications must be reflected upon and their risks weighed during the design process. To analyze these systems for risks of bias and discrimination with the necessary breadth and depth, it is important that people with diverse backgrounds and perspectives are actively involved and heard. This is where the project "KIDD - AI in the Service of Diversity" comes in, funded by the German Federal Ministry of Labour and Social Affairs (BMAS) under the umbrella of the New Quality of Work Initiative (INQA). The project tests a process for the participatory design and introduction of AI applications and AES in companies in four experimental spaces, with a focus on non-discrimination and diversity. The goal is a standardized "KIDD process" that enables companies to purchase or develop, and then implement, fair, transparent and understandable software applications. At the heart of the project are pilot projects contributed by corporate partners.
Training courses were developed to accompany the KIDD process. Their aim is to enable the actors involved to ensure and support the introduction of non-discriminatory and diversity-sensitive AI in companies. Basic training courses prepare all involved actors for critical participation, while in-depth courses developed in the project qualify future KIDD facilitators (KIM) to plan and implement the KIDD process. In addition, quality criteria were formulated to help carry out, evaluate and improve the implementation of the KIDD process in different operational contexts on the basis of practice-relevant indicators. These quality criteria are intended to ensure that the AI application or AES developed and introduced is as non-discriminatory as possible.
About the tool
Tags:
- ai ethics
- transparency
- diversity
- participation