Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Guidelines for Development of Trustworthy AI

Artificial intelligence (AI) continues to advance, finding applications in critical sectors such as healthcare, law, and public safety, and profoundly affecting our lives. AI serves as a capable assistant, diagnosing diseases, handling customer interactions, and identifying risks faster than humans can. However, the widespread use of AI also introduces risks and unintended consequences, posing threats to safety and property. To address these challenges, major developed countries and international organizations have released AI-related guidelines. ISO/IEC JTC 1/SC 42, the joint AI committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), leads standardization efforts. Within SC 42, the trustworthiness working group (WG 3) focuses on standardizing AI trustworthiness, including transparency and explainability. In alignment with these efforts, the Ministry of Science and ICT of South Korea introduced the Trustworthy AI Implementation Strategy for human-centric AI in May 2021.

Recommendations and regulations have been formulated to address the ethical and trust aspects of AI development. However, because of their macroscopic and abstract nature, the methodology and criteria for applying them to, and evaluating, AI technology remain ambiguous. Consequently, those designing or developing AI products and services need technical and engineering references. Recognizing this need, the Ministry of Science and ICT and the Telecommunications Technology Association (TTA) of South Korea collaborated to ensure the technical trustworthiness of AI, releasing the 2022 Guidelines for Development of Trustworthy AI. Derived from AI service components, life-cycle considerations, and trustworthiness requirements, the guidelines specify technical requirements and validation items. The technical requirements are based on global AI trustworthiness policies, recommendations, and standards from international organizations, technical groups, and major national governments, along with cases reported in South Korea.

The TTA research team carefully reviewed a wide range of technical requirements and removed redundant content. Validation items were established to ensure practicality, technical feasibility, efficiency, and comprehensiveness. This iterative process involved experts from various fields, including planners, development project leaders, professors, researchers, and policymakers, and yielded fifteen verifiable requirements with 67 corresponding qualitative and quantitative validation items. AI trustworthiness remains a critical, ongoing topic that requires collective agreement across society. However, detailed documentation of requirements and validation methods in standards or guidelines is still scarce, making practical implementation challenging. Reflecting domestic and international discussions as well as research and policy trends, this guideline aims to help South Korean companies implement trustworthy AI. Based on it, companies should establish and apply their own internal norms or guidelines to enhance trustworthiness, and each expert within a company should voluntarily share the results and experience of doing so. We expect that effective, professional discussion will lead users to apply AI technology properly, avoiding both false perceptions and excessive concerns, and thereby contribute to establishing the trustworthiness of AI technology throughout society.
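The guidelines pair each verifiable requirement with qualitative and quantitative validation items. A minimal sketch of how a development team might represent such a checklist in code is shown below; the class names and the sample entry are hypothetical illustrations, not taken from the TTA guidelines themselves.

```python
from dataclasses import dataclass, field
from enum import Enum


class ValidationType(Enum):
    # The guidelines distinguish qualitative and quantitative validation items.
    QUALITATIVE = "qualitative"
    QUANTITATIVE = "quantitative"


@dataclass
class ValidationItem:
    description: str
    vtype: ValidationType


@dataclass
class Requirement:
    rid: str
    title: str
    items: list = field(default_factory=list)


# Hypothetical example entry; actual requirement wording comes from the guidelines.
req = Requirement(
    rid="R01",
    title="Ensure traceability of training data",
    items=[
        ValidationItem("Document data provenance for each dataset",
                       ValidationType.QUALITATIVE),
        ValidationItem("Measure the proportion of labeled samples audited",
                       ValidationType.QUANTITATIVE),
    ],
)
print(f"{req.rid}: {req.title} ({len(req.items)} validation items)")
```

A company adopting the guidelines could extend such a structure with per-item status fields to track self-assessment results across the AI life cycle.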

Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.