The OECD.AI Policy Navigator

The Policy Navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations.

Human-Centered National Guidelines for AI Ethics


Added by:   National contact point
Added on:   09 Jul 2025
Updated by:   OECD analyst
Updated on:   09 Jul 2025

The Human-Centered National Guidelines for AI Ethics set government standards to ensure that AI technologies are used responsibly and fairly. The initiative aims to protect people from risks such as bias, discrimination, and privacy violations while supporting AI innovation. It benefits society by promoting trustworthy AI that improves quality of life and industrial productivity.

Name in original language

사람이 중심이 되는 인공지능 윤리기준

Initiative overview

The Human-Centered National Guidelines for AI Ethics initiative addresses the urgent need to establish ethical standards in the rapidly growing field of artificial intelligence. As AI technologies become increasingly integrated into everyday life and industry, concerns about data bias, algorithmic discrimination, and privacy violations have emerged. These challenges risk undermining public trust in AI, which is crucial for its sustainable development and adoption. The initiative aims to create a clear and comprehensive ethical framework that guides the responsible development, deployment, and use of AI technologies across various sectors in South Korea.

The main objectives of the initiative are to promote fairness, transparency, accountability, and respect for human rights within AI systems. By setting government-backed ethical standards, the guidelines seek to mitigate risks associated with AI, such as unfair treatment based on biased data or privacy breaches, while encouraging innovation and industrial growth. This approach balances the need for technological advancement with social responsibility, ensuring that AI benefits society as a whole without compromising ethical values.

Looking ahead, the initiative is designed to be implemented through collaboration between government agencies, industry stakeholders, researchers, and civil society. It will be institutionalised as part of national AI policy, with ongoing updates to adapt to emerging technologies and societal needs. The initiative also includes efforts to develop and disseminate ethics education curricula to train AI professionals and raise public awareness. As trust in AI strengthens, the guidelines are expected to support wider adoption and integration of AI technologies in fields such as healthcare, finance, education, and public services.

Since its establishment as part of the broader National Strategy for Artificial Intelligence in 2019, the initiative has evolved to incorporate feedback from multiple stakeholders and to align with international ethical AI frameworks. It continues to expand its scope by addressing new ethical challenges posed by advances in AI capabilities. This ongoing evolution keeps South Korea at the forefront of responsible AI development, positioning the country as a leader in creating human-centred AI ecosystems that prioritise social good alongside innovation.

Name of responsible organisation (in English)

Ministry of Science and ICT

About the policy initiative


Organisation:

  • Ministry of Science and ICT

Category:

  • National – AI policy initiatives, regulations, guidelines, standards and programmes or projects

Initiative type:

  • Principles/guidelines/frameworks for trustworthy AI

Status:

  • Active

Start Year:

  • 2020

Binding:

  • Non-binding