The OECD.AI Policy Navigator

The policy navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations.

Ethical Decalogue for the Use of Artificial Intelligence (AI)


Added by:   National contact point
Added on:   12 Feb 2026
Updated by:   OECD analyst
Updated on:   13 Apr 2026

The Decálogo Ético para el Uso de la IA is a formal framework issued by Ecuador’s Superintendencia de Competencia Económica (SCE). It establishes ten mandatory principles, including human oversight, transparency, and data protection, to ensure AI tools support public service without replacing human judgment. Rooted in national law, it guides officials toward the ethical, non-discriminatory, and secure use of technology.

Name in original language

Decálogo Ético para el Uso de la Inteligencia Artificial (IA)

Initiative overview

The Ethical Decalogue for the Use of Artificial Intelligence (AI) is a mandatory framework established by Ecuador's Superintendencia de Competencia Económica (SCE). It addresses the risks of automated decision-making by ensuring that AI serves as a support tool rather than a replacement for human analysis. This initiative aims to modernise public administration while protecting citizens from algorithmic bias and ensuring that all technology use aligns with the public interest and human dignity.

The main objectives are to maintain human oversight, guarantee transparency in administrative actions, and ensure data protection through methods such as anonymisation. It works by requiring public servants to validate automated results, preventing any AI output from being final without institutional review. It also prohibits the use of AI for personal gain or unauthorised surveillance, making these rules a binding part of the staff's disciplinary regime.

Involving all administrative units, the initiative mandates continuous ethical training for staff to keep pace with technological change. The framework is designed to evolve by incorporating international standards, such as the European Union's AI Act, ensuring that the SCE's digital culture remains humanised and participative as it scales its technological capabilities.

About the policy initiative


Category:

  • Regulations, guidelines and standards

Initiative type:

  • Principles/guidelines/frameworks for trustworthy AI

Status:

  • Active

Start year:

  • 2025

Binding:

  • Non-binding

Other relevant URLs: