Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Algorithmic Transparency Certification for Artificial Intelligence Systems

Organisation(s): Adigital - Spanish Association for the Digital Economy

The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is an accreditation process designed to usher organizations into an era of Responsible Artificial Intelligence, promoting transparency, explainability, accountability, and ethical considerations in the deployment of AI systems. Anchored in globally recognized ethical principles from organizations such as the OECD and UNESCO, the certification is built around a comprehensive questionnaire focused on the core elements of responsible AI, such as transparency and explainability.

The evaluative criteria of the certification are thorough and multifaceted, encompassing six significant domains: data sources, datasets, model building and selection, processing and minimization of harm, security measures and information access, and post-deployment monitoring of AI systems. These domains have been selected after a diligent evaluation of their essential role in the ethical deployment and functioning of AI systems.

An inherent flexibility characterizes the certification, allowing it to remain responsive to continual advancements in the AI sector and to the evolving regulatory landscape, particularly instruments like the European Union AI Act. This adaptability ensures that organizations remain prepared and aligned with the most current standards and requirements in AI ethics and regulation.

One of the cardinal goals of the certification is to bolster organizational reputation and stakeholder trust. It aims to make AI systems models of transparency and reliability, fostering an environment where users, customers, and society at large can interact with these systems with greater confidence and assurance. This not only strengthens societal trust but also enhances organizational competitiveness, enabling entities to differentiate themselves in a crowded marketplace.

In addition to bolstering preparedness for impending regulations, the certification is also a tool for technical readiness. It ensures that organizations are not only compliant with ethical norms but are also technically adept and resilient. By fostering a culture of continuous improvement and learning, the certification ensures that organizations remain at the pinnacle of technical innovation and excellence.

The Certification is not merely a seal but a powerful ally for organizations. It is a dynamic, evolving, and comprehensive tool that aims to harmonize the myriad facets of AI - ethical, technical, and regulatory - under a unified, robust framework. Its unwavering commitment is to spearhead a movement towards responsible AI that resonates with societal values, regulatory alignment, and technical prowess, facilitating a future where AI systems flourish with accountability and innovation in concert.

More information about the certification is available at this link, including a General Presentation of the Certification, a Whitepaper that explains it in detail, and a Research Paper on existing transparency and ethics certifications (available only for OECD.AI Tools Catalogue evaluation purposes).

In addition to the tool's current website in Spanish, an English version of the landing site is in preparation and will be available shortly.

About the tool

Type of approach:

Usage rights:



Geographical scope:

Technology platforms:


  • ai ethics
  • ai responsible
  • biases testing
  • building trust with ai
  • data governance
  • digital ethics
  • evaluation
  • trustworthy ai
  • validation of ai model
  • ai assessment
  • ai governance
  • transparency
  • trustworthiness
  • ai compliance
  • data ethics
  • model monitoring
  • regulation compliance
  • accountability
  • ml security
  • explainability
  • privacy


Use Cases

There are no use cases for this tool yet.
