Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Algorithmic Transparency Certification for Artificial Intelligence Systems

The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is an accreditation process designed to move organizations toward responsible Artificial Intelligence, promoting transparency, explainability, accountability, and ethical considerations in the deployment of AI systems. Anchored in globally recognized ethical principles from organizations such as the OECD and UNESCO, the certification is built around a comprehensive questionnaire focused on the core elements of responsible AI, in particular transparency and explainability.

The certification's evaluative criteria span six domains: data sources, datasets, model building and selection, processing and minimization of harm, security measures and information access, and post-deployment monitoring of AI systems. These domains were selected for their essential role in the ethical deployment and operation of AI systems.

The certification is designed to be flexible, so that it remains responsive to advances in the AI sector and to the evolving regulatory landscape, in particular instruments such as the European Union AI Act. In its current version, the certification provides clause-by-clause alignment with the transparency and explainability requirements of the EU AI Act. This adaptability keeps organizations prepared and aligned with the most current standards and requirements in AI ethics and regulation.

The certification has also evolved to serve AI systems outside the EU. It is now neutral with respect to international regulations, providing value to companies that do not currently need to comply with the EU AI Act but still want transparency to be a key part of their AI governance systems. EU AI Act-specific requirements are now part of the user guidelines.

One of the central goals of the certification is to strengthen organizational reputation and stakeholder trust. It aims to make AI systems demonstrably transparent and reliable, so that users, customers, and society at large can interact with them with greater confidence. This builds societal trust and also supports organizational competitiveness, enabling entities to differentiate themselves in a crowded marketplace.

Beyond preparing organizations for forthcoming regulation, the certification is also a tool for technical readiness. It ensures that organizations are not only compliant with ethical norms but also technically capable and resilient. By fostering a culture of continuous improvement and learning, the certification helps organizations stay at the forefront of technical innovation.

The certification process now provides certifying companies with an evidence management SaaS that simplifies secure evidence upload. This is useful during the assessment itself, and it also gives companies an organized data room that can be reused for additional assessments.

The certification aims to be more than a seal: a powerful ally for organizations. It is a dynamic, evolving, and comprehensive tool that brings the ethical, technical, and regulatory facets of AI under a unified, robust framework. Its commitment is to advance responsible AI that reflects societal values, regulatory alignment, and technical excellence, enabling a future where AI systems combine accountability with innovation.

More information about the tool can be found on the English-language landing page and on the tool's current website in Spanish.

About the tool

Type of approach:

Usage rights:

License:

Enforcement:

Geographical scope:

Technology platforms:

Tags:

  • ai ethics
  • ai responsible
  • biases testing
  • building trust with ai
  • data governance
  • digital ethics
  • evaluation
  • trustworthy ai
  • validation of ai model
  • ai assessment
  • ai governance
  • transparency
  • trustworthiness
  • ai compliance
  • data ethics
  • model monitoring
  • regulation compliance
  • accountability
  • ml security
  • explainability
  • privacy

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.