Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

GRACE

Since 2021.AI's inception, and especially since ChatGPT's release, market demand for an AI Governance, Risk and Compliance platform has only increased. Building on its involvement with the EU's Ethics Guidelines for Trustworthy AI, 2021.AI has worked with regulators and leaders in the AI industry to understand the highest level of compliance required and translate it into a fully functional platform. GRACE can be used across organisations to detect and mitigate AI risks, navigate the complex regulatory AI landscape, and integrate with existing MLOps platforms and models, so that innovation can keep pace with the rapid adoption of AI in the enterprise.

The growth of AI has led to an exponential increase in AI risks, making risk management and mitigation increasingly challenging. Generative AI systems like ChatGPT make it easier for companies to adopt AI, but they also raise serious ethical issues such as scientific misinformation, biased images and avatars, and hate speech. ChatGPT and generative AI have brought AI into every household and into the everyday life of every company. The technology is approaching mass adoption, yet, unlike other technologies, it is not governed and its risks are not understood to the same extent.

With regulatory legislation coming that will affect all sectors, on top of an already heavy regulatory burden, keeping up to date and ensuring compliance poses a significant challenge. The lack of a fully operationalised and integrated AI governance framework has led to siloed responsible AI efforts: 70% of organisations lack such a framework, resulting in transparency challenges, poor communication, increased risk and stifled innovation. Preparing for incoming AI legislation and operationalising AI policies in line with the latest standards requires constant navigation and expertise; failing to do so can have legal, ethical and reputational consequences, including fines of up to 6% of global revenue under the EU AI Act.

GRACE, 2021.AI's AI governance platform, addresses these challenges by providing a central registry of all AI models, whether in-house or third-party, together with tools and workflows for risk mitigation and compliance. The platform enables collaboration across AI teams and projects, reducing the compliance burden, increasing transparency and trust, and promoting AI innovation. 2021.AI interprets the legislation, identifies the risks, and builds controls for managing them with real-time compliance; all of this is captured in a digital trace that builds the evidence regulators require. The integrated MLOps and governance platform works with other AI models and systems, making it an attractive solution for Mastercard and other financial institutions, and it runs across all major cloud vendors (Azure, GCP) as well as on-premises. The platform provides an out-of-the-box framework for complying with the EU AI Act, AI standards, data regulations, and AI frameworks, which 2021.AI maintains and updates according to the organisation's requirements. With advanced algorithms and machine learning capabilities, the platform is designed to manage AI risks in real time and mitigate potential risks before they become major issues.
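GRACE's actual registry schema is not public; purely as a hypothetical illustration, a central model registry of the kind described above might hold records like the following and support simple governance queries. All names and fields below are invented for the example.

```python
# Illustrative sketch only: GRACE's real schema is not public.
# This models the kind of record a central AI model registry typically holds.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str
    source: str        # "in-house" or "third-party"
    risk_tier: str     # e.g. "minimal", "limited", "high" (EU AI Act style)
    frameworks: tuple  # regulations/standards the model is assessed against
    monitored: bool    # whether live monitoring is in place

registry = [
    ModelRecord("credit-scoring-v3", "risk-team", "in-house", "high",
                ("EU AI Act", "GDPR"), True),
    ModelRecord("chatgpt-support-bot", "cx-team", "third-party", "limited",
                ("EU AI Act",), False),
]

# Example governance query: third-party models not yet under monitoring.
gaps = [m.name for m in registry if m.source == "third-party" and not m.monitored]
print(gaps)  # ['chatgpt-support-bot']
```

A registry like this is what makes the "digital trace" possible: each compliance check or risk assessment can be attached to a concrete record rather than tracked in scattered spreadsheets.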

About the tool

Developing organisation(s):

Type of approach:

License:

Target users:

People involved:

Technology platforms:

Tags:

  • ai ethics
  • ai risks
  • build trust
  • building trust with ai
  • collaborative governance
  • data governance
  • digital ethics
  • documentation
  • model cards
  • validation of ai model
  • chatgpt
  • ai assessment
  • ai governance
  • ai procurement
  • ai reliability
  • machine learning testing
  • fairness
  • transparency
  • auditability
  • data ethics
  • grc
  • model validation
  • reporting
  • dashboard
  • ai register
  • risk register
  • gap analysis
  • model monitoring
  • mlops
  • innovation


Use Cases

Enterprise ChatGPT and LLM Governance

Trust in AI is needed. Many organisations are utilising AI in a production setting without proper governance in place. This is now more widespread than ever, with large language models (LLMs) like ChatGPT being widely used in...
Apr 19, 2023


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.