Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

KomplyAi

At KomplyAi, we are supporting the AI ecosystem with the most accessible tools in global AI compliance.  

Organisations are rapidly adopting AI technologies to streamline their business operations, and KomplyAi wants to help them gain an even greater competitive advantage by delivering safer and more responsible AI solutions. 

Compliance solutions for enterprise challenges 

Combining our skills in compliance, law and technology, we have built a customer-centric governance, risk and compliance platform for AI. The user interface can be rendered via an API and integrates seamlessly with an organisation's existing business architecture for risk management.

An organisation’s product evolution requires the right structures from the outset, so that iteration in emerging-technology adoption can happen rapidly while remaining compliant with the multitude of current and impending laws intersecting with AI.

KomplyAi wants to redefine productivity and help our customers capitalise on the efficiencies of AI by accelerating lower-risk AI builds and automating tedious compliance steps in higher-risk sectors such as higher education and training, health, banking and financial services.

We support customers with the best and latest insights on risk and regulatory requirements.

Enterprise customers can easily log, review, and track their AI projects once they are operating in market, providing a digital footprint for risk management. An intelligent workflow engine guides stakeholders through internal AI evaluations and reviews using our built-in risk assessor and knowledge libraries, where users collaborate with multiple stakeholders to share knowledge across teams and locations. The platform automatically generates governance documentation, such as Fundamental Rights Impact Statements, Risk Management Plans and Instructional Manuals, for approvals based on the latest global AI regulatory requirements.

About the tool





Tags:

  • building trust with ai
  • documentation
  • evaluation
  • trustworthy ai
  • ai assessment
  • ai governance
  • ai procurement
  • ai auditing
  • decision support tool


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.