Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

AI Risk Intelligence System™ for biometric and high-risk AI

Anekanta® AI identified a problem experienced by both developers and users when they attempt to classify and assess the risks posed by the features of AI systems, particularly those which process biometric and biometric-based data gathered from ‘smart building and infrastructure’ sensor technology, such as video surveillance cameras used for safety and efficiency, while weighing the impact and risks to stakeholders. Users are frequently unaware that they are using AI systems at all, or of what data those systems collect and automatically process.

Anekanta® AI’s AI Risk Intelligence System™, built upon the OECD AI Principles, is a specialised discovery and analytical framework. It takes the form of a questionnaire that challenges developers and users to consider a range of detailed questions about the AI system they have developed, procured or plan to integrate into their operations. It also considers the effects of combinations of AI systems used together, which may produce unintended consequences such as re-identification.

The questions relate to transparency and explainability and cover the system’s level of autonomy, the origin of its inputs, its expected (and sometimes unexpected) outputs, and its effects and impacts. The questionnaire, soon to become an online service, is currently completed in-house by Anekanta® AI’s team in collaboration with the developer or user. Once complete, the impact and risk data set generated from the questionnaire is analysed by Anekanta® AI, producing a thorough, wide-reaching and consistent impact and risk assessment together with risk-mitigation recommendations. The system and its outputs can readily be aligned with the UK’s AI Regulation white paper, the GDPR and the pending EU AI Act. The EU AI Act is driving users and developers towards both legal and voluntary compliance obligations which set a high bar for feature and use-case discovery, and this discipline is transferable to all other regions developing AI legislation and standards.
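To illustrate the general idea of mapping questionnaire answers to an indicative risk tier, here is a minimal, hypothetical sketch loosely modelled on the EU AI Act’s risk categories. The question fields, thresholds and tier labels below are illustrative assumptions for this example only, not Anekanta® AI’s actual questionnaire or scoring logic.

```python
from dataclasses import dataclass


@dataclass
class Answers:
    """Hypothetical subset of questionnaire responses about an AI system."""
    processes_biometric_data: bool       # origin of inputs
    real_time_remote_identification: bool
    autonomy_level: int                  # 0 = human-in-the-loop ... 3 = fully autonomous
    affects_fundamental_rights: bool     # impact on stakeholders


def triage(a: Answers) -> str:
    """Map answers to an indicative risk tier (illustrative rules only)."""
    if a.real_time_remote_identification:
        # Broadly corresponds to practices the EU AI Act prohibits or strictly limits
        return "prohibited/strictly-limited"
    if a.processes_biometric_data or a.affects_fundamental_rights:
        # Broadly corresponds to the Act's high-risk category
        return "high-risk"
    if a.autonomy_level >= 2:
        # Largely autonomous systems attract transparency obligations
        return "limited-risk"
    return "minimal-risk"


# A camera-analytics system processing biometric data with a human in the loop:
print(triage(Answers(True, False, 1, False)))  # high-risk
```

A real assessment would of course weigh many more questions and produce mitigation recommendations alongside the tier; the point of the sketch is only that structured answers yield a comparable, analysable data set.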

The Anekanta® AI Risk Intelligence System™ for biometric and high-risk AI helps organisations build consistency into their governance processes. The system creates a new, comparable data set about the AI, which can be analysed to reach conclusions about the impact and risks of using certain types of software. The framework is based on the requirements of the EU AI Act regarding biometric and biometric-based AI technology and its potential impact on the health, safety and fundamental rights of people. By using the framework, users and developers complete a vital discovery process that moves them towards their obligations under the Act, aligns with the UK’s AI regulatory and principles-based governance frameworks, and lays crucial groundwork for complying with emerging standards such as ISO/IEC 42001 and ISO/IEC 23894.

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Add use case

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.