These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Living guidelines on the responsible use of Generative AI in research
Procedural | Uploaded on Mar 26, 2024
These guidelines focus on one particular area of AI used in the research process, namely generative artificial intelligence. They are an important step to prevent misuse and to ensure that generative AI plays a positive role in improving research practices.
Objective(s)
SAIF CHECK
Procedural | Saudi Arabia | Uploaded on Mar 26, 2024
An LLM survey for responsible, transparent and safe AI, covering international compliance regulations and data/model evaluations.
Objective(s)
Teeny-Tiny Castle
Educational | Uploaded on Mar 14, 2024
Teeny-Tiny Castle is a collection of tutorials on how to use tools for AI ethics and safety research.
Objective(s)
Ethical Problem Solving
Procedural | Brazil | Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).
Objective(s)
Responsible AI Governance Framework for boards
Procedural | United Kingdom | Uploaded on Feb 20, 2024
This Responsible AI Governance framework (2024) provides boards with signposts to high-level areas of responsibility and accountability through a checklist of twelve principles that could sit at the heart of an AI governance policy. It is for boards that wish to start their AI journey, or for those that recognize that AI governance may be mission-critical since the emergence of generative AI (Gen AI) and large language models (LLMs).
Objective(s)
Privacy Impact Risk Assessment System for Remote Biometrics and Facial Recognition
Procedural | United Kingdom | Uploaded on Feb 20, 2024
Anekanta® AI’s Privacy Impact Risk Assessment System™ identifies the potential privacy impact risks that arise when using remote biometric AI systems, including facial recognition technology. The system creates automated reports that recommend mitigations considering all risks, including AI governance, human rights, union law, employment rights, privacy rights, impact assessments, prohibitions, voluntary and harmonized standards, and good practice.
Objective(s)
AI Risk Intelligence System™ for biometric and high-risk AI
Procedural | United Kingdom | Uploaded on Feb 20, 2024
Anekanta® AI’s AI Risk Intelligence System™ solves the problem of biometrics and AI system risk classification by challenging the entire AI system lifecycle and providing risk reports that inform compliance requirements in line with the EU AI Act and international regulatory frameworks.
Objective(s)
Casebase
Technical, Procedural | Germany | Uploaded on Feb 19, 2024
Casebase is a platform for portfolio management of data analytics & AI use cases. It supports companies in systematically developing their ideas in the field of artificial intelligence, documenting the development process and managing their data & AI roadmap. Particularly in the context of AI governance and the EU AI Act, Casebase helps manage the risks of AI systems over their entire life cycle.
Objective(s)
Generative AI Framework for HMG (HTML)
Procedural | United Kingdom | Uploaded on Jan 24, 2024
Ten core principles for generative AI use in government and public sector organisations.
Objective(s)
Related lifecycle stage(s)
Plan & design
Proposed Model Governance Framework for Generative AI
Procedural | Singapore | Uploaded on Jan 24, 2024
This Model AI Governance Framework for Generative AI seeks to set forth a systematic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation.
Objective(s)
Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models
Procedural | Uploaded on Jan 24, 2024
This guidance addresses one type of generative AI, large multi-modal models (LMMs), which can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm.
Objective(s)
Adigital's Algorithmic Transparency Certificate
Technical | Uploaded on Jan 22, 2024
The Algorithmic Transparency Certificate from Adigital offers a compliance solution for organizations that use AI systems in their daily activity, reinforcing confidence in these systems and technologies through clearly communicated transparency.
Objective(s)
AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models
Technical, Educational, Procedural | United States | Uploaded on Jan 17, 2024
This document provides risk-management practices and controls for identifying, analyzing, and mitigating risks of large language models and other general-purpose AI systems (GPAIS) and foundation models. It facilitates conformity with, or use of, leading AI risk-management standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.
Objective(s)
Guidelines for Development of Trustworthy AI
Procedural | Korea | Uploaded on Jan 17, 2024
These guidelines serve as practical reference material for stakeholders such as data scientists and AI model developers working in the field of AI product and service development, helping to ensure the trustworthiness of AI. The guidelines present 15 development requirements and 67 verification items that can be checked.
Objective(s)
transformer-deploy
Technical | France | Uploaded on Dec 15, 2023
An efficient, scalable and enterprise-grade CPU/GPU inference server for Hugging Face transformer models.
Objective(s)
Text classification Keras
Technical | China | Uploaded on Dec 15, 2023
Text classification models implemented in Keras, including FastText, TextCNN, TextRNN, TextBiRNN, TextAttBiRNN, HAN, RCNN, and RCNNVariant.
Objective(s)
Related lifecycle stage(s)
Build & interpret model
Speech transformer
Technical | China | Uploaded on Dec 15, 2023
A PyTorch implementation of Speech Transformer, an end-to-end ASR system using a Transformer network, for Mandarin Chinese.
Objective(s)
Related lifecycle stage(s)
Operate & monitor, Deploy, Verify & validate, Build & interpret model, Collect & process data, Plan & design
PyImageSearch CV/DL CrashCourse
Technical | Uploaded on Dec 15, 2023
Repository for the PyImageSearch Crash Course on Computer Vision and Deep Learning.
Objective(s)
Polyscope
Technical | United States | Uploaded on Dec 15, 2023
A C++ & Python viewer for 3D data such as meshes and point clouds.
Objective(s)
Pixel-Perfect Structure-from-Motion
Technical | Switzerland | Uploaded on Dec 15, 2023
Pixel-Perfect Structure-from-Motion with Featuremetric Refinement (ICCV 2021, Best Student Paper Award).
Objective(s)
Related lifecycle stage(s)
Verify & validate, Build & interpret model, Collect & process data, Plan & design