These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!
ITU-T F.749.4 - Use cases and requirements for multimedia communication enabled vehicle systems using artificial intelligence
Uploaded on Apr 23, 2024
This Recommendation specifies use cases and requirements for multimedia communication enabled vehicle systems using artificial intelligence, including overview, use cases, high-layer architecture, service and network requirements, functional requirements, and non-functional requirements.

ValidMind
Technical | United States | Uploaded on Apr 18, 2024
An end-to-end model risk management platform that automates model documentation and dramatically simplifies AI model validation.

Teeny-Tiny Castle
Educational | Uploaded on Mar 14, 2024
Teeny-Tiny Castle is a collection of tutorials on how to use tools for AI Ethics and Safety research.

Evaluating and Mitigating Discrimination in Language Model Decisions
Uploaded on Dec 14, 2023
Our work enables developers and policymakers to anticipate, measure, and address discrimination as language model capabilities and applications continue to expand.

SAHI: Slicing Aided Hyper Inference
Technical | Turkey | Uploaded on Dec 11, 2023
Framework-agnostic sliced/tiled inference with an interactive UI and error-analysis plots.
Related lifecycle stage(s): Build & interpret model

Optuna: A hyperparameter optimization framework
Technical | Uploaded on Dec 11, 2023
A hyperparameter optimization framework.

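Hyperparameter optimization frameworks such as Optuna automate a trial loop: propose a configuration, score it with an objective function, and keep the best result. The sketch below illustrates that loop with plain random search using only the Python standard library; it is a minimal illustration of the idea, not Optuna's actual API, and the objective function is a toy example.

```python
import random

def objective(x):
    # Toy objective with a known minimum at x = 2.
    return (x - 2) ** 2

def random_search(objective, low, high, n_trials, seed=0):
    """Sample n_trials candidates uniformly in [low, high] and keep the best."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(low, high)
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

best_x, best_val = random_search(objective, -10, 10, n_trials=200)
```

Real frameworks replace the uniform sampler with smarter strategies (e.g. Bayesian or evolutionary search) and add pruning of unpromising trials, but the contract is the same: the user supplies an objective, the framework drives the trials.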
Flux
Technical | Uploaded on Dec 11, 2023
Relax! Flux is the ML library that doesn't make you tensor.
Related lifecycle stage(s): Build & interpret model

NVIDIA Deep Learning Examples for Tensor Cores
Technical | United States | Uploaded on Dec 11, 2023
State-of-the-art deep learning scripts organized by model, easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.

Related lifecycle stage(s): Operate & monitor; Deploy; Verify & validate; Build & interpret model; Collect & process data; Plan & design

Anomaly Detection Learning Resources
Technical | United States | Uploaded on Dec 11, 2023
Anomaly detection related books, papers, videos, and toolboxes.

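The resources above cover many anomaly detection techniques; one of the simplest is a z-score threshold on univariate data, sketched below in plain Python. This is an illustrative example only, not taken from any listed toolbox, and the threshold is an assumed rule of thumb.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag values lying more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # All points identical: nothing stands out.
    return [x for x in values if abs(x - mean) / stdev > threshold]

data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 42.0]  # 42.0 is an obvious outlier
print(zscore_anomalies(data, threshold=2.0))  # prints [42.0]
```

A single extreme outlier inflates both the mean and the standard deviation, which is why robust variants (median and MAD-based scores) are often preferred in practice.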
CAN/CIOSC 101: Ethical Design and Use of Automated Decision Systems
Technical, Educational | Uploaded on Nov 29, 2023
This is the First Edition of CAN/CIOSC 101:2019, Ethical design and use of automated decision systems. CAN/CIOSC 101:2019 was prepared by the CIO Strategy Council Technical Committee 2 (TC 2) on the ethical design and use of automated decision systems, comprising over 100 thought leaders and experts in artificial intelligence, ethics, and related subjects.

Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems
Procedural | Canada | Uploaded on Nov 14, 2023
In undertaking this voluntary commitment, developers and managers of advanced generative systems commit to working to achieve outcomes related to the OECD AI Principles.
Related lifecycle stage(s): Operate & monitor; Deploy; Verify & validate; Collect & process data; Plan & design

NIST AI RMF Playbook
Procedural | Uploaded on Oct 26, 2023
The Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF) Core (Tables 1–4 in AI RMF 1.0). Suggestions are aligned to each sub-category within the four AI RMF functions (Govern, Map, Measure, Manage).

NIST Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC)
Procedural | Uploaded on Oct 26, 2023
The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying Playbook, and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.

Artificial Intelligence Risk Management Framework (AI RMF 1.0)
Procedural | Uploaded on Oct 26, 2023
The AI RMF offers organizations designing, developing, deploying, or using AI systems a resource to help manage the many risks of AI and to promote trustworthy and responsible development and use of AI systems.

Monitaur Model Governance Platform
Technical | Uploaded on Oct 26, 2023
Monitaur is a model governance software company that enables you to build repeatable patterns and requirements for successful model development, with policies, templates, and applications aligned to regulatory requirements. It is a full model-development-lifecycle governance solution, with interfaces for development teams as well as risk management teams, so that as a system of record you gain transparency, alignment, and greater success in your AI investments.

CertEye
Technical | Uploaded on Oct 13, 2023
A zero-trust, 360-degree RAIAAS platform for the audit and certification of enterprise-wide AI solutions, using trust indicators that follow ethical standards.

Calvin Risk
Technical | Switzerland | Uploaded on Oct 13, 2023 | >1 year
Calvin Risk develops comprehensive, quantitative solutions to assess and manage the risks of AI algorithms in commercial use. The tool helps companies create a framework for transparency, governance, and standardization to manage their AI portfolios, while ensuring that AI remains safe and compliant with the highest ethical standards and upcoming regulatory requirements.

Logically AI: Testing and monitoring AI models used to counter online misinformation
Uploaded on Sep 14, 2023
Logically uses a human-in-the-loop AI framework called HAMLET (Humans and Machines in the Loop Evaluation and Training) to enable the development of trustworthy and responsible AI technologies.

Fairly AI: FAIRLY End-to-End AI Governance Platform
Uploaded on Sep 14, 2023 | <1 day
FAIRLY provides an AI governance platform focused on accelerating the broad use of fair and responsible AI by helping organisations bring safer AI models to market.

Continuous Metalearning
Procedural | Uploaded on Sep 11, 2023
Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks.
Related lifecycle stage(s): Plan & design