These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
SUBMIT A TOOL
If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!
AI Incident Database
Technical · Educational · Uploaded on Apr 25, 2024
The AI Incident Database is a free and open-source project dedicated to indexing the collective history of harms or near-harms realized in the real world by the deployment of artificial intelligence systems.
Related lifecycle stage(s): Operate & monitor

Map Lead Scraper
Technical · Uploaded on Apr 18, 2024
A tool to scrape local business data from Google Maps.
Living guidelines on the responsible use of Generative AI in research
Procedural · Uploaded on Mar 26, 2024
These guidelines focus on one particular area of AI used in the research process: generative artificial intelligence. They represent an important step toward preventing misuse and ensuring that generative AI plays a positive role in improving research practices.
Teeny-Tiny Castle
Educational · Uploaded on Mar 14, 2024
Teeny-Tiny Castle is a collection of tutorials on how to use tools for AI Ethics and Safety research.
Ethical Problem Solving
Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).
Responsible AI Governance Framework for boards
Procedural · United Kingdom · Uploaded on Feb 20, 2024
This Responsible AI Governance framework (2024) provides boards with signposts to high-level areas of responsibility and accountability through a checklist of twelve principles that could sit at the heart of an AI governance policy. It is for boards that wish to start their AI journey, or that recognize that AI governance may be mission-critical since the emergence of generative AI (Gen AI) and large language models (LLMs).
Casebase
Technical · Procedural · Germany · Uploaded on Feb 19, 2024
Casebase is a platform for portfolio management of data analytics and AI use cases. It supports companies in systematically developing their ideas in the field of artificial intelligence, documenting the development process, and managing their data and AI roadmap. Particularly in the context of AI governance and the EU AI Act, Casebase helps manage the risks of AI systems over their entire life cycle.
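The idea of managing an AI use-case portfolio across its life cycle can be made concrete. Below is a minimal sketch, assuming a hypothetical register schema (the `UseCase` class and example entry are illustrative, not Casebase's actual data model); the stage names follow the lifecycle labels used elsewhere in this catalogue:

```python
from dataclasses import dataclass, field

# Lifecycle stages as labeled in this catalogue's "Related lifecycle stage(s)" fields.
STAGES = ["Plan & design", "Collect & process data", "Build & use model",
          "Verify & validate", "Deploy", "Operate & monitor"]

@dataclass
class UseCase:
    """One AI use case in a portfolio-style risk register (hypothetical schema)."""
    name: str
    stage: str
    risks: list = field(default_factory=list)

    def advance(self):
        # Move the use case to the next lifecycle stage, if one remains.
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]

# Illustrative portfolio with a single deployed use case carrying an open risk.
portfolio = [UseCase("churn-prediction", "Deploy", risks=["data drift"])]
portfolio[0].advance()
print(portfolio[0].stage)  # → Operate & monitor
```

The point of a register like this is that risks stay attached to the use case as it moves through stages, so governance reviews can query open risks per stage at any time.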
Proposed Model Governance Framework for Generative AI
Procedural · Singapore · Uploaded on Jan 24, 2024
This Model AI Governance Framework for Generative AI seeks to set forth a systematic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation.
AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models
Technical · Educational · Procedural · United States · Uploaded on Jan 17, 2024
This document provides risk-management practices and controls for identifying, analyzing, and mitigating the risks of large language models and other general-purpose AI systems (GPAIS) and foundation models. It facilitates conformity with, or use of, leading AI risk-management standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.
Evaluating and Mitigating Discrimination in Language Model Decisions
Uploaded on Dec 14, 2023
This work enables developers and policymakers to anticipate, measure, and address discrimination as language model capabilities and applications continue to expand.
Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems
Procedural · Canada · Uploaded on Nov 14, 2023
In undertaking this voluntary commitment, developers and managers of advanced generative AI systems commit to working to achieve outcomes related to the OECD AI Principles.
Related lifecycle stage(s): Operate & monitor; Deploy; Verify & validate; Collect & process data; Plan & design

Automated Decision-Making Systems in the Public Sector – An Impact Assessment Tool for Public Authorities
Educational · Procedural · Switzerland · Germany · Uploaded on Oct 31, 2023
How can we ensure trustworthy use of automated decision-making systems (ADMS) in public administration? AlgorithmWatch has developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a ready-to-implement framework for the evaluation of specific ADMS by public authorities at different levels.
NIST AI RMF Playbook
Procedural · Uploaded on Oct 26, 2023
The Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF) Core (Tables 1–4 in AI RMF 1.0). Suggestions are aligned to each subcategory within the four AI RMF functions (Govern, Map, Measure, Manage).
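Because the Playbook's suggestions are keyed to subcategories under the four functions, progress against it is naturally tracked as a checklist. A minimal sketch, assuming hypothetical subcategory IDs and completion states (the real AI RMF subcategory numbering and wording differ):

```python
# The four AI RMF Core functions; the subcategory IDs below are illustrative only.
FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

checklist = {
    "Govern 1.1": False,   # hypothetical subcategory: policies documented
    "Map 1.1": False,      # hypothetical subcategory: context established
    "Measure 2.1": True,   # hypothetical subcategory: test sets identified
    "Manage 1.1": False,   # hypothetical subcategory: risks prioritized
}

def progress_by_function(items):
    """Summarize completion counts per AI RMF function as 'done/total' strings."""
    counts = {f: [0, 0] for f in FUNCTIONS}
    for key, done in items.items():
        fn = key.split()[0]          # function name is the first token of the ID
        counts[fn][1] += 1
        if done:
            counts[fn][0] += 1
    return {f: f"{done}/{total}" for f, (done, total) in counts.items()}

print(progress_by_function(checklist))
# → {'Govern': '0/1', 'Map': '0/1', 'Measure': '1/1', 'Manage': '0/1'}
```

Grouping by function rather than by flat ID mirrors the Playbook's own organization, which makes gap analysis per function straightforward.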
NIST Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC)
Procedural · Uploaded on Oct 26, 2023
The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying Playbook, and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.
Artificial Intelligence Risk Management Framework (AI RMF 1.0)
Procedural · Uploaded on Oct 26, 2023
The goal of the AI RMF is to offer organizations designing, developing, deploying, or using AI systems a resource to help them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.
Monitaur Model Governance Platform
Technical · Uploaded on Oct 26, 2023
Monitaur is a model governance software company that enables you to build repeatable patterns and requirements for successful model development, with policies, templates, and applications aligned to regulatory requirements. As a governance solution covering the full model development lifecycle, it provides interfaces for development teams as well as risk-management teams, serving as a system of record that brings transparency, alignment, and greater success to your investments in AI.
Algorithmic Transparency Certification for Artificial Intelligence Systems
Procedural · Uploaded on Oct 19, 2023
The Algorithmic Transparency Certification for Artificial Intelligence Systems, by Adigital, is a robust framework for ensuring AI systems operate with transparency, explainability, and accountability. Grounded in universal ethical principles, it assesses AI systems on various critical factors, preparing organizations for evolving regulations such as the EU AI Act, enhancing societal trust, and fostering competitive market advantage. The certification is a dynamic tool for continuous improvement, driving AI innovation on a solid foundation of responsibility and ethical consideration.
Artificial Intelligence in Hiring: Assessing Impacts on Equality
Procedural · United Kingdom · Uploaded on Oct 6, 2023
This report proposes a model for Equality Impact Assessment of AI tools. It builds on the paper's research, which finds that existing fairness and bias auditing solutions are inadequate for ensuring compliance with UK equalities legislation.
Understanding AI at Work Toolkit
Educational · United Kingdom · Uploaded on Oct 6, 2023
A toolkit for employers and workers seeking to understand the challenges and opportunities of using algorithmic systems that make or inform decisions about workers.
Good Work Algorithmic Impact Assessment
Procedural · Uploaded on Oct 6, 2023
Guidance aimed at encouraging employee-led design and development of algorithmic systems in the workplace.