These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Fujitsu LLM Bias Diagnosis
Technical · Procedural · United States, Japan · Uploaded on Apr 19, 2024
Diagnoses bias in large language models (LLMs) from various points of view, allowing users to choose the most appropriate LLM.
Related lifecycle stage(s): Plan & design

Disability-Centered AI and Ethics MOOC
Educational · Uploaded on Apr 2, 2024 · <1 hour
Approaches to disability-centered data, models, and systems oversight.
Ethical Problem Solving
Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).
AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models
Technical · Educational · Procedural · United States · Uploaded on Jan 17, 2024
This document provides risk-management practices and controls for identifying, analyzing, and mitigating the risks of large language models and other general-purpose AI systems (GPAIS) and foundation models. It facilitates conformity with leading AI risk-management standards, adapting and building on the generic voluntary guidance in the NIST AI Risk Management Framework and ISO/IEC 23894, with a focus on the unique issues faced by developers of GPAIS.
Evaluating and Mitigating Discrimination in Language Model Decisions
Uploaded on Dec 14, 2023
This work enables developers and policymakers to anticipate, measure, and address discrimination as language model capabilities and applications continue to expand.
Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems
Procedural · Canada · Uploaded on Nov 14, 2023
In undertaking this voluntary commitment, developers and managers of advanced generative systems commit to working to achieve outcomes related to the OECD AI Principles.
Related lifecycle stage(s): Plan & design · Collect & process data · Verify & validate · Deploy · Operate & monitor

NIST AI RMF Playbook
Procedural · Uploaded on Oct 26, 2023
The Playbook provides suggested actions for achieving the outcomes laid out in the AI Risk Management Framework (AI RMF) Core (Tables 1–4 in AI RMF 1.0). Suggestions are aligned to each sub-category within the four AI RMF functions (Govern, Map, Measure, Manage).
NIST Trustworthy & Responsible Artificial Intelligence Resource Center (AIRC)
Procedural · Uploaded on Oct 26, 2023
The AIRC supports all AI actors in the development and deployment of trustworthy and responsible AI technologies. It supports and operationalizes the NIST AI Risk Management Framework (AI RMF 1.0) and its accompanying Playbook, and will grow with enhancements to enable an interactive, role-based experience providing access to a wide range of relevant AI resources.
Artificial Intelligence Risk Management Framework (AI RMF 1.0)
Procedural · Uploaded on Oct 26, 2023
The AI RMF offers a resource to organizations designing, developing, deploying, or using AI systems, helping them manage the many risks of AI and promoting the trustworthy and responsible development and use of AI systems.
Artificial Intelligence in Hiring: Assessing Impacts on Equality
Procedural · United Kingdom · Uploaded on Oct 6, 2023
This report proposes a model for Equality Impact Assessment of AI tools, building on research finding that existing fairness and bias auditing solutions are inadequate for ensuring compliance with UK equalities legislation.
Understanding AI at Work Toolkit
Educational · United Kingdom · Uploaded on Oct 6, 2023
A toolkit for employers and workers seeking to understand the challenges and opportunities of using algorithmic systems that make or inform decisions about workers.
Good Work Algorithmic Impact Assessment
Procedural · Uploaded on Oct 6, 2023
Guidance aimed at encouraging employee-led design and development of algorithmic systems in the workplace.
The Good Work Charter Toolkit
Educational · United Kingdom · Uploaded on Oct 6, 2023
Provides a regulatory framework for incorporating the rights, freedoms, and obligations relevant to work and people's experience of it, including technology-specific guidance.
Fairly AI: FAIRLY End-to-End AI Governance Platform
Uploaded on Sep 14, 2023 · <1 day
FAIRLY provides an AI governance platform focussed on accelerating the broad use of fair and responsible AI by helping organisations bring safer AI models to market.
Trustworthy and Ethical Assurance Platform
Procedural · United Kingdom · Uploaded on Sep 11, 2023 · >1 year
The Trustworthy and Ethical Assurance platform is an open-source tool and framework that supports the process of developing and communicating trustworthy and ethical assurance cases for data-driven technologies.
CESIUM
United Kingdom · Uploaded on Sep 11, 2023
Using the latest advances in ethical artificial intelligence, CESIUM supports risk assessment for safeguarding the most vulnerable children in society.
AIAAIC Repository
Educational · Uploaded on Jun 8, 2023
An independent, open, public-interest resource detailing incidents and controversies driven by and relating to artificial intelligence, algorithms, and automation.
AI Index
Procedural · Uploaded on May 23, 2023
The AI Index is an independent initiative at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by the AI Index Steering Committee, an interdisciplinary group of experts from across academia and industry. The annual report tracks, collates, distills, and visualizes data relating to artificial intelligence, enabling decision-makers to take meaningful action to advance AI responsibly and ethically with humans in mind.
Australia’s AI Ethics Principles
Procedural · Uploaded on May 23, 2023
Australia’s eight Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure, and reliable.
AI4People's Ethical Framework for a Good Society
Procedural · Uploaded on May 22, 2023
The goal of AI4People is to create a common public space for laying out the founding principles, policies, and practices on which to build a “good AI society”.