These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
SUBMIT A TOOL USE CASE
If you have a tool use case that you think should be featured in the Catalogue of Tools & Metrics for Trustworthy AI, we would love to hear from you!
Shakers' AI Matchmaking System
Shakers' AI matchmaking tool connects freelancers and projects by analyzing experiences, skills, and behaviours, ensuring precise personal and professional talent-client matches within its vast community.
Higher-dimensional bias in a BERT-based disinformation classifier
Application of the bias detection tool on a self-trained BERT-based disinformation classifier on the Twitter1516 dataset
How You Match functionality in InfoJobs
In InfoJobs, the leading job board in Spain, the information available in candidates' résumés and in posted job offers is used to compute a matching score between a job seeker and a given job offer. This matching score is called ‘How You Match’ and is currently used in multiple user touchpoints across InfoJobs.
Mind Foundry: Using Continuous Metalearning to govern AI models used for fraud detection in insurance
This use case applies continuous metalearning to identify, prioritise and investigate fraudulent claims within the insurance industry.
British Standards Institution: EU AI Act Readiness Assessment and Algorithmic Auditing
AI providers need to ensure that their efforts are properly directed towards full compliance with the EU AI Act. BSI therefore meets the needs of customers who will be regulated under the EU AI Act by offering readiness assessments and algorithm testing before the regulation takes effect.
Nvidia: Explainable AI for credit risk management
This case study focuses on the use of graphics processing units (GPUs) to accelerate SHAP explainable AI models for risk management, assessment and scoring of credit portfolios in traditional banks, as well as in fintech platforms for peer-to-peer (P2P) lending and crowdfunding.
Trilateral Research: Ethical impact assessment, risk assessment, transparency reporting, bias mitigation and co-design of AI used to safeguard children
This case study is focused on the use of an AI-enabled system called CESIUM to enhance decision making regarding safeguarding of children at risk for criminal and sexual exploitation.
Assessment for Responsible Artificial Intelligence
During this six-month pilot, the practical application of a deep learning algorithm from the province of Fryslân was investigated and assessed.
European AI Scanner: Ensuring Compliance with the EU Artificial Intelligence Act
The European AI Scanner facilitates companies' compliance with the EU AI Act, ensuring trustworthy and responsible AI adoption in the European market.
Using BigCode Open & Responsible AI License
This case focuses on the use of the BigCode Open & Responsible AI license to share a large language model for code generation, StarCoder.
Use cases using the IBM Factsheets
Several examples of use cases developed by IBM on how Factsheets can be built in practice.
Human resource management
A sample scenario in the context of human resource management illustrates the functioning of the Fairness Compass.
Human-Robot Interaction Trust Scale (HRITS)
Translation and validation of the Human-Computer Trust Scale for human-robot interaction (HRI), applied to cobots.
Uncovering bugs in a health care model
The Google What-If tool helped a software developer spot errors in their model when assessing performance metrics.
Teaching codeless machine learning to auditors
Training for accountants illustrates that coding isn't always necessary to harness the value of machine learning.
How SAP promotes human agency through its AI policy
SAP provides employees with guidance on human agency and oversight when building AI systems.
Reporting Carbon Emissions on Open-Source Model Cards
Disclosing a model's carbon emissions helps normalize reporting on energy efficiency.
Enterprise ChatGPT and LLM Governance
AI governance is crucial as generative AI systems like ChatGPT raise ethical concerns while businesses use them more extensively. 2021.AI's GRACE platform provides a solution to address these challenges.
CXPlain uncovers how certain factors impact housing prices in Boston
The AI explainability method provides insight into a model that predicts median housing prices.
FairLens detected racial bias in a recidivism prediction algorithm
FairLens assessed the bias of a dataset from an algorithm used to measure a convicted criminal’s likelihood of reoffending.