AI & EQUALITY: A Human Rights Toolbox & Initiative
A Human Rights-based Approach to AI Development
It is a common belief in the engineering and data science community (as well as among some policy makers) that data is neutral and represents objective truth.
In this free, five-module foundational online course, we unpack this myth and demonstrate that data and AI systems are relative and contextual.
The course dives into the mechanisms and culture around AI development that can carry and transfer biases and inequalities into AI systems.
Through a human rights-based approach, this course equips its participants with the tools and common vocabulary to create technology that respects the dignity, equality and worth of humans. Focusing on the impact of technology on human beings and their fundamental, inalienable rights, we relate human rights aspects to the entire AI life cycle.
The course* and the AI & Equality community that supports it aim to raise awareness of the interconnectedness of human rights principles and computing technology, so that technology creators and deployers can actively uphold human rights values rather than unwittingly harm them.
Our long-term aim is to catalyze a truly global community and collaboration across disciplines, regions and sectors for the sharing of best practice and the co-creation of AI policy, projects and pilots with a human rights-based approach. This approach goes beyond compliance, seeking to contribute to the public good and ultimately empower human beings.
VALIDATION
*Developed by Women At The Table and the AI & Equality Toolbox initiative, in collaboration with UN Human Rights (OHCHR) and the Alan Turing Institute, this method has been validated across leading academic institutions, including an EPFL Master's thesis, and workshopped at EPFL (3x), Sorbonne Center for AI (3x), TU Eindhoven (3x), University College Dublin (2x), University of Lausanne, Gates Fellows at Cambridge, Queen Mary University, KNUST Ghana, Makerere Uganda, University of Lagos, African Centre for Technology Studies (Kenya), American University Cairo, Chile's National Center for AI CENIA, Chulalongkorn (Thailand), Cambridge University Computer Science, Technical University Munich, AIDA (the EU's AI Doctoral Academy), the 2024 European Society for Engineering Education (SEFI) Conference, and with the Canton of Geneva Public Sector.
AI & EQUALITY COMMUNITY (PLATFORM)
AI & EQUALITY is an initiative with the aim of integrating a human rights-based approach to AI development into academic and industry practice. More than the creators of an online course*, we are also a thriving community where multidisciplinary colleagues from industry and academia discuss questions around human rights-based AI and work towards a portfolio of best-practice examples.
The community offers an online discussion forum, Open Studios on research, Author book talks on emerging topics, and community publications.
AUDIENCE
- Policy makers, public sector technologists and workers, social scientists, data and computer scientists, and related technical fields
- NGOs and international cooperation and aid organisations working in deployment and data analysis
- Everyone involved in creating AI-assisted systems, in their regulation, or affected by their impact
In short: everyone.
The course is aimed at a multi-disciplinary audience, bridging backgrounds, origins, levels of expertise (around AI, responsible AI, or lived experience), and disciplines, from ML engineers and product managers to UX researchers, lawyers, and community activists.
We believe that the creation of good technology requires a multidisciplinary effort, strengthened by consultation, consensus, and diverse disciplinary perspectives, including the lived experiences of the people who will be affected by the AI systems. The course establishes common ground by providing a basic understanding and a common vocabulary for participating in this important conversation. We want to enable the confident collaboration and critical analysis necessary to reflect on the objectives and outcomes of new technology products and their impact on our communities.
LEARNING OUTCOMES
- Enable participants contributing to the creation or regulation of AI systems (or considering doing so), as well as impacted communities, to understand how their area of expertise (including their lived experience) relates to the human rights impacts of AI systems.
- Equip participants with the mindset and knowledge to consider and innovate mechanisms for creating and deploying AI systems with a human rights and public good perspective.
- Employ vocabulary, critical analysis and a multidisciplinary discussion methodology that help policy makers, social scientists and community members converse with technologists (and vice versa) on technology's intended as well as unintended human rights consequences.
- Bridge gaps between disciplines working in the creation and regulation of AI systems by showing how the fields relate and why they are both complementary and necessary in the creation of human rights-compliant AI systems.
METHOD
Using real-world examples, accessible yet technical language, and follow-along coding exercises for those interested, the course, community and workshops bridge disciplines, enabling conversations on the concrete human rights aspects of developing and regulating AI technologies.
FORMALITIES
The course is available for free on the Sorbonne Centre for AI (SCAI) website and learning portal (link to course) and on our AI & EQUALITY community platform (link to platform). It can be completed in approximately eight hours, and a certificate can be obtained on the SCAI website by passing a multiple-choice exam.
The five modules cover:
Module 1. Human Rights & AI Systems
What are Human Rights? Core principles & legal frameworks
How AI systems can contradict the core values of Human Rights
Module 2. How Harms to Human Rights Enter the AI Lifecycle
Introducing the six stages of the AI lifecycle
For each stage:
Outlining the purpose of the stage
Demonstrating different entry points of bias
Module 3. Fairness Metrics: Technical Measures are Not Neutral
Using the example of fairness metrics to illustrate that different metrics lead to different, and sometimes contradictory, real-world outcomes. Which metric is most suitable for a system must be a conscious choice, informed by the system's socio-technical context.
Module 4. A Human Rights-Based Approach to AI Development
For each stage of the life cycle:
Introducing considerations and reflection points that are required to create Human Rights-respecting AI systems.
Best practice examples or tools that support the practical implementation
Module 5. Putting the Human Rights-based Approach into Practice
Two case studies based on the same World Bank Findex dataset from Sub-Saharan Africa, demonstrating that a human rights-based approach can manifest in very different measures depending on a system's objective, even for the same dataset and functionality.
Optional: follow along in a Jupyter notebook to see how the mechanisms and different fairness metrics play out in code.
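To make the point from Modules 3 and 5 concrete, here is a minimal toy sketch, not the course's actual notebook, with hypothetical function names and fabricated toy data, showing how two standard fairness metrics can disagree about the same set of predictions:

```python
# Toy illustration (assumption: not from the course materials) of two common
# fairness metrics that can disagree on identical predictions.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction (approval) rates between groups A and B."""
    rate = lambda g: sum(p for p, gr in zip(y_pred, group) if gr == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (approval rate among qualified applicants)."""
    def tpr(g):
        preds = [p for t, p, gr in zip(y_true, y_pred, group) if gr == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr("A") - tpr("B"))

# Toy loan data: 1 = qualified/approved, 0 = unqualified/denied.
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [ 1,   1,   0,   0,   1,   0,   0,   0 ]
y_pred = [ 1,   1,   0,   0,   1,   0,   0,   0 ]

# Approval rates differ (A: 0.5, B: 0.25), so demographic parity is violated...
print(demographic_parity_gap(y_pred, group))         # 0.25
# ...yet every qualified applicant in both groups is approved (TPR 1.0 vs 1.0),
# so equal opportunity is perfectly satisfied.
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.0
```

The same classifier passes one metric and fails the other, so which gap matters has to be decided from the system's socio-technical context rather than read off the code.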
Different Formats for Taking the Course:
- Option 1: Take the course fully asynchronously via the Sorbonne website, with the additional option to engage in discussions on our community platform, Circle.
- Option 2: A one-hour presentation by a representative of our community that serves as an introduction to the course and the methodology.
- Bespoke presentations and workshop courses for the public sector are also available. Contact emma@womenatthetable.net for more details.
The AI & Equality Toolbox site presents the initiative in more detail, alongside the 2025 White Paper: Integrating Human Rights Considerations Along the AI Lifecycle, A Framework for AI Development.
About the tool
Tags:
- human-ai
- ai ethics
- building trust with ai
- data governance
- responsible ai collaborative