These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
AI & EQUALITY Community & Online Course: A Human Rights Toolbox
The AI & Equality course demonstrates the relevance of Human Rights to the creation of AI systems, which have real-world impacts on the Human Rights of individuals. The course provides an overview of the mechanisms that lead to negative impacts, as well as the technical and non-technical solutions for building AI systems that respect Human Rights. A Human Rights-based approach enables us to focus on creating technology with equality and inclusion at its core. It is an approach that aims beyond compliance, seeking to contribute to the public good and ultimately to empower human beings.
OUR AI & EQUALITY COMMUNITY (PLATFORM)
AI & EQUALITY is an NGO with the aim of integrating a Human Rights-based approach to AI development into academic and industry practice. We therefore want to be more than the creators of an online course: we want to build a thriving community where multidisciplinary practitioners from industry and academia can discuss questions around responsible AI and work towards a portfolio of best-practice examples. The community welcomes people of all backgrounds, origins, levels of expertise (in AI, responsible AI, or lived experience), and disciplines (from ML engineers to product managers, UX researchers, and lawyers), enabling the multidisciplinary dialogue required to develop Human Rights-respecting systems. Our community offers biweekly discussion groups (around readings and the modules of the online course), coding office hours, community publications, an online discussion forum, and resources such as author talks.
AUDIENCE OF COURSE
In short: everyone. The course is aimed at a multidisciplinary audience, bridging computer science, human rights law, social science, and policy, and including the lived experience of the communities impacted by AI systems.
We believe that the creation of technology should be a multidisciplinary effort, strengthened by consultation, consensus, and diverse disciplinary perspectives, including the lived experiences of the people affected by AI systems. Our course establishes common ground by providing a basic understanding and a shared vocabulary for participating in this important conversation. We want to enable the confident multidisciplinary collaboration and critical analysis necessary to reflect on the objectives and outcomes of new technology products and their impact on our communities.
LEARNING OUTCOMES
- A detailed understanding of a Human Rights-based approach to AI development
  - spanning technical and non-technical solutions to the human rights impacts of AI systems
  - including the mindset and knowledge to apply the approach to your own projects and to innovate on the mechanisms introduced
- A comprehensive understanding of the mechanisms currently working against a Human Rights-based approach
- Critical analysis of, and reflection on, the objectives and outcomes of new technology products and their impact on our communities
- Bridging the gap between the disciplines involved in creating and regulating AI systems
  - understanding that these fields are related, and are both complementary and necessary, in the creation of human rights-compliant AI systems
  - an introduction to the technical and non-technical vocabulary required to participate in multidisciplinary conversations about the intended and unintended consequences of AI systems for human rights
FORMALITIES
The course is available for free on the Sorbonne website (link to course) and on our AI & EQUALITY community platform (link to platform). It can be completed in around eight hours, and a certificate can be obtained via the SCAI website after passing a multiple-choice exam.
The 5 modules cover:
- Module 1: an introduction to Human Rights and their relevance in the AI sphere
- Module 2: the entry points of bias along the AI lifecycle, illustrated with case studies
- Module 3: the example of fairness metrics, demonstrating that technical solutions alone are insufficient
- Module 4: an introduction to a Human Rights-based approach to AI development
- Module 5: two case studies demonstrating how to put this approach into practice
Formats for Taking the Course:
There are three options on how to take the course:
- Option 1: take the course fully asynchronously via the Sorbonne website, with the option to engage in discussions on our community platform, Circle.
- Option 2: attend a one-hour introductory presentation by a representative of our community, after which you can decide whether you would like to take the course (online or in person, depending on location; free of charge). Contact emma@womenatthetable.net for more details.
- Option 3: take the course at your own pace and join the monthly discussion groups for each module: each month we cover one module with a quick recap of its content and plenty of time for discussion.
About the tool
Tags:
- human-ai
- ai ethics
- building trust with ai
- data governance
- responsible ai collaborative