Responsible AI Working Group Report
We are delighted to report on our mandate and mission to “foster and contribute to the responsible development, use and governance of human-centred AI systems, in congruence with the UN Sustainable Development Goals, ensuring diversity and inclusivity to promote a resilient society, in particular, in the interest of vulnerable and marginalised groups.”

Our Expert Working Group considers that ensuring responsible and ethical AI is more than designing systems whose results can be trusted: it is about how we design them, why we design them, and who is involved in designing them. Responsible AI is not, as some may claim, a way to give AI systems some kind of ‘responsibility’ for their actions and decisions, thereby discharging people, governments and organizations of their own responsibility. Rather, it is those who shape AI tools who should take responsibility, acting in accordance with the rule of law and within an ethical framework that includes respect for human rights, so that these systems can be trusted by society.

To develop and use AI responsibly, we need technical, societal, institutional and legal methods and tools that provide concrete support to AI practitioners and deployers, together with awareness-raising and training that enable the participation of all. The aim is to align AI systems with our societies’ principles, values, needs and priorities, placing the human being at the heart of the decisions and purposes in the design and use of AI.