Overview of the AI Principles
The OECD AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. Adopted in May 2019, they set standards for AI that are sufficiently practical and flexible to stand the test of time.
Values-based principles
Inclusive growth, sustainable development and well-being
Human-centred values and fairness
Transparency and explainability
Robustness, security and safety
Accountability
Recommendations for policy makers
Investing in AI R&D
Fostering a digital ecosystem for AI
Providing an enabling policy environment for AI
Building human capacity and preparing for labour market transition
International co-operation for trustworthy AI
The OECD AI Principles focus on how governments and other actors can shape a human-centric approach to trustworthy AI. As an OECD legal instrument, the principles represent a common aspiration for its adhering countries.
AI-related terms and concepts
An AI system is a machine-based system that can influence its environment by producing outputs (such as predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-provided data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis that is automated (e.g. machine learning) or manual; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.
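To make the three steps of this definition concrete, the toy sketch below (not part of the OECD text; all names, readings and thresholds are hypothetical illustrations) shows a minimal system that perceives an environment, abstracts the perceptions into a trivial model, and uses model inference to produce a recommendation as its output.

```python
# Illustrative toy only: a minimal "AI system" in the sense of the definition
# above. Names, readings and the objective are hypothetical.

from statistics import mean


def perceive(environment: list[float]) -> list[float]:
    """(i) Perceive a (virtual) environment: here, a series of temperature readings."""
    return environment


def build_model(observations: list[float]) -> float:
    """(ii) Abstract perceptions into a model through automated analysis.

    A deliberately trivial 'model': the mean of past observations.
    """
    return mean(observations)


def recommend(model: float, objective: float) -> str:
    """(iii) Use model inference to formulate an output (a recommendation)."""
    if model > objective:
        return "recommendation: cool the room"
    if model < objective:
        return "recommendation: heat the room"
    return "recommendation: no action needed"


if __name__ == "__main__":
    readings = perceive([20.5, 21.0, 22.3, 23.1])   # machine-generated inputs
    model = build_model(readings)                    # automated analysis
    print(recommend(model, objective=21.0))          # output that can influence the environment
```

Real systems replace the trivial averaging step with learned models and operate with varying levels of autonomy, but the perceive, model and infer stages map onto the same definition.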
The OECD’s work on Artificial Intelligence and rationale for
developing the OECD Recommendation on Artificial Intelligence
AI is a general-purpose technology that has the potential to improve
the welfare and well-being of people, to contribute to positive
sustainable global economic activity, to increase innovation and
productivity, and to help respond to key global challenges. It is
deployed in many sectors ranging from production, finance and
transport to healthcare and security.
Alongside benefits, AI also raises challenges for our societies and
economies, notably regarding economic shifts and inequalities,
competition, transitions in the labour market, and implications for
democracy and human rights.
The OECD has undertaken empirical and policy activities on AI in
support of the policy debate over the past two years, starting with a
Technology Foresight Forum on AI in 2016 and an international
conference on AI: Intelligent Machines, Smart Policies in 2017. The
Organisation also conducted analytical and measurement work that
provides an overview of the AI technical landscape, maps economic and
social impacts of AI technologies and their applications, identifies
major policy considerations, and describes AI initiatives from
governments and other stakeholders at national and international
levels.
This work has demonstrated the need to shape a stable policy
environment at the international level to foster trust in and adoption
of AI in society. Against this background, the OECD Committee on
Digital Economy Policy (CDEP) agreed to develop a draft Council
Recommendation to promote a human-centric approach to trustworthy AI
that fosters research, preserves economic incentives to innovate, and
applies to all stakeholders.
An inclusive and participatory process for developing the
Recommendation
The development of the Recommendation was participatory in nature,
incorporating input from a broad range of sources throughout the
process. In May 2018, the CDEP agreed to form an expert group to scope
principles to foster trust in and adoption of AI, with a view to
developing a draft Council Recommendation in the course of 2019. The
AI Group of experts at the OECD (AIGO) was subsequently established,
comprising over 50 experts from different disciplines and different
sectors (government, industry, civil society, trade unions, the
technical community and academia). (Note: AIGO is not the Working
Party on Artificial Intelligence Governance, the official working
party operating under the supervision of the Committee on Digital
Economy Policy.)
Between September 2018 and February 2019 the group held four meetings:
in Paris, France, in September and November 2018; in Cambridge, MA,
United States, at the Massachusetts Institute of Technology (MIT) in
January 2019, back to back with the MIT AI Policy Congress; and
finally in Dubai, United Arab Emirates, at the World Government Summit
in February 2019. The work benefited from the diligence, engagement
and substantive contributions of the experts participating in AIGO, as
well as from their multi-stakeholder and multidisciplinary
backgrounds. Drawing on the final output document of the AIGO, a draft
Recommendation was developed in the CDEP, in consultation with other
relevant OECD bodies. The CDEP approved a final draft
Recommendation and agreed to transmit it to the OECD Council for
adoption in a special meeting on 14-15 March 2019. The OECD Council
adopted the Recommendation at its meeting at Ministerial level on
22-23 May 2019.