Businesses are applying the OECD AI Principles. How is it going?

This is the first in a series of blog posts highlighting a soon-to-be-published study about how businesses are adapting their practices to implement the OECD AI Principles.

In 2021, Business at OECD (BIAC), the private sector's official voice at the OECD, launched a project to facilitate the adoption, dissemination and implementation of the OECD AI Principles. By evaluating concrete examples where businesses aimed to implement the Principles, the project complements other OECD efforts to promote AI that is ethical, lawful, robust and respectful of human rights and democratic values, in other words, trustworthy AI.

Our project rests on two pillars: in-depth qualitative research into use cases to identify the lessons and challenges companies face when implementing the OECD AI Principles, and use of the OECD's database of tools for trustworthy AI to evaluate the AI systems featured in our cases. The results of our work will also contribute good practices and lessons learned from private sector AI initiatives to the OECD.AI database of tools for trustworthy AI.

Backtracking from practice to principle

To build on the successes of the OECD‘s database of tools for trustworthy AI, Meta (formerly Facebook) suggested that Business at OECD, with the support and engagement of the OECD’s Digital Economy Policy Committee, convene a group of businesses who have developed and are implementing AI tools and processes, to examine how they put the OECD AI Principles into practice and explore what lessons can be drawn from that exercise. As the home of some of the world’s best developers and builders of AI systems, industry is in a great position to set forth meaningful good practices for implementing the Principles.

For a different take on the usual flow from principles to practices, we decided to start with existing practices to evaluate how they could help implement the OECD’s Principles. More specifically, we evaluated how concrete practices can clarify, refine, and improve implementation efforts and therefore contribute to greater adoption and dissemination of the Principles. 

Testing the OECD's principles against businesses' realities

The project leaders interviewed companies about tools, processes, design methodologies and frameworks that raise important policy questions for internal business operations and for clients. We structured the interviews to focus primarily on what the tools are and can do, how to best use and leverage them, and the key challenges in doing so. During the interviews, participants shared information about the tools' design, development and use.

This method allowed us to test the OECD AI Principles directly against each business's technical and operational realities, and then to assess and document the challenges involved in applying the Principles via specific corporate practices. The results show how precise technical, contextual and organisational considerations can guide practical efforts to implement the OECD Principles. They should also contribute to clearer assessments and a better understanding of each AI Principle's technical and operational feasibility.

Addressing challenges across industries

The project yielded deep-dive case studies from seven companies: AXA, Amazon Web Services, IBM, Meta, Microsoft, NEC and PwC. The studies cover tools to tackle a wide range of AI challenges, namely those that organisations face when developing and deploying solutions.

Naturally, each tool aimed to address challenges inherent to the use of AI in the context of the companies’ area of activity and the specific products or services featured in their respective case study. Meta discussed explainability and transparency tools, and AXA explored how to ensure fairness. NEC explained how to best achieve AI quality assurance and robustness, while IBM discussed how transparency can reinforce AI accountability. Amazon showed us a framework that guides critical thinking about important aspects of responsible AI, while Microsoft presented a tool to help other organisations understand how to develop a responsible AI system and make it work. Finally, PwC presented a toolkit that includes the technical assets and governance frameworks to plan for, develop and assess safe and responsible AI.

Effective AI governance helps connect the dots between AI principles and technical challenges

It is worth taking a moment to look at how and where technical requirements meet human values and principles. Aware of the significant benefits that trustworthy AI can deliver, stakeholders from all disciplines and sectors are working hard to put the AI Principles into practice. Implementing trustworthy AI in internal and customer-facing operations is easier said than done. It involves complex decisions and is an iterative process. These decisions can present technical, operational and organisational challenges depending on the context, roles, and technological state of the art at the time.          

While many stakeholders are discussing how to make AI more trustworthy, it is still primarily the specialist communities who address the technical, operational and organisational challenges: developers, engineers, product managers, data scientists and technologists. The broader non-technical policy community that is responsible for setting up the regulatory and policy environment for AI is not always closely involved in addressing the technical challenges.

Fortunately, the OECD group that guides work around the AI Principles, the OECD.AI Network of Experts, is made up of actors from all sectors, creating a community where important broad-based dialogue takes place. Through this forum, the OECD addresses the challenges of implementing the AI Principles with its long-established method of evidence-based dialogue between multistakeholder groups.

To implement responsible AI, technical actors must be aware of and have access to the tools designed for this purpose. They also need a good understanding of how to leverage these tools for the best possible results. From a governance perspective, regulators should have good knowledge of what is practical and feasible for businesses to achieve, and the important trade-offs they face to comply with upcoming regulations.

International efforts to implement the OECD AI Principles are still nascent. Each contribution will, at the very least, provide a learning experience for the more complex challenges that await us. We are confident that our work will help inform the debate around future AI regulations and governance frameworks while providing practical guidance for implementing certain OECD AI Principles.

This is the first in a series of blog posts over the next few weeks to highlight the findings of a Business at OECD project that studies how businesses are putting the OECD AI Principles into practice. Business at OECD will present the findings at a roundtable event on 21 April at the OECD that will bring together participants from across sectors to talk about how to implement the OECD AI Principles and related policy considerations, particularly those highlighted by the case studies.


Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.
