Lessons for businesses and regulators on implementing trustworthy AI

Following the first blog post, this is the second in a series presenting lessons from the Business at OECD Special Project on the Implementation of Trustworthy AI.  

The project report will be published this month and offers businesses practical guidance on implementing the OECD AI Principles through a series of case studies and related findings.

The study compares the OECD AI Principles with the technical and operational realities of seven companies to assess and document what they do to implement trustworthy AI tools and processes. The report highlights each company's objectives, benefits and challenges in developing, implementing and improving various tools to ensure the responsible use of AI.

Key lessons for businesses implementing the OECD AI Principles

The study offers an overview of the challenges that organizations face as they develop related tools and processes, and presents recommendations for how policy makers and relevant stakeholders can address these challenges.

Seven businesses in the OECD Business study

Case study: Main OECD value-based principle(s)
AWS: Human-centred values and fairness; robustness, security and safety
AXA: Fairness
Meta: Transparency and explainability
IBM: Transparency and explainability; accountability
NEC: Robustness, security and safety; accountability
Microsoft: Transparency and explainability
PwC: Inclusive growth, sustainable development and well-being

Good and deliberate AI governance is key to success

Responsible AI policy and tools must align with the organization's larger governance structure and mission; otherwise, it may be difficult to translate them into practice. Proper AI governance needs to be embedded at all levels of an organization, with clear channels for communicating about and reacting to potential risks. For this reason, companies should dedicate full-time personnel to AI governance: when governance falls to people with other pressing tasks, they are forced into suboptimal trade-offs over how to allocate their time, which can lead to inefficient outcomes.

A careful balance between standardization and customization based on the AI system’s context

Balancing standardization and context-dependent customization is key. Customizing tools to each project increases precision and effectiveness, since AI system requirements can differ depending on the use case. However, some fundamental requirements appear to be identical across all AI projects and may benefit from standardization. In these instances, human oversight and judgement about when to "override" a tool's guidelines can provide flexibility and support customization. Product managers' case-by-case decisions make it possible to override the guidelines or tool when needed to adapt to each project's context, while alerting stakeholders (e.g., clients) to the potential risks of the customization.
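To make the idea concrete, here is a minimal, hypothetical sketch of how organization-wide baseline requirements could be merged with project-specific overrides while flagging each override to stakeholders. The requirement names and values are illustrative assumptions, not taken from any of the case studies.

```python
# Minimal sketch (illustrative only): baseline, organization-wide AI requirements
# merged with project-specific overrides, with each override flagged so that
# stakeholders can be alerted to the associated risk. All names are hypothetical.

BASELINE_REQUIREMENTS = {
    "human_oversight": "required",
    "bias_evaluation": "pre-deployment",
    "explanation_level": "detailed",
}

def apply_project_overrides(baseline: dict, overrides: dict) -> tuple[dict, list[str]]:
    """Return the effective requirements and a list of override notices."""
    effective = dict(baseline)
    notices = []
    for key, value in overrides.items():
        if key in baseline and baseline[key] != value:
            notices.append(
                f"Override: '{key}' changed from '{baseline[key]}' to '{value}' "
                "- flag to stakeholders and document the rationale."
            )
        effective[key] = value
    return effective, notices

# Example: a customer-facing project relaxes the explanation level
project_config, alerts = apply_project_overrides(
    BASELINE_REQUIREMENTS, {"explanation_level": "summary"}
)
for alert in alerts:
    print(alert)
```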

Transparency does not mean explainability

Conventional wisdom says that precision is the most important part of transparency. However, we found that some explanations need approximation. Systematic use of the precise, scientific terms essential to programming and algorithms, in addition to key signals and characteristics, can be overwhelming and even counterproductive for a non-expert audience. This is especially true when explaining the logic behind a specific outcome. In these instances, a broader explanation of how the algorithm works is most likely the best way to achieve the twin goals of transparency and explainability in a balanced way.
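As an illustration of this trade-off, the sketch below contrasts a precise, feature-level explanation with an approximate, plain-language summary of the most influential factors. The model weights and feature names are hypothetical and not drawn from the case studies.

```python
# Minimal sketch (illustrative, not any company's actual tool): turning a precise,
# model-level explanation into an approximate, plain-language one for non-experts.
# The weights and applicant values below are hypothetical.

weights = {"late_payments": 1.8, "account_age_years": -0.6, "num_products": 0.2}
applicant = {"late_payments": 3, "account_age_years": 1, "num_products": 2}

# Precise explanation: per-feature contributions to the score (expert audience)
contributions = {f: weights[f] * applicant[f] for f in weights}
print("Technical explanation:", contributions)

# Approximate explanation: only the most influential factors, in plain language
top_factors = sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)[:2]
print("Plain-language explanation: the decision was driven mainly by",
      " and ".join(f.replace("_", " ") for f in top_factors))
```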

Upskill teams by providing appropriate training in the technical and non-technical aspects of AI

Everyone involved in developing responsible tools and ensuring good governance needs specific skills and a certain level of digital and AI literacy. This can be ensured by offering tailored training based on educational backgrounds, experience and roles. It is also important to note that data literacy is not only about technical training.

Team diversity is critical for tackling issues related to AI governance and the development of responsible AI. Issues pertaining to the societal and ethical aspects of developing and deploying AI are just as important as AI literacy and technical skills. For example, it is important to ensure that teams are aware of the different types of bias that can be present in models and datasets, and are well versed in understanding and assessing the risks associated with a model. This type of training is essential to help teams assess when a model can be considered acceptable from both technical and responsibility perspectives.
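For instance, one basic check that teams could be trained to run and interpret is a comparison of outcome rates across groups. The sketch below uses made-up data and the common "four-fifths" threshold as an illustrative assumption; it does not describe any company's tooling.

```python
# Minimal sketch (illustrative only) of a basic dataset bias check:
# comparing positive-outcome rates across groups. The records and the
# 0.8 ("four-fifths") threshold are illustrative assumptions.

records = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Approval rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: review the model and training data.")
```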

Ensure that structures are in place to secure wider organizational buy-in for AI tools

To be adopted successfully, any new tool requires organization-wide buy-in. Securing buy-in can be a complex exercise and may require a significant investment of time and effort at all levels of the organization, but it is a necessary one. Building on existing tools and knowledge (e.g., tools engineers already use) when developing an AI tool can be beneficial, particularly for building trust and acceptance, since it gives the new tool a precedent.

Ensure continuous improvement

Given the fast-paced evolution characteristic of AI-driven solutions, continuous improvements and updates are key to ensuring the relevance of AI technologies. Soliciting and acting on feedback from stakeholders both inside the organization (e.g., internal users of the tool) and outside it (e.g., clients using the tool) is a good means of continuous improvement. Continuous feedback from users and other specific audiences is crucial, both during and after the development of AI systems. In fact, the perspectives of different user groups, academics and civil society can only be understood through constant consultation and feedback. Direct exchanges with experts are essential to fully understand the possible societal implications of tools, as well as the practical benefits of the AI Principles (e.g., what AI explainability tools mean for polarization, civil society, etc.).

Lessons for regulators

While the project focused on facilitating the successful development, adoption, and use of tools to implement trustworthy AI, regulators can also benefit from some of its findings:

Allow different approaches to implementing the OECD AI Principles

For example, regulators increasingly consider transparency and explainability as regulatory requirements for AI systems. However, for transparency and explainability to be effective, regulators need to work with industry experts to develop guidance on how to achieve transparency, increase explainability, and offer control to users.

Complement high-level requirements with industry best practices and technical tools to ensure AI fairness

Fairness is a complex issue and can have multiple definitions depending on the context. Consequently, just requiring “fairness” from a regulatory level is too general; actors need practical guidance and tools.

Complementing high-level requirements with industry best practices and technical tools would be an effective way of ensuring that the development and deployment of AI products and systems are as fair as possible.

Consider the limitations inherent to collecting and sharing sensitive data

Restrictions on collecting and sharing sensitive data (e.g., gender, age) can make it more difficult to create fair AI solutions. Regulators should also consider that algorithms can "learn" biases through proxies: for example, insurance-related data may embed the frequent perception that drivers of certain types of cars are more likely to be female. However, such biases can only be detected and corrected when access to the sensitive data is available.
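The sketch below illustrates this point with synthetic data: a pricing rule that never sees gender can still produce gender-skewed outcomes through a correlated proxy (here, car type), and the skew can only be measured once the sensitive attribute is available. All data and numbers are invented for illustration.

```python
# Minimal sketch (illustrative assumption, not real insurance data): a rule that
# never sees gender can still produce gender-skewed outcomes through a proxy
# (car type), and the skew can only be audited once gender is available.

import random
random.seed(0)

# Synthetic population: car type correlates with gender (the proxy)
people = [{"gender": g, "car": "compact" if (g == "F") == (random.random() < 0.8) else "suv"}
          for g in random.choices(["F", "M"], k=1000)]

# A "gender-blind" pricing rule that only looks at car type
def premium(person):
    return 400 if person["car"] == "compact" else 550

# Auditing requires the sensitive attribute: average premium per gender
for gender in ("F", "M"):
    group = [premium(p) for p in people if p["gender"] == gender]
    print(gender, round(sum(group) / len(group)))
```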

Look to established AI tools and best practices as important sources of information

Best practices are a good complement to legislation; the tools themselves can also be an important source of information for regulators.

Know that existing standards are being adopted by regulators

Consider that existing standards (e.g., those published by IEEE, ISO, etc.) are becoming increasingly important tools in the regulators' playbook. Major international and national standardization bodies are now working on AI standards, which can be an effective way to draw on the expertise of industry. These bodies will formulate best practices that organizations can work towards to ensure the responsible development and deployment of AI.

The first step to trustworthy AI is awareness

To conclude, implementing trustworthy AI requires stakeholders not only to be aware of the available tools but also to understand how they operate in order to best leverage their potential and achieve the best possible outcomes. At the same time, when drafting future AI policy, regulators and policy makers need to grasp the numerous challenges and trade-offs that organizations face, to ensure successful implementation by the private sector. Both of these objectives have been at the core of this important Business at OECD (BIAC) project.

While AI is not a new technology by any means, this study clearly shows that we are only scratching the surface of its potential. Implementation of the OECD AI Principles for trustworthy AI will remain important in delivering effective AI solutions across sectors.

To continue this discussion, we invite you to participate in the upcoming Business at OECD (BIAC) webinar on Trustworthy AI taking place on 21 April 2022, 15:00–16:30 Paris time (see registration link).



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.