The OECD Framework for Classifying AI Systems: assessing policy challenges and supporting international standards in AI
AI enables new ways to learn, work, play, interact and live. In many cases, AI can make people's lives easier (autocorrect), more secure (facial recognition on smartphones) or even more exciting (online gaming). However, as AI spreads across sectors, different types of AI systems deliver different benefits, such as financial prosperity, alongside risks in areas such as human rights and personal safety.
Both the risks and benefits bring policy and regulatory challenges. While most people would agree that governments should take steps to protect citizens, they probably also agree that progress should not be impeded by excessive regulation. Nor do they want to relinquish individual rights.
To add another layer of complexity, these considerations do not depend on the technology alone. Factors such as context, and which stakeholders are affected by an AI system's tasks and outputs, are also part of the equation.
Here are a few examples. Consider the differences between a virtual assistant, a self-driving vehicle and an algorithm that recommends videos for children. It is easy to see that the benefits and risks can be measured not only by what the technology can do but also by the context in which it acts and the stakeholders who are affected by the outcomes. This complexity is even more obvious in the example of facial recognition technology, which is great for smartphone security but can threaten human rights and liberties in other circumstances.
The OECD Framework for Classifying AI Systems
On 22 February, the OECD will release a user-friendly framework to guide policy makers, regulators, legislators and others as they characterise AI systems for specific projects and contexts. The framework links AI system characteristics with the OECD AI Principles, the first set of AI standards that governments pledged to incorporate into policy-making and that promote the innovative and trustworthy use of AI.
The framework’s key dimensions structure AI system characteristics and interactions
The dimensions help users to sort out critical details about an AI system: how it works, where responsibilities lie, and how its outcomes may affect people and the environments in which it operates.
The framework classifies AI systems along the following dimensions: People & Planet, Economic Context, Data & Input, AI Model and Task & Output. Each one has its own properties and attributes that help assess policy considerations of particular AI systems. Stakeholders are involved in or affected by AI systems, while AI actors play active roles within each dimension and throughout an AI system's lifecycle.
A tool to facilitate deliberations around AI policy development
The framework allows users to zoom in on specific risks that are typical of AI, such as bias, explainability and robustness, while remaining generic enough to facilitate nuanced and precise policy debate. The framework can also help develop policies and regulations, since an AI system's characteristics influence the technical and procedural measures needed to implement them.
The framework promotes a common understanding of AI by identifying the features of AI systems that matter most. Both in and out of government, the framework can inform registries or inventories by guiding work to describe systems and the basic characteristics of algorithms or automated decision systems.
As mentioned, the OECD framework is generic, but there is a need for frameworks that are sector-specific in areas such as healthcare and finance. The OECD framework can provide the basis for more detailed application or domain-specific catalogues of criteria and has already done so for a UK healthcare classification exercise that will be discussed during the framework’s launch event on 22 February.
The role of an AI system’s lifecycle
The framework uses the AI system lifecycle as a complementary structure for understanding a system's key technical characteristics. Each dimension of the OECD Framework for the Classification of AI Systems can be associated with different stages of the AI system lifecycle to identify the AI actors relevant to that dimension, which links the framework directly to accountability and risk management.
The international nature of AI calls for harmonised approaches
Artificial intelligence crosses borders, so it is important for governments to work together to create minimum standards on how data are collected, used and stored, and on what kind of transparency should exist. This should also lead to greater collaboration among global organisations and platforms. Classification systems can help guide that cooperation.
A first step towards impact, risk assessment and mapping AI systems
The current framework is meant to provide the basis for a future risk-assessment framework that helps reduce and mitigate risks. It will also provide a baseline for the OECD, its Members and partner organisations to develop a common framework for reporting AI incidents, paving the way for consistency and interoperability in incident reporting.
If adopted broadly, the classification framework should help generate more information about the types of AI systems being deployed around the world, giving policymakers the data needed to map this important domain and to identify interventions that can make AI more beneficial to society at large.