The European Union could rethink its definition of General Purpose AI Systems (GPAIS)

The European Union (EU) is in the process of regulating artificial intelligence (AI) through the AI Act. Within the vast spectrum of issues under the Act’s aegis, the treatment of technologies classified as General Purpose AI Systems (GPAIS) merits special consideration.

As it stands today, the official EU proposals define GPAIS as AI systems that:

From our perspective, more needs to be done to differentiate GPAIS from other systems, and given the preliminary status of these definitions, we are optimistic that this will happen. As they stand, these definitions make it difficult to distinguish technologies that are GPAIS from those that are not. Clear differentiation between the two is particularly important when it comes to profiling risk according to the four categories established under the Act's draft: unacceptable, high-risk, limited risk, and low-risk.

Some AI systems have a fixed purpose and some do not. The distinction is important.

Systems in the high-risk category need to comply with special requirements that are meant to protect the public. In the context of the AI Act, policy makers must follow several steps, including assessing a system’s “intended purpose,” to determine which systems need this high level of scrutiny. And this point is precisely where AI systems can differ.

On the one hand, there are systems with a fixed purpose that are designed to perform specific tasks. Examples of these technologies include those dedicated to image classification, identity validation, or voice recognition, among others. On the other hand, there are systems that can perform many tasks and serve many purposes, including unforeseen ones. This means that an AI system can lack a clear and unique intended purpose, yet that purpose is one of the elements upon which its level of risk is determined.

Including guidance and definitions to differentiate between these types of AI systems would improve the EU governance of AI. It would allow actors to determine not only clear obligations and responsibilities, but also which stakeholders involved in the design, development, and deployment of AI must take them on.

A revised definition could bring clarity

It is only fair to recognize the difficulty of nailing down AI terminology; AI itself has dozens, if not hundreds, of proposed definitions. Meanwhile, prior to the EU's usage of GPAIS, the term was seldom employed, and there is still no consensus around exactly which technologies it encompasses. This is likely due to the term's broad nature.

This time, however, the question is which alternatives exist to define GPAIS with more precision and clarity. For instance, systems under this moniker could be defined based on their abilities, domain, tasks, or outputs.

To give the discussion a focal point, we published a working paper that proposes a new definition of GPAIS centred on tasks. In our characterization, a task describes a distinct type of problem, how a goal is defined, or what actions a system must take. With that in mind, we propose a GPAIS definition that describes technologies that can:

  • Accomplish or be adapted to accomplish a range of distinct tasks, including some for which it was not intentionally and specifically trained.

The value of our definition is that it addresses the problem of clarity by establishing which technologies fall within the scope of GPAIS. In addition, the focus on tasks is useful because it is a standard term and structure in machine learning research, allowing for clear classification. The definition also distinguishes between two “generalities”: one that is associated with the ability to do tasks for which a system was not specifically trained, and another attained by bundling many fixed-purpose systems together into a single system so that it can perform many tasks.

We hope that EU policy makers will consider using it as a baseline to differentiate between systems with and without an intended purpose. Leaving this matter unaddressed will lead to confusion around the interpretation of GPAIS in the AI Act. 

Flexibility is key

Another advantage of our definition is its flexibility to include a variety of systems: those that have one modality (GPT-3 and BLOOM) and those that have several (Stable Diffusion and DALL-E). It also covers systems trained through different methods, such as supervised learning (Gato) or reinforcement learning (MuZero).

At the same time, it excludes AI systems that are fixed-purpose and where policy makers can easily identify the intended purpose on their own, even when used in diverse applications or domains. While the proposed definition is qualitative, it could in principle be made more quantitative by using task performance metrics.
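To make the quantitative idea concrete, here is a minimal sketch of how the task-based definition could, in principle, be operationalised with performance metrics. Everything here is an assumption for illustration: the thresholds, the notion of "adequate performance," and the `TaskResult` structure are hypothetical, not part of the Act or of our working paper.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    task: str                # a distinct type of problem, e.g. "translation"
    score: float             # normalised performance metric in [0, 1] (assumed)
    in_training_scope: bool  # was the system specifically trained for this task?

def is_general_purpose(results: list[TaskResult],
                       min_tasks: int = 3,
                       min_score: float = 0.5) -> bool:
    """Hypothetical check: a system counts as general-purpose if it performs
    adequately on a range of distinct tasks, including at least one for which
    it was not intentionally and specifically trained."""
    adequate = [r for r in results if r.score >= min_score]
    distinct_tasks = {r.task for r in adequate}
    has_unforeseen = any(not r.in_training_scope for r in adequate)
    return len(distinct_tasks) >= min_tasks and has_unforeseen

# A large model evaluated on tasks outside its training scope could pass;
# a fixed-purpose image classifier, evaluated only on its intended task,
# would not.
results = [
    TaskResult("translation", 0.8, False),
    TaskResult("summarisation", 0.7, False),
    TaskResult("code generation", 0.6, False),
]
print(is_general_purpose(results))  # True under these assumed thresholds
```

Under this sketch, the qualitative judgement ("range of distinct tasks") becomes a tunable threshold, which is one way regulators could trade off precision against coverage.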

The world is watching

The EU AI Act will be a transformative piece of legislation, having an impact on stakeholders globally by setting a standard for AI governance. Therefore, it is crucial to develop a definition of GPAIS that adequately identifies these technologies so that potential regulatory gaps and societal risks are mitigated. This is even more important given that the structure of the Act relies on assessing a technology’s intended purpose for classification. If GPAIS do not get direct and appropriate framing, there is a risk that responsibility for their safety will either be unassigned or assigned to parties unable to uphold their duties.

In closing, we care about the long-term management of AI systems without an "intended purpose." We believe it is crucial for the EU to proactively protect consumers, create clear expectations for developers, and allow policy makers to make informed decisions. Our proposal for a GPAIS definition represents a constructive first step that we hope catalyzes action to build an effective and enduring AI risk management framework in the EU.


Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.