
As language models and generative AI take the world by storm, the OECD is tracking the policy implications

ChatGPT has become a household name thanks to its apparent benefits. At the same time, governments and other entities are taking action to contain potential risks. AI language models (LMs) are at the heart of widely used language applications like ChatGPT, so it is worth taking a closer look at the technology and its implications for society.

At their core, language models are statistical predictors of the next word or any other language element given a sequence of preceding words. Their diverse applications include text completion, text-to-speech conversion, language translation, chatbots, virtual assistants, and speech recognition. They enable computers to process and generate human language. They are trained on vast amounts of data using techniques ranging from rule-based approaches to statistical models and deep learning. For all but the simplest models, their internal operations are opaque: there is no significant understanding of how they generate seemingly intelligent and human-like outputs over a wide range of tasks.
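To make the core prediction task concrete, here is a minimal sketch in Python of the simplest kind of statistical language model: a bigram model that estimates the probability of each next word from counts over a tiny, made-up corpus. This is a toy illustration only; the models behind applications like ChatGPT replace these counts with deep neural networks trained on vast text collections, but the underlying task of predicting the next language element is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; real models are trained on vast text collections.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (a "bigram" model).
follow_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[prev_word][next_word] += 1

def next_word_distribution(prev_word):
    """Estimate P(next word | previous word) from the observed counts."""
    counts = follow_counts[prev_word]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()} if total else {}

print(next_word_distribution("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_distribution("sat"))  # {'on': 1.0}
```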

This week, the OECD released “AI language models: Technological, socio-economic and policy considerations”, an overview of the AI language model landscape. The report puts forward policy considerations associated with AI language models through the lens of the OECD AI Principles. Some of the report’s main findings focus on the risks and ethical questions posed by LMs.

AI language models promise to unlock significant opportunities to benefit people by performing tasks in natural language at scale

AI language models are being deployed across sectors such as public administration, healthcare, banking, and education, boosting productivity and decreasing costs. They enable language recognition, interaction, support, and personalisation, as well as interactive dialogue systems and personal virtual assistants. AI language models can also help safeguard minority or endangered languages by allowing them to be heard, taught, and translated.


Policy makers want to create an enabling policy environment while mitigating the risks of AI language models

As individuals and organisations integrate language models into their operations and services, questions are being raised about how policy makers can ensure these transformative models are beneficial, inclusive, and safe. The 2019 OECD AI Principles state that “AI systems should be robust, secure and safe throughout their entire life cycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.” Yet increasingly powerful AI language models raise significant policy challenges related to their trustworthy deployment and use.

AI language models need quality control and standards to address issues of opacity, explainability, accountability and control

Many AI language models use neural networks that are opaque and complex. Even their developers do not fully understand their internal principles of operation or how they reach specific outputs, which leads to unpredictability and an inability to constrain their behaviour. Policy makers must encourage all actors, notably researchers, to develop rigorous quality-control methodologies and standards, appropriate to the application context, for systems to meet.

The complexity of language models also means that it can be difficult and costly to understand which parties and what data are involved in their development, training, and deployment, making it hard to assign accountability to those best placed to mitigate specific risks. In addition, many language technologies in use today build on pre-existing models into which users have little visibility. Human overreliance on the outputs of language models, which readily generate false information, is another risk to accountability.

AI language models pose risks to human rights, privacy, fairness, robustness, security, and safety

AI language models are a form of “generative AI”: models that create new content in response to prompts, based on their training data. That training data can itself include biases, confidential information about individuals, and material subject to existing intellectual property claims and rules. Language models can therefore produce discriminatory or rights-infringing outputs and leak confidential information.
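Continuing the toy bigram sketch from above (with its learned probabilities now written out by hand), the snippet below illustrates why such models are called “generative”: new text is produced simply by sampling the next word over and over, starting from a prompt. It also illustrates the point about training data: the model can only recombine the patterns, and any biases or sensitive details, present in what it was trained on.

```python
import random

# Hand-written toy next-word probabilities, standing in for a trained model.
next_word_probs = {
    "the": {"cat": 0.25, "mat": 0.25, "dog": 0.25, "rug": 0.25},
    "cat": {"sat": 1.0}, "dog": {"sat": 1.0},
    "sat": {"on": 1.0}, "on": {"the": 1.0},
    "mat": {".": 1.0}, "rug": {".": 1.0}, ".": {"the": 1.0},
}

def generate(prompt_word, length=8, seed=0):
    """Generate text by repeatedly sampling the next word from the model."""
    random.seed(seed)
    words = [prompt_word]
    for _ in range(length):
        dist = next_word_probs.get(words[-1])
        if dist is None:  # no learned continuation: stop generating
            break
        words.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the rug . the cat"
```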

AI language models can help actors manipulate opinions at scale and automate mis- and disinformation in a way that can threaten democratic values

AI language models can facilitate and amplify the production and distribution of fake news and other forms of manipulated language-based content that may be impossible to distinguish from factual information, raising risks to democracy, social cohesion, and public trust in institutions. AI “hallucinations” also occur, where models generate incorrect outputs but articulate them convincingly. The combination of AI language models and mis- and disinformation can lead to individually tailored deception at a broad scale that traditional approaches such as fact-checking, detection tools, and media literacy education cannot readily address.

Continued dialogue and research can help to mitigate the risks of complex language models

Further research is essential to understand these complex models and find risk mitigation solutions. One particular set of concerns arises from language models that can take actions directly, such as sending emails, making purchases, and posting on social media. But even so-called “passive” question-answering systems can affect the world by influencing human behaviour, for example by changing people’s opinions, giving them inappropriate medical advice, or convincing them to take certain actions. Language use is, after all, the primary means by which human political leaders and dictators affect the world.

Language and computing resources are as crucial as language models

The limited availability of digitally readable text to train models remains an important issue for many languages. The most advanced language-specific models target languages for which significant digital content is available, such as English, Chinese, French, and Spanish. Policy makers in countries with minority languages are promoting the development of digital language repositories, plans, and research. Multilingual language models can foster inclusion and benefit a broader range of people. Access to computing hardware is also crucial, and more R&D is needed on efficient mechanisms for training and querying large language models in order to reduce their financial and environmental costs.

To ensure the benefits of language technologies are widely shared, actors will have to prepare for economic transitions and equip people with skills to develop and use AI language models

Language models have the potential to automate tasks in many job categories. AI language models such as OpenAI’s Generative Pre-trained Transformer 4 (GPT-4) are increasingly used to help perform tasks previously conducted by people, including high-skill tasks such as writing software code, drafting reports, and even creating literary content. GPT-4 exhibits human-level performance across several standardised tests. Policy makers will have to experiment with new social models and re-evaluate education needs in an era of ubiquitous AI language models.

International, interdisciplinary, and multi-stakeholder cooperation for trustworthy AI language models is required to address harmful uses and impacts

Stakeholders, including policy makers, are beginning to explore related societal impacts and risks. Collaboration is taking the form of sharing best practices and lessons learned in regional and international fora and developing joint initiatives in multilingual language data and models. Yet, work remains to develop viable policy and technical solutions that can effectively mitigate risks from language models and other types of generative AI while fostering their beneficial development and adoption. All actors in the AI ecosystem have key roles to play.

On 19 April, we will be at the OECD Working Party on AI Governance, and the OECD.AI Network of Experts will hold two workshops on AI foresight and generative AI. Both are open to the public to attend online. If you are interested in attending, please find out more about the Expert forum on AI foresight and generative AI and register on this page.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.