
Generative AI came onto the scene in 2018 with deepfakes, closely followed by Generative Pre-trained Transformers (GPTs) and other Large Language Models (LLMs). In 2022, it gained worldwide attention with text-to-image generators and ChatGPT. At its essence, generative AI is a class of machine learning that can create new content such as text, images, videos, music, and more.

Generative AI can produce original content beyond what it has seen during training. For example, a generative AI model trained on a dataset of cat images can generate new and unique cat images. It can do the same with videos and can create novel designs and artwork.

Generative AI's applications extend beyond producing text, images, and videos. It can be employed in data augmentation and to generate synthetic data that supplements limited training datasets in ways that may protect private data. It can facilitate many tasks, such as legal research, technical support, fixing computer bugs and fielding customer service inquiries. Generative AI in the workplace has many bright sides, such as higher productivity and new job opportunities. However, the full effect of generative AI on labour markets, whether positive or negative, remains to be seen.

International governance of generative AI is taking shape

In May 2023, at the Hiroshima Summit, G7 countries reviewed the opportunities and challenges of generative AI and agreed to promote safety and trust as it develops. They agreed to discuss areas of concern raised by generative AI, including governance, safeguarding intellectual property rights and copyright, promoting transparency, responding to disinformation and foreign information manipulation, and how to utilise these technologies responsibly. They charged the OECD and other intergovernmental organisations with promoting international co-operation and exploring relevant policy developments.

OECD efforts related to generative AI

The OECD.AI Policy Observatory (OECD.AI) is where countries come together to shape and share policies for trustworthy AI. It tracks national progress in implementing the OECD AI Principles, the first intergovernmental standards for AI.

OECD.AI provides a catalogue of tools for trustworthy AI, works on AI incidents, AI compute capacity and its impact on the environment, potential AI futures and more. All of these have a generative AI dimension. It also tracks regulatory policy developments and sandboxes for specific applications and standards. All of its work on generative AI is accessible on this site.

The OECD began working on generative AI in 2022. In September 2023, it produced a report on generative AI to inform discussions by G7 ministers. The report covers generative AI’s uses, risks and potential future evolutions.

The OECD also produced a paper on AI language models that explores their basic building blocks from a technical perspective using the OECD Framework for the Classification of AI Systems. The paper also presents policy considerations associated with AI language models through the lens of the OECD AI Principles.

In April 2023, the OECD held two workshops on AI futures and generative AI. They explored the rapidly evolving landscape and technical capabilities of generative AI and how policy makers can seize the positive potential while mitigating the risks and negative consequences.

Future work on generative AI may include developing metrics and tools for trustworthy AI to foster interoperability across governance frameworks.