Generative AI Issues
Generative AI came onto the scene in 2018 with deepfakes closely followed by Generative Pre-trained Transformers (GPTs) and other Large Language Models (LLMs). In 2022, it gained worldwide attention with text-to-image generators and ChatGPT. At its essence, generative AI is a class of machine learning that can create new content such as text, images, videos, music, and more.
Generative AI can produce original content beyond what it has seen during training. For example, a generative AI model trained on a dataset of cat images can generate new and unique cat images. It can do the same with videos and can create novel designs and artwork.
Generative AI has more applications than producing text, images and videos. It can be employed in data augmentation and to generate synthetic data that supplements limited training datasets in ways that may protect private data. It can facilitate many tasks, such as legal research, technical support, fixing computer bugs and fielding customer service inquiries. There are many bright sides to generative AI in the workplace, such as higher productivity and new job opportunities. However, the full effect of generative AI on labour markets, whether positive or negative, remains to be seen.
International governance of generative AI is taking shape
At the G7 Hiroshima Summit in May 2023, G7 countries reviewed generative AI’s opportunities and challenges and agreed to promote safety and trust as the technology develops. They agreed to discuss specific areas of concern raised by generative AI, including governance, safeguarding intellectual property rights and copyright, transparency, disinformation and foreign information manipulation, and how to use these technologies responsibly. They charged the OECD and other intergovernmental organisations with promoting international cooperation and exploring relevant policy developments.
OECD efforts related to generative AI
The OECD.AI Policy Observatory (OECD.AI) is where countries come together to shape and share policies for trustworthy AI. It tracks national progress in implementing the OECD AI Principles, the first intergovernmental standards for AI.
OECD.AI provides a catalogue of tools for trustworthy AI, works on AI incidents, AI compute capacity and its impact on the environment, potential AI futures and more. All of these have a generative AI dimension. It also tracks regulatory policy developments and sandboxes for specific applications and standards. All of its work on generative AI is accessible on this site.
The OECD began working on generative AI in 2022. In September 2023, it produced a report on generative AI to inform discussions by G7 ministers. The report covers generative AI’s uses, risks and potential future evolutions.
It also produced a paper on AI language models that explores their basic building blocks from a technical perspective using the OECD Framework for the Classification of AI Systems. The paper also presents policy considerations associated with AI language models through the lens of the OECD AI Principles.
In April 2023, the OECD held two workshops on AI futures and generative AI. They explored generative AI’s rapidly evolving landscape and technical capabilities and how policy makers can seize the positive potential while mitigating the risks and negative consequences.
Future OECD work on generative AI may include developing metrics and tools for trustworthy AI to foster interoperability across governance frameworks.
BENEFITS

Language translation and interpretation
Large Language Models (LLMs), one type of generative AI, help transcend language barriers by making translation and interpretation faster, more accurate and more accessible. Fast, accessible translation applications are becoming vital in countries with distinct national languages, such as Korea, Hungary and Thailand, to break down language barriers with other economies in important sectors such as international trade.
Countries with many languages have also begun using generative AI to improve communication across their linguistic communities. India is using language models to translate laws and other official documents into the more than one hundred languages spoken across the country.
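To make this concrete, the sketch below shows machine translation with a small pretrained language model. It assumes the open-source Hugging Face transformers library and the public t5-small checkpoint, chosen purely for illustration; production systems rely on larger, specialised translation models.

```python
# A minimal translation sketch using the Hugging Face "transformers" library.
# "t5-small" is a small public checkpoint used here purely for illustration.
from transformers import pipeline

# Build an English-to-French translation pipeline (the model is downloaded
# on first use).
translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("Generative AI can help break down language barriers.")
print(result[0]["translation_text"])
```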
Chatbots and virtual assistants
Generative AI can power conversational AI systems, enabling chatbots and virtual assistants to generate human-like responses to user queries and engage in more natural conversations. Chatbots have become commonplace in online customer service.
ChatGPT is a chatbot built on a large language model. It responds to prompts with human-like text on a wide array of questions, thanks to the vast amounts of text data and the deep learning techniques used to train it.
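At its core, a chatbot of this kind wraps a text-generation model in a prompt-and-reply loop. The sketch below is a deliberately simplified illustration using the open distilgpt2 model from Hugging Face transformers as a lightweight stand-in; real assistants use far larger, instruction-tuned models with safety filtering.

```python
# A toy chatbot loop around a small open text-generation model.
# "distilgpt2" is a lightweight stand-in; real chatbots use much larger,
# instruction-tuned models with safety filtering.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

def reply(user_message: str) -> str:
    # Frame the exchange as a dialogue and let the model continue it.
    prompt = f"User: {user_message}\nAssistant:"
    output = generator(
        prompt,
        max_new_tokens=60,
        do_sample=True,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    # Keep only the text generated after the prompt.
    return output[0]["generated_text"][len(prompt):].strip()

print(reply("What is generative AI?"))
```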
The biggest risks with chatbots and virtual assistants are misleading and incorrect outputs that appear to be true.
Coding and content creation
Generative AI is creating significant efficiencies by helping developers write code. It can also generate high-quality, realistic texts and videos, saving time and cutting production costs.
For both coding and content, there is always a danger of copyright infringement and plagiarism.
Data augmentation
In machine learning, generative AI can augment training data, leading to better models. It can help create additional synthetic data to enhance the diversity and size of datasets. That, in turn, can improve the model’s generalisation and performance.
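As a simple illustration of the idea, the sketch below fits a Gaussian mixture model, standing in for a more powerful deep generative model, to real tabular data and then samples synthetic rows to enlarge the training set. It assumes only NumPy and scikit-learn, and the "real" data here is randomly generated for demonstration.

```python
# Generative data augmentation sketch: fit a simple generative model to
# real data, then sample synthetic rows to enlarge the training set.
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in "real" dataset: 200 rows of two numeric features.
rng = np.random.default_rng(0)
real_data = rng.normal(loc=[0.0, 5.0], scale=[1.0, 2.0], size=(200, 2))

# Fit the generative model to the real samples.
gmm = GaussianMixture(n_components=3, random_state=0).fit(real_data)

# Draw synthetic samples and combine them with the originals.
synthetic_data, _ = gmm.sample(n_samples=100)
augmented = np.vstack([real_data, synthetic_data])

print(f"real: {real_data.shape[0]} rows, augmented: {augmented.shape[0]} rows")
```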
On the downside, using generative AI for data augmentation may introduce synthetic data that does not accurately represent real-world scenarios, potentially leading to overfitting and reduced model generalisation. In anomaly detection, generative AI could generate novel anomalies not present in the actual data and cause false positives, compromising the detection system’s reliability.
Healthcare and pharmaceuticals
Generative AI can generate synthetic medical images and augment existing ones to enhance limited datasets, create training samples and validate machine learning models. This helps improve accuracy and is particularly helpful in scenarios with limited real patient data.
In pharmaceutical research, generative AI assists by creating and optimising molecular structures for new drugs. This enriches and accelerates the drug discovery process.
On the downside, generative AI models could inadvertently introduce biases or ethical concerns into medical research and decision-making processes and leak private data into the public domain.
RISKS AND UNKNOWNS

Along with the benefits, generative AI raises concerns about misuse and errors, and legal frameworks struggle to catch up with technological developments. The reality is that AI technology moves faster than most legislative and regulation processes. The OECD works with governments to mitigate these risks and ensure that AI benefits society. It helps policy makers to develop policies that ensure the ethical and responsible use of generative AI.
AI “hallucinations”, convincing but inaccurate outputs
When large language models, or textual generative AI, produce convincing but incorrect outputs, these are called hallucinations. Hallucinations are unintentional and can occur when a correct answer is not present in the training data. Beyond perpetuating inaccurate information, relying on hallucinated outputs can interfere with users’ ability to learn new skills and even lead to a loss of skills.
Fake and misleading content
While generative AI brings efficiencies to content creation, it also poses risks that must be considered carefully. One major concern is the potential for generating fake or misleading content. For example, generative AI can be used to create realistic-looking but entirely fabricated images or videos, which can be used to spread disinformation or deceive people. This poses challenges for the detection and verification of digital media.
Intellectual property rights infringement
Generative AI raises intellectual property rights issues, particularly concerning:
- unlicensed content in training data,
- potential copyright, patent and trademark infringement by AI creations, and
- ownership of AI-generated works.
Whether commercial entities can legally train machine learning models on copyrighted material is contested in Europe and the United States. Several lawsuits have been filed in the US against companies that allegedly used copyrighted works without authorisation to train their models and to make and store copies of the resulting outputs. The decisions in these cases will set legal precedents and shape the generative AI industry, from start-ups to multinational tech companies.
Job and labour market transformations
Generative AI is likely to transform labour markets and jobs, but exactly how is still uncertain and being debated among experts. Generative AI could automate tasks traditionally performed by humans, leading to job displacement in some industries and professions.
While some jobs might be automated or eliminated, generative AI could also transform existing jobs, enabling humans to perform tasks more efficiently and opening up new creative possibilities. Such a transformation would shift the skills workers need.
Addressing these risks will require combining technical solutions, policy frameworks, and responsible practices to ensure that generative AI benefits society while minimising potential harm.
Energy consumption and the environment
Generative AI requires tremendous computing power and consumes natural resources, leading to a significant ecological footprint. Poorly controlled use of generative AI in climate modelling and environmental simulations could unintentionally exacerbate ecological challenges and undermine conservation efforts.
Bias, stereotype amplification and privacy concerns
AI can analyse large amounts of data to extract precious information that humans could not see otherwise. However, the risk is an amplification of existing biases present in the training data. If the training data contains biases, such as racial or gender stereotypes, the generative AI model may inadvertently produce biased outputs, such as misleading or inappropriate content. This can perpetuate and even amplify societal inequalities and discrimination.
Generative AI also raises privacy concerns. By training on large amounts of data, these models may inadvertently capture and reproduce private or sensitive information. For example, a language model trained on text data may reproduce personal details or confidential information.
Potential future risks and concerns
In the near term, generative AI can exacerbate challenges as synthetic content with varying quality and accuracy proliferates in digital spaces and is then used to train subsequent generative AI models, triggering a vicious cycle. Over the longer term, emergent behaviours such as increased agency, power-seeking, and pursuing hidden sub-goals to achieve a core objective might not align with human values and intent. If manifested, such behaviours could lead to systemic harms and collective disempowerment. These could demand solutions on a larger, more systemic scale and are the topic of ongoing OECD work on AI Futures.
Given these risks, overreliance on, trust in and dependency on AI could cause deep, long-term harm to societies. And a concentration of AI resources in a few multinational tech companies and governments may lead to a global imbalance.