Risks and Unknowns

[Image: a smartphone displaying real and fake news]

Along with its benefits, generative AI raises concerns about misuse and errors. In several areas, legal frameworks have not caught up with technological developments. To mitigate these risks and ensure the technology benefits society, the OECD works with governments to enable policies that ensure the ethical and responsible use of generative AI.

AI “hallucinations”, or convincing but inaccurate outputs

When large language models, the text-based form of generative AI, produce incorrect yet convincing outputs, the result is called a hallucination. Hallucinations are unintentional and can occur when the correct answer is absent from the training data. Beyond spreading inaccurate information, they can interfere with the ability to learn new skills and even lead to a loss of skills.

Fake and misleading content

While generative AI brings efficiencies to content creation, it also poses risks that must be considered carefully. One major concern is the potential for generating fake or misleading content. For example, generative AI can be used to create realistic-looking but entirely fabricated images or videos, which can be used to spread disinformation or deceive people. This poses challenges for the detection and verification of digital media.

Intellectual property right infringement

Generative AI raises intellectual property rights issues, particularly concerning:

  • unlicensed content in training data,
  • potential copyright, patent, and trademark infringement of AI creations,
  • and ownership of AI-generated works.

Whether commercial entities can legally train machine-learning models on copyrighted material is contested in both Europe and the US. Several lawsuits have been filed in the US against companies that allegedly made and stored copies of copyrighted works, without authorisation, to train their models. The resulting decisions will set legal precedents and affect the generative AI industry, from start-ups to multinational tech companies.

Job and labour market transformations

Generative AI is likely to transform labour markets and jobs, though exactly how remains uncertain and debated among experts. It could automate tasks traditionally performed by humans, leading to job displacement in some industries and professions.

While some jobs might be automated or eliminated, generative AI could also transform existing jobs, enabling people to perform tasks more efficiently and opening new creative possibilities. Such a transformation would shift the skills that workers need.

Addressing these risks will require combining technical solutions, policy frameworks, and responsible practices to ensure that generative AI benefits society while minimising potential harm.

Energy consumption and the environment

Generative AI requires tremendous computing power and consumes natural resources, leading to a significant ecological footprint. Poorly controlled use of generative AI in areas like climate modelling and environmental simulations could unintentionally exacerbate ecological challenges and undermine conservation efforts.

Bias, stereotype amplification and privacy concerns

AI can analyse large amounts of data to extract valuable information that humans could not otherwise see. The risk, however, is the amplification of biases present in the training data. If the training data contains biases, such as racial or gender stereotypes, a generative AI model may inadvertently produce biased outputs, including misleading or inappropriate content. This can perpetuate and even amplify societal inequalities and discrimination.

Generative AI also raises privacy concerns. By training on large amounts of data, these models may inadvertently capture and reproduce private or sensitive information. For example, a language model trained on text data may reproduce personal details or confidential information.

Potential future risks and concerns

In the near term, generative AI can exacerbate challenges as synthetic content with varying quality and accuracy proliferates in digital spaces and is then used to train subsequent generative AI models, triggering a vicious cycle. Over the longer term, emergent behaviours such as increased agency, power-seeking, and pursuing hidden sub-goals to achieve a core objective might not align with human values and intent. If manifested, such behaviours could lead to systemic harms and collective disempowerment. These could demand solutions on a larger, more systemic scale and are the topic of ongoing OECD work on AI Futures.
Given these risks, overreliance on, misplaced trust in, and dependency on AI could cause deep, long-term harm to societies. A concentration of AI resources in a few multinational tech companies and governments may also lead to a global imbalance.