
AI’s potential futures: Mitigating risks, harnessing opportunities


AI’s unpredictable path

AI is evolving at an extraordinary pace, reshaping industries and societies worldwide. Its vast potential and the uncertainty of its future trajectory put AI at the forefront of global discussions about technological progress. The OECD report, Assessing potential future AI risks, benefits and policy imperatives, explores AI’s possible futures, analysing its risks and benefits and the actions needed to navigate its challenges responsibly.

AI today

AI is already transforming critical business functions across sectors, such as content recommendation, online sales and customer services, and is beginning to show potential in healthcare and climate action. Its capacity to analyse data, automate processes and enhance decision-making promises economic growth and societal advancement. Yet this transformative momentum brings challenges. Concerns about trustworthy development and use, equitable access and the societal impact of AI are rising. Policymakers play a critical role in ensuring the trustworthy development, deployment and use of AI while supporting innovation. As AI adoption accelerates globally, future-focused activities, such as leveraging strategic foresight methods, are essential to anticipate and shape its impact.

Governments and AI foresight

The report highlights the critical need for governments to consider AI’s medium- and long-term implications. While AI’s immediate benefits and harms are increasingly evident, its longer-term societal impacts remain uncertain. Experts point out that AI can deliver significant and even revolutionary benefits, but warn that risks could grow substantially without deliberate governance interventions. This has led to calls for governments to expand their anticipatory capabilities to consider AI’s future pathways.

Forward-looking approaches are designed to help governments prepare for challenges such as workforce transitions, regulatory gaps and AI’s growing influence in every aspect of our economies and societies. Strategic foresight enables timely interventions to increase AI’s benefits and minimise risks. Governments build strategic foresight capacities through programmes such as the OECD Government Foresight Community (GFC) and public sector foresight efforts. However, the report stresses that more needs to be done, including further investments to strengthen AI foresight capacities.

This is critical given AI’s rapid development, its many unknowns and the potential costs of falling behind. Foresight can help policymakers understand where and when to best intervene. This would help them to influence AI developments and pre-empt processes and trends that would be difficult or impossible to modify retroactively.

The benefits of AI: unlocking transformative potential

AI promises to improve lives and tackle some of the world’s most pressing challenges. Policymakers and society should strive to realise these ten key benefits of AI:

  1. Accelerated scientific breakthroughs
    AI is advancing drug discovery, climate modelling and materials science research. For example, AI models like AlphaFold have revolutionised protein structure prediction. These breakthroughs could lead to cures for diseases, sustainable energy solutions and other transformative innovations.

  2. Economic growth and productivity
    AI streamlines processes and enhances productivity, particularly in data-intensive industries like finance and logistics. Predictions for future economic gains from using AI vary, with some estimates ranging from a 1-7% rise in global GDP by 2033 to a speculative ten-fold increase over decades if hypothetical forms of artificial general intelligence are created.

  3. Poverty and inequality reduction
    While exacerbated inequality is a key risk, as discussed below, the potential economic gains from AI could also contribute substantially to poverty alleviation if these gains are shared widely. AI may also be able to support targeted interventions, such as improving agricultural yields in emerging and developing economies. Properly deployed and governed, AI has the potential to reduce poverty and bridge gaps in access to essential services.

  4. Addressing global challenges, such as mitigating climate change
    AI tools can monitor deforestation, predict climate patterns, model the impacts of different policy options and optimise energy consumption. AI has significant potential to help achieve many of the sustainable development goals.

  5. Enhanced decision-making and forecasting
    AI’s ability to analyse large datasets provides insights that improve policymaking, business strategy and crisis management. Better decision-making can lead to more effective governance and organisational success.

  6. Improved information production and distribution
    AI systems are increasingly integrated into a wide range of real-world applications, enabling automated data gathering. This could enable the creation of valuable new datasets while providing tools to distribute this data effectively. Democratised access to reliable information can empower individuals and improve societal outcomes.

  7. Revolutionised healthcare and education
    AI enables personalised healthcare treatments and tailored learning plans in education. This personalisation can lead to better health outcomes and greater educational attainment, particularly for underserved populations.

  8. Better job quality
    By automating repetitive and dangerous tasks and augmenting worker productivity, AI can allow workers to engage in more meaningful tasks. Improved job quality enhances workplace satisfaction and productivity. However, appropriate policies and measures need to be in place to ensure AI in the workplace is used in a trustworthy manner, as AI-enabled employee monitoring has the potential to undermine labour rights and job quality.

  9. Empowered citizens and civil society
    AI enables citizens to access tools for informed decision-making, while civil society organisations use AI to monitor governance and advocate for change. This empowerment strengthens democracy and promotes social accountability.

  10. Transparent and accountable institutions
    AI can analyse government data to identify inefficiencies and ensure regulatory compliance. Increased transparency fosters trust in institutions and improves the effectiveness of public policies.

These benefits represent a vision of AI as a force for good, capable of driving progress and improving lives worldwide.

Ten risks that AI poses

Seizing AI’s benefits and achieving desirable AI futures necessitates mitigating its risks. The OECD report also highlights ten critical risks posed by AI that are already present today and require immediate and sustained attention:

  1. Sophisticated cyber threats
    AI can automate and enhance cyberattacks, lowering the technical expertise that malicious actors need. For instance, AI can generate convincing phishing scams or identify vulnerabilities in critical systems such as energy grids. AI-enabled cyberattacks could compromise national security, disrupt infrastructure and undermine trust in digital systems.

  2. Manipulation and disinformation
    AI-generated content, such as deepfakes and synthetic text, is becoming indistinguishable from reality. In political campaigns, disinformation can be scaled up rapidly using AI tools. In the future, generating entire synthetic histories or online communities may be possible. As a result, trust in media and democratic institutions could erode, destabilising societies and fuelling division.

  3. Competitive race dynamics
    Companies and countries are racing to develop AI systems, and in a competitive landscape, they may prioritise speed over safety. This race can lead to inadequate testing and deployment of potentially harmful systems. Unsafe AI applications could proliferate without robust oversight, causing societal and economic harm.

  4. Misaligned objectives
    AI systems’ objectives are difficult to specify in ways that fully align with human intent and values. For example, an AI tasked with reviewing resumes may learn from past hiring data to repeat historic patterns of discrimination. This misalignment could lead to harmful decisions or actions, with potentially catastrophic consequences, particularly as AI becomes more autonomous and embedded in societies and economies.

  5. Excessive power concentration
    A few companies based in a small number of nations dominate AI research and development, including access to the computing infrastructure needed to train highly advanced AI models. This concentration risks deepening economic and geopolitical inequalities and creating dependency on a few actors. Taken further, some experts argue, those with market control over key AI systems or ecosystems could use this advantage to strengthen political power, potentially facilitating wide-scale surveillance, subjugation or authoritarianism.

  6. AI incidents and disasters in critical systems
    AI integration in healthcare, transportation, finance and energy increases the potential for very serious failures, such as incorrect diagnoses, traffic accidents, collapse of financial markets, or failure of water or electricity supply. Errors in these high-stakes environments could result in loss of life, financial costs and public distrust.

  7. Invasive surveillance and privacy violations
    AI’s ability to process vast datasets enables invasive surveillance, such as tracking individuals through facial recognition or real-time online censorship. Unchecked surveillance can undermine personal freedoms and, in extreme cases, lead to authoritarian misuse.

  8. Government incapacity to keep up with technological change
    Regulations struggle to keep pace with AI advancements, sometimes leaving critical ethical and safety issues unaddressed. Without updated policies, the public and businesses may face unclear responsibilities and liabilities.

  9. Opacity and accountability issues
    Many AI systems are “black boxes”, making their decisions hard to understand or explain. This lack of transparency erodes accountability, making it difficult to identify and correct harmful behaviours. This cross-cutting risk can contribute to and exacerbate many other AI risks.

  10. Exacerbation of inequality
    Wealthier countries and individuals can access and leverage AI more readily, leaving others behind. Without intentional policies, AI could widen the gap between the global rich and poor, reducing opportunities for equitable growth.

These risks highlight the urgent need for comprehensive policies to govern AI responsibly and mitigate its potential harms.

Policy actions to steer societies towards a better AI future

Policymakers are in a unique position to shape AI’s future. The OECD report highlights meaningful steps they can take to guide beneficial AI development. To help achieve positive outcomes while mitigating AI risks, the report proposes ten priority actions:

  1. Establish clear rules, especially on AI liability
    By clarifying who is responsible when AI systems cause harm, policymakers would reduce uncertainties for developers, businesses and consumers, ensuring accountability and trust in AI technologies.

  2. Restrict or prohibit harmful AI uses
    “Red line” applications of AI, such as those that can be used to severely infringe on human rights or are prone to misuse, should be explicitly banned. This would ensure that AI development aligns with societal values.

  3. Promote transparency through disclosure
    Transparency fosters trust and helps regulators monitor compliance. AI system developers should be required to disclose critical information about how they built and tested their models, especially for high-risk applications.

  4. Implement risk management across AI lifecycles
    Organisations deploying AI should follow rigorous risk management protocols throughout a system’s lifecycle. This includes assessing potential harms before deployment and monitoring performance against appropriate metrics over time.

  5. Mitigate competitive race dynamics
    Governments should collaborate to establish global standards for AI safety that prevent competition-driven unsafe practices. Cooperative approaches can also help guide innovation for the benefit of all.

  6. Invest in AI safety research
    Policymakers should prioritise funding for AI safety and trustworthiness research that focuses on explainability, interpretability, alignment with human values, accountability, and evaluation and assurance processes for AI capabilities that can lead to incidents or dangerous uses.

  7. Prepare the workforce for AI disruptions
    As AI transforms industries, education and training initiatives must ensure that workers can adapt. Reskilling programmes and lifelong learning opportunities will be essential to address job displacement and emerging skill demands. Policymakers should also give greater consideration to potential long-term scenarios in which highly advanced AI systems fundamentally alter wage and employment levels by displacing labour.

  8. Empower civil society
    Governments should engage stakeholders, including non-governmental organisations and communities, to build appropriate trust both in AI and in governments’ capacity to lead AI policy. Civil society is a critical advocate for using AI responsibly and monitoring its societal impacts, and should also be empowered to leverage AI tools for public benefit.

  9. Prevent excessive power concentration
    A few major companies and countries play an outsized role in the AI industry and AI policy. This could negatively impact the structure and efficient functioning of markets and lead to the concentration of political power by state-based actors and influential tech leaders. To prevent monopolistic practices and encourage diverse perspectives in AI development, policymakers must ensure fair competition, bolster regulatory capacity, support digital public goods and technologies available to all, and promote international cooperation in AI governance.

  10. Target policies to specific AI benefits
    Focused actions, such as incentivising AI solutions for healthcare or climate change, can amplify AI’s positive contributions and address urgent global challenges.

The future of AI is ours to shape: outcomes will depend on the policies and frameworks we implement today. The report emphasises the importance of proactive, transparent and inclusive policymaking to guide AI toward desirable futures. By implementing these ten priority actions, governments can help ensure that AI development serves humanity’s best interests.

The empowering quality of strategic foresight

The future of AI is exciting and holds many unknowns. By reflecting on the potential futures of AI, policymakers empower themselves with a tool to anticipate future scenarios in critical areas, such as businesses and the labour market. This could pave the way for more timely government actions that steer AI development away from less desired scenarios and towards more positive outcomes.

This report sets the stage for more desirable AI futures. By focusing on long-term thinking, equitable distribution and robust governance, policymakers can mitigate risks and amplify benefits, potentially empowering everyone.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.