Products driven by Artificial Intelligence (AI) are already a crucial part of most people’s daily routines, sometimes in ways they may not realise. As AI rapidly permeates our economies, societies and environments, it brings tremendous opportunities.
But there are also risks. AI systems such as GPT-4, online recommendation engines and smartphone facial recognition were designed with good intentions, but when used the wrong way, they can cause harm.
Unchecked, AI can create real risks and mistrust
AI risks fuel social anxieties worldwide, with some already materialising: bias and discrimination, the polarisation of opinions at scale, the automation of highly skilled jobs, and the concentration of power in the hands of a few. Failure to address these risks in a timely manner could exacerbate the trust issues our societies already face, increasing the pressure on our strained democracies.
AI developments outpace policy
Rapid developments in AI make it hard for policymakers, regulators and other governance bodies to keep pace and ensure that the right policies and governance frameworks are in place. The recent rise of ChatGPT and the mainstream adoption of generative AI are just one example of this.
AI is global and impacts everyone
As a general-purpose technology, AI raises issues that no single country or economic actor can tackle alone. The world needs coordinated responses to the use of AI, grounded in international, multidisciplinary and multi-stakeholder cooperation, to ensure that the development and use of AI benefit people and the planet. This is where the OECD plays a critical role.