A roadmap for an AI learning campaign
Artificial Intelligence (AI) holds great promise for advancing global prosperity and social good, including by addressing climate change and improving healthcare. NATO has recognised AI’s importance for defence and security, and the need to counterbalance China’s aggressive AI strategy looms large. Yet despite these compelling benefits, AI left unchecked can threaten safety and fundamental rights and cause other harms.
These concerns have escalated with the meteoric rise of ChatGPT and other large language models (LLMs) and AI’s pervasiveness throughout society. Thousands have signed an open letter calling for a pause in developing certain powerful AI models. Regulators have been asked to halt or investigate ChatGPT and other generative AI. The Italian Data Protection Authority temporarily banned ChatGPT and acted against Replika. French, Spanish, and Canadian authorities launched ChatGPT investigations. The European Data Protection Board formed a ChatGPT task force, and the OECD unveiled policy considerations. This adds to the already mounting AI legal cases and policy developments, including proposed Chinese generative AI rules, renewed US legislative efforts, and a call for an international summit.
Society faces the grand challenge of developing frameworks and tools that unlock AI’s many benefits and safeguard individual rights, security, and overall well-being. Policy makers and other luminaries debate the best approach. Meanwhile, AI’s expanding consequential impacts, both positive and negative, intensify the urgency to find solutions.
To tackle this grand challenge, society must unite immediately behind a global AI learning campaign that draws together diverse expertise and viewpoints, including historically under-represented people. The campaign would optimise existing trustworthy AI tools and accelerate the creation of new ones, all in a way that better aligns law and policy with science and promotes competition and responsible investment. Additionally, the campaign would increase society’s knowledge about AI, empowering and incentivising individuals and organisations to make good choices and advance socio-technical development. It would also prepare the workforce to pivot toward new AI jobs as existing ones are displaced. Finally, it would better equip policy makers to realise the benefits of cross-border harmonisation, even as regulatory pathways diverge.
Advancing AI Tools
While views may differ about an “AI pause,” consensus has emerged on the urgent need for more tools and frameworks to mitigate AI’s risks and harness its benefits. The open letter urges AI labs to “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” This makes perfect sense. The challenge is achieving it. And a global AI learning campaign can help.
To begin, collective focus should sharpen on optimising and improving existing tools. The OECD.AI Catalogue of Tools and Metrics for Trustworthy AI (OECD AI Tools Catalogue) provides an excellent foundation for this work. Organisations should promptly add their AI tools to this catalogue. Tools can be just about anything that advances trustworthy AI, including standards, measurement techniques, benchmarks, impact assessments, and other similar processes and methodologies. The US Commerce Department has also requested comments on AI accountability, including currently available resources. Organisations with relevant information should respond.
In parallel, independent experts must evaluate available tools to determine their suitability and identify priorities for modifications and new tools. The evaluation should also assess whether the AI tools can be reasonably accessed and deployed by governments, non-profits, and commercial organisations, including Small and Medium-sized Enterprises (SMEs). This is essential for the tools to be effective in practice. The OECD, the Global Partnership on AI, standards organisations, and other similar institutions could organise impartial sandboxes or other platforms for this work. The technical experts should collaborate with their policy counterparts and other specialists to ensure that the evolving evaluation criteria factor in policy and other priorities.
Aligning Law and Policy with Science
The open letter also calls for new AI laws. While essential, crafting new laws for emerging technologies is difficult and often time-consuming. For instance, the United States has not yet addressed the pressing need for federal privacy legislation. The European Union has led AI regulatory development with the draft AI Act, introduced in April 2021. Although proactive, the EU AI Act probably will not pass before late 2023 or become effective before 2026. Already, some EU parliamentarians have called for updates to address very powerful AI models. In short, laws struggle to keep pace with technology.
Collaboration among policy makers, technical experts, and other stakeholders should increase to help reduce this lag. As UNESCO and others have highlighted, policy makers desire more AI education, and collaboration can help. Additionally, multistakeholder collaboration should increase the likelihood that the resulting AI laws and policies will successfully protect individuals and advance the OECD AI Principles. Collaboration can take many forms, including consultations, public meetings, and advisory committees.
Increasing Society’s AI Knowledge
Along with advancing AI laws and tools, AI literacy must improve. A global AI learning campaign can provide organisations and individuals with more information, empowering and incentivising them to make better decisions. As the FTC recently emphasised, the focus must shift to how AI should be used and not remain simply on how it can be used. Teaching and encouraging responsible behaviour has the potential added benefit of reducing the need for enforcement. This is important since enforcement can be expensive and time-consuming, and aggrieved parties often face uncertainty and delay when seeking remedies. With fewer cases, enforcement authorities also should have more resources to address significant violations.
Educating Organisations About LLMs and Beyond
The AI ecosystem has grown quickly and should embrace more SMEs, non-profits, governments, and other organisations. Diversity within this ecosystem is important, as it expands economic opportunities, adds more viewpoints, and fosters competition. While motivated and well-intentioned, many organisations that wish to participate in the AI ecosystem may lack sufficient information about the OECD AI Principles and the emerging tools and policies that can assist them in developing responsible AI.
A global AI learning campaign can reduce this knowledge gap. It can provide organisations with understandable information about responsible AI, such as the OECD AI Tools Catalogue, the OECD dashboards of emerging national policies, and the NIST Trustworthy and Responsible AI Resource Center. The campaign can highlight opportunities to strengthen AI trust across sectors. It also can explain how responsible AI can lead to broader adoption and other benefits, further motivating good behaviour. Investors and others must understand how responsible AI advances ESG and how due diligence can shed light on AI trustworthiness.
The global AI learning campaign should engage AI users too. The prevalence of LLMs illustrates this need, particularly given their capacity to generate large amounts of content and software. For instance, organisations must learn how to assess whether LLM outputs are sufficiently safe, accurate, fair, and reliable for their intended purposes. Organisations should inform recipients when AI creates content and comply with applicable attribution requirements. Procedures should exist for reviewing LLM disclosures, fact-checking AI-generated content, and confirming that such content does not violate applicable law or third-party rights. The importance of these steps increases when AI-generated content is integrated into products, may be unlawful, or may imperil individual rights, property, or well-being.
Organisations should develop usage guidelines to help translate important LLM learnings into practice. The guidelines should outline how LLMs and AI-generated content can be used, in addition to having documentation, training, and other relevant procedures. Organisations have faced similar challenges when deciding how to use open-source software. Open-source software policies, which are now commonplace, may provide insights for managing LLM usage.
AI increasingly touches many aspects of people’s lives. Though many AI applications are beneficial, others contain deepfakes, misinformation, or disinformation; threaten privacy or mental health; or pose other harms. Additionally, many children use AI-enabled smart toys and platforms. As with adult-oriented AI applications, the safety and trustworthiness of these offerings vary.
Technical experts are working on AI safeguards, such as watermarking and other techniques, to help identify AI-generated content. Policy makers have also responded, as seen in proposed amendments to the draft EU AI Act and FTC warnings about deceptive AI and keeping “AI claims in check.” President Biden discussed AI risks in a recent Wall Street Journal editorial and in other remarks calling for bipartisan technology legislation.
While not a substitute for technical tools and laws, a global AI learning campaign can help empower individuals to protect themselves against harms and responsibly enjoy AI’s benefits. The Pew Research Center recently confirmed that public understanding of AI is still evolving. The campaign can increase this understanding and guide people toward responsible AI applications and uses. People also need to know how to protect and enforce their rights, and the campaign can convey this information too. Additionally, increasing AI public awareness could mobilise more support for much-needed AI legislation and make the political process more inclusive.
Workforce and Student Preparation
Policy makers and many others recognise the need to prepare the workforce for AI. AI may give rise to new jobs. However, Goldman Sachs recently projected that automation could replace up to 300 million jobs, while increasing “global GDP by 7% annually over a 10-year period.” This projection anticipates that most affected workers will be able to redirect at least some of their time to more productive tasks.
The global AI learning campaign can help society adapt. Specifically, it can expand awareness of new AI job opportunities. Additionally, it can publicise and possibly enhance ways to re-skill workers. Educational institutions should participate in the campaign to teach students how to use AI responsibly and prepare them to succeed in the workforce.
Achieving Cross-Border Harmonisation
The campaign should encourage cross-border harmonisation, which can be challenging since AI regulatory approaches differ among jurisdictions. Cross-border harmonisation has been a major stumbling block for trans-Atlantic data flows, despite the estimated US$7.1 trillion US-EU economic opportunity at stake. Policy makers must learn from this experience. These lessons should be applied to ongoing AI cross-border harmonisation efforts organised by the US-EU Trade and Technology Council, the OECD, GPAI, and other organisations.
Humans can make AI and other technologies human-centric
Although AI presents many challenges, human ingenuity holds the keys to unlocking its benefits in a safe and trusted way. A global AI learning campaign organised by and for people should foster greater human-centric AI development and empower and incentivise better AI decision-making. It should also inform law-making and foster competition. Given the urgency, the campaign must start now. Stakeholders should unite to fund and implement the campaign, leveraging the many good ongoing initiatives. The campaign must be sustained to help society keep pace with technology and inspire more socio-technical development.
Finally, while AI has captivated world attention, it is just one of many transformative technologies. Quantum computing, space technology, and other developments also hold great promise for society, but require more multi-disciplinary collaboration, policy, and guardrails. Lessons learned through a global AI learning campaign can inform strategies for unlocking the benefits and mitigating the risks of these breakthroughs, too.