Civil society

Bridging the gap in AI governance and the rule of law: The Athens Roundtable 2023


As the landscape of Artificial Intelligence (AI) evolves at an accelerated pace, we urgently need to consider how it interacts with the rule of law. With foundation models that can process vast amounts of information and churn out different forms of credible content across a range of domains, broadly referred to as generative AI, the quest for aligning AI with fundamental values and legal norms has come to a critical juncture. 

Events such as the launch of GPT-4 accelerated radical changes, including the increased concentration of power in big technology corporations, shifts in geopolitical dynamics, regulatory capture, the pervasiveness of generative AI in consumer digital offerings, and growing threats to information ecosystems and the democratic order.

On November 30th and December 1st, The Athens Roundtable on AI and the Rule of Law will unpack developments that have redefined AI governance in 2023. The convening is organized by The Future Society and co-hosted by prominent organizations: the OECD; UNESCO; the NIST-NSF Institute for Trustworthy AI in Law & Society (TRAILS); the Institute for International Science and Technology Policy (IISTP); IEEE; Homo Digitalis; the Center for AI and Digital Policy; Paul, Weiss; the Greek Embassy in the United States; and the Patrick J. McGovern Foundation. The Athens Roundtable promotes cross-stakeholder dialogue to enhance understanding of AI risks and responsible innovation among policymakers, AI developers and deployers, legal and judicial actors, civil society, and others.

Since 2019, The Athens Roundtable has advanced its mission to inspire future-proof policies, effective implementation, and enforcement that uphold fundamental rights, the democratic order, and the rule of law across jurisdictions. Now more than ever, assessing how the underlying context of AI governance proposals affects those objectives is critical. 

In recent years, governments and corporations have dominated spaces where policy discussions take place and steered the course of AI governance, each operating with its own incentives. Governments strive for economic prosperity and national security against the backdrop of fast-paced geopolitical dynamics that could influence their future global standing. Companies, on the other hand, prioritize shareholder value. Those incentives are not necessarily aligned with broader societal interests, which could relegate social well-being to the margins.


We need civil society at the heart of global dialogues on AI governance.

Governments and corporations are critical actors in AI governance, but we need civil society at the center. Civil society organizations’ main purpose is to advocate for and protect the public interest by actively engaging with those powers and exercising oversight. Civil society voices can play a powerful role in counterbalancing government and corporate positions by expanding the dialogue to include broader societal interests, including the rule of law, and factoring those interests into the legal and regulatory frameworks that will shape AI governance.

In this spirit, The Athens Roundtable on AI and the Rule of Law has been a powerful platform to convene stakeholders from across sectors and jurisdictions and help shape robust and sustainable solutions. We need diverse voices to scrutinize how foundation models and generative AI intersect with our legal, judicial, and regulatory systems. 

Join us for this year’s convening to take stock of recent initiatives on the national and international landscape across jurisdictions and discuss their impact on the future of AI governance.

Beyond convening, we must ask the hard questions.

As national and international actors rush to catch up with the speed of AI development, there are looming questions about effective governance. What safeguards, liability structures, or potential restrictions should be put in place to ensure AI is safe and beneficial to society? As a broad, multi-stakeholder collective, how can we build tangible checks and balances for generative AI tools from development to deployment? Moreover, in a global arena with mounting incentives for competition and division, how should international coordination and multistakeholder engagement be harnessed to counter the geopolitical tensions and market forces shaping AI’s trajectory?

The Fifth Edition of The Athens Roundtable is framed around these questions to help stakeholders leverage current policy windows and pass effective governance mechanisms. We share the vision that AI has the potential to improve the welfare and well-being of people, increase innovation and productivity, and help respond to key global challenges, such as advancing the Sustainable Development Goals and combating the climate crisis. At the same time, we must urgently examine the risks foundation models and generative AI pose to fundamental human values, human safety, and the rule of law. Discussions will probe proposals for transatlantic coordination on AI safety, metrological approaches, liability regimes, potential restrictions on AI use, and the potential role of international treaties, among other topics, illuminating the multi-faceted regulatory, legal, and compliance considerations at play.

Our goal is to forge paths toward impactful AI governance through collective intelligence and multistakeholder action. We hope to facilitate direct and indirect impact towards the following outcomes:

  • First, to promote evidence-based, national and international strategies for governing foundation AI models from development to deployment.
  • Second, to drive cross-border, multi-stakeholder coordination focused on institutional innovation and a rights-based approach to AI safety. 
  • Third, to align AI development and deployment with the rule of law through proactive commitments, accountability, regulation, and legal compliance.

Promote evidence-based national and international AI strategies.

This year’s edition will consider how national and international AI developments impact political, judicial, and legislative action and how that will affect the rule of law, for better or worse. This year, the governance of AI technologies has become more prominent on national policy and legislative agendas. 

In the United States, there have been AI-related congressional hearings, proposed bills and draft legislation, and voluntary commitments made by AI labs in partnership with the White House. In Europe, the Commission, the Council, and the Parliament are nearing final discussions on a comprehensive AI Act, with important changes proposed in the Parliament’s version, such as a governance regime for foundation models and an AI office.

In Brazil, the Senate has proposed a comprehensive AI draft bill inspired by the European text and has paid increased attention to generative AI’s potential to exacerbate misinformation. In China, specific regulations for generative AI have been introduced. In Africa, several countries have launched national AI policies, most recently Rwanda, whose Cabinet approved the National Artificial Intelligence Policy.

As jurisdictions grapple with the challenge of governing AI through their legislative and executive powers, courts face pivotal cases that could transform how we develop and deploy AI technology. Emblematic cases include a US class action against Microsoft and OpenAI alleging that Copilot reproduces parts of licensed code without due attribution, and a UK case against companies that scrape copyrighted images from the web to train their image-generation tools. These cases demonstrate the judiciary’s role in protecting individuals’ fundamental rights even as countries debate clearer and more specific AI-centric laws.

Drive cross-border, multistakeholder coordination

Throughout the convening, we will examine ongoing efforts for international coordination, including the challenges these efforts face, and effective cross-border solutions to AI risks. The potential impact of AI tools has drawn global attention, with governments coming together to discuss how to face the mounting challenges posed by the diffusion of foundation models and generative AI. 

For example, Japan leveraged its G7 Presidency to establish the G7 Hiroshima AI Process, which aims to develop a concerted approach and harmonized rules for AI governance, focusing on foundation models and generative AI. The group proposed a “code of conduct” for organizations developing these cutting-edge AI technologies, calling it “one of the most urgent priorities for global society.” This code is expected to include guiding principles for safety protocols, risk-mitigation strategies, public reporting of model capabilities, and international standard-setting. Questions remain about how it will be created, by whom, its scope and reach, and whether it will be binding.

In the field of international security, the United Nations Security Council made history on July 18 by bringing AI to the spotlight under the UK’s Presidency. The Council held a high-level briefing focused on the opportunities and risks of AI for peace and security, featuring insights from distinguished experts. Calls to create a UN high-level advisory body to tackle AI risks to global peace surfaced the need for a pivotal forum involving the United States and China. 

In addition, experts pointed to the inherent fragility of technologies emanating from a limited circle of industry players and the hazards they pose to global security, stressing the need for robust evaluation as a potential solution to regulatory capture. The discussion brought to light the need for ethical and responsible frameworks to govern AI internationally and for operationalization and enforcement. 

Finally, the UK’s upcoming Global Summit on AI Safety in November 2023 has also attracted considerable media attention. The Summit is expected to bring together “[representatives of] key countries, leading tech companies and researchers to agree on safety measures to evaluate and monitor the most significant risks from AI”. It is worth noting there is no mention of civil society representatives in the statement.

For these initiatives and others to be effective, it is critical to look at international responses to past challenges and whether existing intergovernmental governance structures can serve as inspiration for international AI governance—such as TFS’s proposal dating back to 2018 of an “IPCC for AI”, corroborated by a recent proposal by Eric Schmidt et al. Additionally, such discussions must urgently be brought to spaces where civil society has a seat at the table. 

Align AI development and deployment with the rule of law

A unique feature of The Athens Roundtable is its multistakeholder approach, which elevates the rule of law to the heart of discussions: a rights-based approach to AI governance in a field clouded by conflicting interests, from geopolitical to market-driven ones. The rule of law is the bedrock of any flourishing society and is essential to fundamental rights and freedoms, democracy, social justice, security, and economic development.

Therefore, when discussing holistic AI governance solutions, it is important to do so through the lens of the rule of law to ensure this principle is upheld and protected, now and into the future.

Register now for The Athens Roundtable 2023: 30 November – 1 December

We hope you can join us for this timely dialogue. The convening will be held in Washington, D.C. and live-streamed to a global audience. Online participation is open to all, and we have limited seats for in-person attendance. Register via the link below to secure a spot, whether in person or online.

[REGISTER HERE]



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.