2023: A pioneering year for AI and the OECD


2023 was a pivotal year, with countries making significant strides in AI policy as the technology continued to reshape our world. New developments reflected a growing consensus on the need for robust, ethical, and inclusive policies and frameworks to guide AI’s development and deployment.

In 2023, the OECD.AI team helped OECD members and other countries to further their commitment to shaping artificial intelligence in a responsible and beneficial way. In case you missed it, here is a quick review of 2023’s achievements.

OECD governments came together for interoperability and research

AI legislation and regulation need a common definition of AI to establish a solid foundation and promote interoperability between jurisdictions. The OECD’s definition of AI systems, updated in November 2023, has been indispensable in shaping ongoing work in Brazil, the EU, the US and other countries.

A significant part of our work is about providing research and analysis that guides the work of governments and policymakers. This year, we focused on generative AI, large language models, regulatory sandboxes, and the biennial report on the state of implementation of the OECD AI Principles.

Responses to generative AI

As ChatGPT took centre stage, we responded by launching a portal dedicated to our work on generative AI; we also maintain one on AI compute and climate. In addition, we published 48 blog posts on the AI Wonk, many thanks to contributions from the OECD.AI Network of Experts.

We helped support the Hiroshima Process to develop G7 Guiding Principles and Code of Conduct for AI, developed a G7 report on opportunities and challenges related to generative AI, and launched a call for partners in September for the Global Challenge to Build Trust in the Age of Generative AI.

Tools, metrics and an incident monitor to support efforts for trustworthy AI

The OECD AI Policy Observatory added new and enhanced features. This year, the focus was on providing the AI community with tools, metrics and an incident monitor to make AI more trustworthy.

In March, we released our Catalogue of Tools & Metrics for Trustworthy AI to help actors build and deploy trustworthy AI systems. Anyone can submit a tool or a tool use case to the catalogue. To date, it hosts 700 tools and over 100 metrics, ranging from code for developers to teaching methodologies.

More recently, at the Paris Peace Forum, we launched AIM: The AI Incidents Monitor, a significant first step in providing policymakers with the facts and data they need to design policies for trustworthy AI.

AIM fills a critical space in the AI policymaker’s toolbox as a resource for reporting, documenting, and researching incidents and potential hazards related to AI. Using a media monitoring platform that scans over 150,000 news sources globally in real time, AIM collects upwards of one million news articles daily to identify incidents. By documenting AI incidents, the tool gives policymakers, AI practitioners, and stakeholders worldwide crucial insights into the incidents and hazards through which AI risks materialise. AIM was developed by the OECD.AI Expert Group on AI Incidents, with support from the Patrick J. McGovern Foundation.

We launched a new project with our colleagues working on multinational enterprises (MNEs), Responsible Business Conduct (RBC) for Trustworthy AI, which will guide companies on practically applying RBC in the AI value chain.

Much of our work revolves around supporting our countries’ commitments in conferences and meetings. To highlight a few, we participated in the UK AI Safety Summit and the Athens Roundtable, and hosted a session at COP28. We also hosted two in-person meetings of AIGO and the OECD.AI Network of Experts, and published discussions from the new AI Futures group on our YouTube channel.

More to come

2024 will surely bring more benefits and risks from AI, and we are poised to work with our members to stay at the forefront of the trustworthy AI journey. We plan to enhance the quantitative and qualitative resources on the OECD.AI Policy Observatory. This includes taking the AI Incidents Monitor (AIM) to the next level, enriching the Catalogue of Tools & Metrics, expanding the database of national AI policies, and more. In a collaborative spirit, we will venture further into AI safety and governance and work to strengthen the international partnerships that form the foundation of the OECD’s work.

Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.