Call for Partners: the Global Challenge to Build Trust in the Age of Generative AI

Click here to download the Global Challenge flyer.
Join us in building a global competitive challenge to promote trust by equipping governments, organisations, and individuals to be resilient in the era of scalable synthetic content. Ready to get involved? Complete our Global Challenge Partner Inquiry form. Have questions? Contact us at

In just a few months, generative AI has gone from technical lab discussions to daily front-page news. It has the potential to revolutionise industries and society and is already used in a variety of sectors to create individualised and scalable content, automate tasks, and improve productivity.

However, generative AI can also be misused to produce disinformation, deepfakes, and other manipulated content, with severe negative consequences. This can provoke serious social, political, and economic repercussions at scale, such as distorting public discourse, creating and spreading conspiracy theories and other disinformation, influencing elections, distorting markets, and inciting violence. It is crucial to mitigate these risks and build resilience against generative AI’s misuse. One of the most critical threats of generative AI is to the protection and promotion of information as a public good, as conceptualised in the Windhoek+30 Declaration, whose principles were adopted by UNESCO Member States in 2021.

Recognising the transformative and disruptive potential of generative AI, the G7 has encouraged international organisations, such as the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on Artificial Intelligence (GPAI), to promote international co-operation and explore relevant policy developments and practical projects, including on issues related to disinformation.

In this context, the OECD, IDB, GPAI, the IEEE Standards Association and UNESCO are joining forces with AI Commons and VDE to advance global collaboration, trust, and transparency with regard to generative AI. A problem of this magnitude requires collective action, and we are seeking partners to join us in forging a path for innovative solutions.
A global challenge to build trust

Generative AI’s potential impact and risks transcend national borders, demanding a global scope for new policy and technology solutions. The OECD and its initial design partners AI Commons and VDE are now working with GPAI, IDB, IEEE SA, and UNESCO to form an open, competitive Global Challenge to Build Trust in the Age of Generative AI that will be conducted by a unique coalition of multi-lateral organisations, governments, companies, academic institutions, and civil society organisations.

This challenge will bring together technologists, policy makers, researchers, experts, and practitioners to put forth and test innovative ideas that promote trust and counter the spread of disinformation exacerbated by generative AI. In reaching its goals, the challenge will provide tangible evidence about what works and what doesn’t, yielding proven approaches that can be adapted and scaled across the world.

The Global Challenge differs from similar efforts in its specific focus on disinformation and in its global scope, both in where its solutions can be applied and in who is encouraged to get involved.

Join us in creating this global initiative

In addition to the Challenge Teams who submit ideas and prototype solutions, our success depends upon the collaboration of a variety of partners. We are seeking partners from governments, non-profits, companies, universities, and foundations to catalyse our efforts. Potential roles include:

  • Implementers. Work with the design team to build out the challenge’s functional components; provide structure, know-how, coordination, and ongoing operations.
  • Advisors. Provide ongoing advice on the challenge’s structure, execution, and refinement based on your experience in AI, disinformation, public or private policy development, or holding challenges.
  • Communications and amplification partners. Help raise awareness through relevant networks and communities to find candidates for the challenge and diffuse results and lessons learned.
  • Jury members. Leverage relevant expertise to evaluate ideas and proposals to help identify those that are both feasible and innovative, with high potential to make an impact.
  • Sponsors. Drive progress by offering an operational budget, prize money, and in-kind contributions (e.g., staff support, mentorship, software, compute power).

Does this sound like a good fit for your organisation? Complete our Global Challenge Partner Inquiry form. Questions? Get in touch at

See additional details on our flyer.
Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.