Democratising trust: Open source tooling for safe, secure and trustworthy AI

An official side event of the India AI Impact Summit 2026

Date and time: 17 February, 1:30 PM to 2:25 PM (IST)
Venue: Room 16, Bharat Mandapam Convention Centre, New Delhi, India
Recording available on YouTube

Overview

As AI systems grow in sophistication and scale, the tools we use to ensure their safety should be equally robust and, crucially, universally accessible. The India AI Impact Summit emphasised that meaningful innovation depends on a global ecosystem where every nation has the building blocks to design, measure, and evaluate AI with confidence.

Towards an open source ‘trustworthiness’ layer

In the early days of personal computing, open-source antivirus software helped make digital security accessible. By turning a complex technical challenge into a practical tool, it gave non-experts the confidence to use computers for work, finances, and everyday life.

Today, we face a similar moment with AI – though the stakes are different. Trustworthy AI is not only about preventing security breaches, but about understanding and verifying how systems behave and what outputs they produce. An AI system that offers biased medical advice, generates inappropriate content, or fabricates legal precedents may not be “hacked”, but it is still unreliable and unsafe to use.

While tools to assess AI systems do exist, many are proprietary or require significant technical expertise. This session argued that if AI systems are to be truly trustworthy, these capabilities must be more widely accessible. There is a need for open-source tools that allow a broad range of users – not only technology companies and specialists – to test, measure, and assess whether AI systems behave as intended and respect legal frameworks and fundamental rights.

Session focus

Co-hosted by the OECD, the India AI Impact Summit, Mozilla, ROOST, the UK AI Security Institute, and Mistral AI, the panel explored the practical landscape of open source tooling for trustworthy AI. The experts:

  • Took stock of the current open source tooling ecosystem, highlighting key gaps and challenges.
  • Showcased open source tools that enable both technical and non-technical stakeholders to monitor and assess AI safety, security, and trustworthiness.
  • Examined how open source approaches can help build capacity in underrepresented regions and communities.
  • Presented the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI and launched an open call for submissions of open source tools, inviting AI practitioners worldwide to contribute. Selected tools are to be featured on OECD.AI and promoted through social media and other communication channels.

Panellists:

The session was moderated by Karine Perset, OECD Deputy Head of Division on Artificial Intelligence and Emerging Digital Technologies, who was joined in the conversation by fellow panellists.

Call for Submissions: Open source tools for Trustworthy AI

As AI systems scale, trust can’t remain locked behind proprietary tools or limited to technical experts. It needs to be open, accessible, and global.

Together, the OECD, Mozilla, the India AI Impact Summit, Mistral AI, the UK AI Security Institute and ROOST are inviting AI practitioners, researchers, and builders worldwide to submit open source tools that help assess, measure, and monitor AI safety, security, and trustworthiness.

Why Submit?

  • Selected tools will be featured on OECD.AI’s Catalogue of Tools & Metrics for Trustworthy AI
  • Selected tools will be promoted globally through OECD and partner channels
  • Submissions will contribute directly to shaping the upcoming Trusted AI Commons – a key deliverable of the India AI Impact Summit

Background

This event is a cornerstone of the Summit’s “Safe and Trusted AI” Chakra. Tools submitted through the call for submissions will inform the forthcoming “Trusted AI Commons”.

The Catalogue of Tools & Metrics for Trustworthy AI is a platform where AI practitioners from around the world can share and compare tools, build on each other’s efforts, and create global best practices to accelerate the implementation of the OECD AI Principles. The catalogue allows users to submit their experiences as use cases, where they can offer guidance, insights, and a general assessment of the tool. Each use case is linked to the tool it evaluates for easy access.

Background reading: the OECD report “AI Openness: A Primer for Policymakers” explores the concept of AI openness, including relevant terminology and the different degrees of openness. It explains why the term “open source,” a term rooted in software, does not fully capture the complexities specific to AI. This report analyses current trends in open-weight foundation models using experimental data, illustrating both their potential benefits and associated risks. By presenting information clearly and concisely, the report seeks to support policy discussions on how to balance the openness of generative AI foundation models with responsible governance.