Democratising trust: Open source tooling for safe, secure and trustworthy AI

An official side event of the India AI Impact Summit 2026

Date and time: 17 February 2026, 1:30 PM to 2:25 PM (IST)

Venue: Room 9, Bharat Mandapam Convention Centre, New Delhi, India

Overview

As AI systems grow in sophistication and scale, the tools we use to ensure their safety should be equally robust and – crucially – universally accessible. The India AI Impact Summit emphasises that meaningful innovation depends on a global ecosystem where every nation has the building blocks to design, measure, and evaluate AI with confidence.

Towards an open source ‘trustworthiness’ layer

In the early days of personal computing, open source antivirus software helped make digital security accessible. By turning a complex technical challenge into a practical tool, it gave non-experts the confidence to use computers for work, finances, and everyday life.

Today, we face a similar moment with AI – though the stakes are different. Trustworthy AI is not only about preventing security breaches, but about understanding and verifying how systems behave and what outputs they produce. An AI system that offers biased medical advice, generates inappropriate content, or fabricates legal precedents may not be “hacked”, but it is still unreliable and unsafe to use.

While tools to assess AI systems do exist, many are proprietary or require significant technical expertise. This session argues that if AI systems are to be truly trustworthy, these capabilities must be more widely accessible. We need open source tools that allow a broad range of users – not only technology companies and specialists – to test, measure, and assess whether AI systems behave as intended and respect legal frameworks and fundamental rights.

Session focus

Co-hosted by the OECD, the India AI Impact Summit, Mozilla, ROOST, the UK AI Security Institute, and Mistral AI, this panel will explore the practical landscape of open source tooling for trustworthy AI. Our experts will:

  • Take stock of the current open source tooling ecosystem, highlighting key gaps and challenges.
  • Showcase open source tools that enable both technical and non-technical stakeholders to monitor and assess AI safety, security, and trustworthiness.
  • Examine how open source approaches can help build capacity in underrepresented regions and communities.
  • Present the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI and launch an open call for submissions of open source tools, inviting AI practitioners worldwide to contribute. Selected tools will be featured on OECD.AI and promoted through social media and other communication channels.

Call for submissions: open source tools for trustworthy AI

As AI systems scale, trust can’t remain locked behind proprietary tools or limited to technical experts. It needs to be open, accessible, and global.

Together, the OECD, Mozilla, the India AI Impact Summit, Mistral AI, the UK AI Security Institute, and ROOST are inviting AI practitioners, researchers, and builders worldwide to submit open source tools that help assess, measure, and monitor AI safety, security, and trustworthiness.

Why submit?

  • Selected tools will be featured in the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI
  • Featured tools will be promoted globally through OECD and partner channels
  • Submissions will contribute directly to shaping the upcoming Trusted AI Commons – a key deliverable of the India AI Impact Summit

Background

This event is a cornerstone of the Summit’s “Safe and Trusted AI” Chakra. Tools submitted through the call for submissions will inform the forthcoming “Trusted AI Commons”.

The Catalogue of Tools and Metrics for Trustworthy AI is a platform where AI practitioners from around the world can share and compare tools, build on each other’s efforts to develop global best practices, and accelerate implementation of the OECD AI Principles. Users can also submit their experiences with a tool as use cases, offering guidance, insights, and an overall assessment. Each use case is linked to the tool it evaluates for easy access.

Background reading: the OECD report “AI Openness: A Primer for Policymakers” explores the concept of openness in AI, including relevant terminology and the different degrees of openness that can exist. It explains why “open source,” a term rooted in software, does not fully capture the complexities specific to AI. The report analyses current trends in open-weight foundation models using experimental data, illustrating both their potential benefits and associated risks. By presenting this information clearly and concisely, it seeks to support policy discussions on how to balance the openness of generative AI foundation models with responsible governance.