Date and time: 17 February, 1:30 PM to 2:25 PM (IST)
Venue: Room 16, Bharat Mandapam Convention Centre, New Delhi, India
Overview
As AI systems grow in sophistication and scale, the tools we use to ensure their safety should be equally robust and, crucially, universally accessible. The India AI Impact Summit emphasises that meaningful innovation depends on a global ecosystem where every nation has the building blocks to design, measure, and evaluate AI with confidence.
Towards an open source ‘trustworthiness’ layer
In the early days of personal computing, open source antivirus software helped make digital security accessible. By turning a complex technical challenge into a practical tool, it gave non-experts the confidence to use computers for work, finances, and everyday life.
Today, we face a similar moment with AI, though the stakes are different. Trustworthy AI is not only about preventing security breaches, but about understanding and verifying how systems behave and what outputs they produce. An AI system that offers biased medical advice, generates inappropriate content, or fabricates legal precedents may not have been “hacked”, but it is still unreliable and unsafe to use.
While tools to assess AI systems do exist, many are proprietary or require significant technical expertise. This session argues that if AI systems are to be truly trustworthy, these capabilities must be more widely accessible. We need open source tools that allow a broad range of users – not only technology companies and specialists – to test, measure, and assess whether AI systems behave as intended and respect legal frameworks and fundamental rights.
Session focus
Co-hosted by the OECD, the India AI Impact Summit, Mozilla, ROOST, the UK AI Security Institute, and Mistral AI, this panel will explore the practical landscape of open source tooling for trustworthy AI. Our experts will:
- Take stock of the current open source tooling ecosystem, highlighting key gaps and challenges.
- Showcase open source tools that enable both technical and non-technical stakeholders to monitor and assess AI safety, security, and trustworthiness.
- Examine how open source approaches can help build capacity in underrepresented regions and communities.
- Present the OECD.AI Catalogue of Tools and Metrics for Trustworthy AI and launch an open call for submissions of open source tools, inviting AI practitioners worldwide to contribute. Selected tools will be featured on OECD.AI and promoted through social media and other communication channels.
Panellists:
The session will be moderated by Karine Perset, the OECD’s Deputy Head of Division on Artificial Intelligence and Emerging Digital Technologies. She will be joined by the following panellists:
- Amanda Brock, CEO of OpenUK
- Audrey Herblin-Stoop, Senior Vice President Global Public Affairs and Communications at Mistral AI
- Oliver Jones, Deputy Director for International AI Policy at the UK AI Security Institute
- Balaraman Ravindran, Head of the Department of Data Science and Artificial Intelligence at IIT Madras
- Mark Surman, President at Mozilla