Scaling Responsible AI Solutions: Challenges and Opportunities

May 18, 2025

Artificial intelligence (AI) promises solutions to challenges across many domains. However, there is now widespread understanding of the range of potential risks and harms to people and the planet that AI can produce if conceived, designed, and governed irresponsibly. In response, many proposals, frameworks and laws have been advanced for the responsible development and use of AI systems. In tandem, more and more AI ‘solutions’ are emerging around the world that attempt to contribute to the public good whilst upholding best-practice standards of responsibility.

It is important that AI systems that meet responsible AI (RAI) best practices and have positive socio-environmental impacts are supported to grow and reach the users and communities who could benefit from them. However, nascent AI projects have encountered challenges both in practically implementing RAI principles and in scaling. Key RAI challenges include mitigating bias and discrimination; ensuring representativeness and contextual appropriateness; providing transparency and explainability of processes and outcomes; upholding human rights; and ensuring that AI does not reproduce or exacerbate inequities. Frameworks for RAI have proliferated, but they tend to remain at a high level, without technical guidelines for implementation across different uses and contexts. At the same time, the process of scaling itself can introduce obstacles to realising or preserving RAI adherence.


Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.