Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

BigScience BLOOM Responsible AI License (RAIL) 1.0

BigScience is an ongoing collaborative open science initiative in which a large number of researchers from all over the world work together to train a large language model. Everything happens completely in the open: anyone can participate, and all research artifacts are shared with the entire research community. Consequently, BigScience would like to ensure free worldwide access to its Large Language Models (“LLMs”) by taking a multicultural and responsible approach to the development and release of these artifacts.

We feel that there is a balance to be struck between maximizing access to and use of LLMs on the one hand and, on the other, mitigating the risks associated with use of these powerful models, which could cause harm and negatively impact society. The fact that a software license is deemed “open” (e.g., an “open source” license) does not inherently mean that use of the licensed material will be responsible. While the principles of ‘openness’ and ‘responsible use’ can create friction, they are not mutually exclusive, and we strive for a balanced approach to their interaction.

Conscious of LLMs’ capabilities, and to promote their responsible development and use, we designed a Responsible AI License (“RAIL”) for the use (in the broadest sense of the word) of the model. Open and responsible licensing can be a very impactful tool for trustworthy AI. In the case of BLOOM RAIL, the license provides a permissive IP grant while restricting use of the model for a set of use cases that the BigScience community was reluctant to enable, whether due to the model’s technical limitations or due to ethical and legal concerns, both grounded in the community’s Ethical Charter and in upcoming AI regulations such as the EU AI Act. RAIL licenses can have a positive impact in protecting human rights such as privacy and dignity while promoting open access to and use of AI.

Literature references of interest:

Behavioral Use Licensing for Responsible AI: https://arxiv.org/pdf/2011.03116.pdf

Limits and Possibilities for “Ethical AI” in Open Source: A Study of Deepfakes: https://davidwidder.me/files/widder-ossdeepfakes-facct22.pdf

Implementing Responsible AI: Proposed Framework for Data Licensing: https://www.gmu.edu/news/2022-04/no-10-implementing-responsible-ai-proposed-framework-data-licensing

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.