Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Living guidelines on the responsible use of Generative AI in research

Different institutions, including universities, research organisations, funding bodies and publishers, have issued guidance on how to use generative AI tools appropriately and ensure that their benefits are fully realised. This proliferation of guidelines and recommendations has created a complex landscape, making it difficult to decide which guidelines should be followed in a particular context.

For this reason, the European Research Area Forum (composed of European countries and research and innovation stakeholders) decided to develop guidelines on the use of generative AI in research for funding bodies, research organisations and researchers, in both the public and private research ecosystems.

These guidelines focus on one particular area of AI used in the research process, namely generative Artificial Intelligence. This is an important step towards preventing misuse and ensuring that generative AI plays a positive role in improving research practices. One of the goals of these guidelines is to ensure that the scientific community uses this technology in a responsible manner. Yet, the development of a robust framework for generative AI in scientific research cannot be the sole responsibility of policymakers (at European and national levels). Universities, research organisations, funding bodies, research libraries, learned societies, publishers and researchers at all stages of their careers are essential in shaping the discussion on AI and how it can serve the public interest in research. They should all actively engage in discussions about the responsible and effective deployment of AI applications, promoting awareness and cultivating responsible use of AI as part of a research culture based on shared values. Rules and recommendations must go hand in hand with broad engagement of those involved in public and private research, both organisations and individuals, to develop a culture of using generative AI in research appropriately and effectively.

These guidelines intend to set out common directions on the responsible use of generative AI. They should be considered a supporting tool for research funding bodies, research organisations and researchers, including those applying to the European Framework Programme for Research and Innovation. They are not binding. They take into account key principles of research integrity as well as existing frameworks for the use of AI, both in general and in research specifically. Users of these guidelines are encouraged to adapt them to their specific contexts and situations, keeping proportionality in mind.

These guidelines complement and build on EU AI policy, including the Artificial Intelligence Act. They also complement other policy activities on the impact of AI in science, including the opinion of the Scientific Advice Mechanism (SAM) on AI and a policy brief published by the European Commission's Directorate-General for Research and Innovation, which frames the challenges and opportunities.

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.