Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Evaluating recommender systems in relation to illegal and harmful content

Recommender systems are widely used on user-to-user (U2U) services, helping people find the content they love while also helping creators find their audiences. In this way, recommender systems improve allocative efficiency in the digital marketplace, reducing the time and cost of matching creators with their audiences.

Yet, depending on how they are designed, recommender systems may disseminate illegal and harmful material where it has not been detected and removed by content moderation procedures. It is therefore important that online services evaluate their recommender systems diligently to uncover risks associated with particular design choices.

Ofcom commissioned Pattrn Analytics & Intelligence (Pattrn.AI) (affiliated with Oxford University) to examine possible methods for conducting such evaluations. Their report:

  • Explains how recommender systems operate, and the various design choices that engineers make as they build and maintain these systems
  • Sets out a number of methods for evaluating the impact of those design choices (e.g. via A/B testing, “debugging” exercises, and user surveys)
  • Assesses those methods according to several criteria, including the quality of the insights they generate, their costs, and any ethical risks involved
  • Suggests several points of best practice in the evaluation of recommender systems, regardless of the measures used (e.g. committing to regular evaluations rather than one-off exercises)
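One of the evaluation methods listed above, A/B testing, can be illustrated with a minimal sketch: exposing two recommender variants to comparable user groups and comparing the rate at which each surfaces moderator-flagged content, here via a two-proportion z-test. The metric, counts, and choice of test are illustrative assumptions; the report does not prescribe a specific statistical procedure.

```python
import math

def two_proportion_ztest(harmful_a, total_a, harmful_b, total_b):
    """Two-sided z-test for the difference between two exposure rates."""
    p_a = harmful_a / total_a
    p_b = harmful_b / total_b
    # Pooled rate under the null hypothesis that both variants are equal.
    pooled = (harmful_a + harmful_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: impressions of moderator-flagged content per arm.
p_a, p_b, z, p = two_proportion_ztest(120, 50_000, 90, 50_000)
print(f"control rate={p_a:.4%}, variant rate={p_b:.4%}, z={z:.2f}, p={p:.3f}")
```

A small p-value would suggest the variants genuinely differ in how often they recommend flagged content, though, as the report's criteria note, statistical significance must be weighed against the cost and ethical risk of exposing live users to a potentially more harmful variant.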

The research involved a combination of desk research, expert interviews and workshops with data scientists and engineers.

The findings will inform the development of Ofcom’s policy guidance for the new Online Safety regime. The report does not constitute official guidance in its own right.

See the Evaluating recommender systems in relation to illegal and harmful content (PDF, 1.3 MB) report for more information.


Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.