Transparency Mechanisms for Social Media Recommender Algorithms: From Proposals to Action

May 18, 2025

Social media platforms rely on several kinds of AI technology for their operation. Much of the appeal of social media platforms comes from their ability to deliver content that is tailored to individual users. This ability is provided in large part by AI systems called recommender systems: these systems are the focus of our project. Recommender systems curate the ‘content feeds’ of platform users, using machine learning techniques to tailor each user’s feed to the kinds of items they have engaged with in the past. They essentially function as a personalised newspaper editor for each user, choosing which items to present and which to withhold. They rank amongst the most pervasive and influential AI systems in the world today.

The starting point for our project is a concern that recommender systems may lead users towards harmful content of various kinds. This concern is technical in origin, relating to the AI methods through which recommender systems learn. But it is also a social and political one, because the effects of recommender systems on platform users could have a significant influence on currents of political opinion. At present, there is very little public information about these effects: we know very little about how information is disseminated to users on social media platforms. It is vital that governments, and the public, have more information about how recommender systems steer content to platform users, particularly in domains of harmful content.

In the first phase of our project, we reviewed possible methods for studying the effects of recommender systems on user platform behaviour. We concluded that the best methods available for studying these effects are the methods that companies use themselves, which are only available internally to companies. We proposed transparency mechanisms, in which these company-internal methods are used to address questions in the public interest about possible harmful effects of recommender systems. We focussed on the domain of Terrorist and Violent Extremist Content (TVEC), because this type of content is already the focus of discussion in several ongoing initiatives involving companies, including the Global Internet Forum to Counter Terrorism (GIFCT) and the Christchurch Call to Eliminate TVEC Online. Our proposal was for a form of fact-finding study that, we argued, would surface relevant information about recommender system effects in this area without compromising the rights of platform users or the intellectual property of companies. We presented and argued for this proposed fact-finding study at last year’s GPAI Summit.

Over the past year, our project has pursued the practical goal of piloting our proposed fact-finding study in one or more social media companies. This has involved discussions with several companies, often mediated by governments, and participation in several international initiatives relating to TVEC, in particular the Christchurch Call and GIFCT. At the recent Christchurch Call Summit, a scheme for running a pilot project of the kind we advocate was announced: the initiative involves two governments (the US and New Zealand) and two tech companies (Twitter and Microsoft), and centres on the trialling of ‘privacy-enhancing technologies’ developed by a third organisation, OpenMined.
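To give a purely illustrative sense of how a privacy-enhancing technique could support a fact-finding study of this kind, the sketch below computes the sort of aggregate statistic such a study might report (the proportion of recommended items flagged as TVEC) and releases it with Laplace noise, in the standard manner of differential privacy. The synthetic data, the flagging function and the parameter choices are assumptions made for illustration only; they do not describe the methods of the announced pilot, OpenMined’s technologies, or any platform’s internal systems.

```python
import numpy as np

def dp_flagged_proportion(recommended_items, is_flagged, epsilon=1.0):
    """
    Proportion of recommended items that a classifier flags (e.g. as TVEC),
    released with Laplace noise so that the published aggregate satisfies
    differential privacy with parameter epsilon.
    """
    n = len(recommended_items)
    flagged_count = sum(1 for item in recommended_items if is_flagged(item))
    # Adding or removing one item changes the count by at most 1, so the
    # sensitivity is 1 and the Laplace scale is 1 / epsilon.
    noisy_count = flagged_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return float(np.clip(noisy_count / n, 0.0, 1.0))

# Illustrative run on synthetic data: hypothetical item IDs and a toy flag set.
flagged_ids = {"item_17", "item_42", "item_903"}
feed = [f"item_{i}" for i in range(1_000)]
print(dp_flagged_proportion(feed, lambda item: item in flagged_ids, epsilon=0.5))
```

The design point the sketch is meant to convey is that only the noised aggregate would leave the company; the underlying recommendation logs would remain internal.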
In this report, we summarise the discussions that led to this initiative, in the context of other ongoing work on transparency mechanisms for recommender systems.


Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.