Transparency Mechanisms for Social Media Recommender Algorithms: From Proposals to Action

May 27, 2025

Social media platforms rely on several kinds of AI technology for their operation. Much of the appeal of social media platforms comes from their ability to deliver content that is tailored to individual users. This ability is provided in large part by AI systems called recommender systems: these systems are the focus of our project. Recommender systems curate the ‘content feeds’ of platform users, using machine learning techniques to tailor each user’s feed to the kinds of items they have engaged with in the past. They essentially function as a personalised newspaper editor for each user, choosing which items to present and which to withhold. They rank amongst the most pervasive and influential AI systems in the world today.

The starting point for our project is a concern that recommender systems may lead users towards harmful content of various kinds. At its origin, this concern is a technical one, relating to the AI methods through which recommender systems learn. But it is also a social and political one, because the effects of recommender systems on platform users could have a significant influence on currents of political opinion.

At present, there is very little public information about the effects of recommender systems on platform users: we know very little about how information is disseminated to users on social media platforms. It is vital that governments, and the public, have more information about how recommender systems steer content to platform users, particularly in domains of harmful content.

In the first phase of our project, we reviewed possible methods for studying the effects of recommender systems on the platform behaviour of users. We concluded that the best methods available for studying these effects are the ones companies use themselves, and that these methods are only available internally to companies. We therefore proposed transparency mechanisms, in which these company-internal methods are used to address questions in the public interest about possible harmful effects of recommender systems.
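To make the core mechanism concrete, the sketch below shows a minimal content-based ranking loop of the general kind recommender systems build on: candidate items are scored by their similarity to a profile aggregated from the user’s past engagements. Everything here is illustrative, not a description of any platform’s actual system: the item catalogue, topic vectors, and function names are hypothetical, and production systems use far larger learned models trained on vast engagement logs. The feedback loop, in which past engagement shapes future exposure, is the point of the example.

```python
from collections import defaultdict
import math

# Hypothetical item catalogue: item id -> topic feature vector.
# Purely illustrative data, not drawn from any real platform.
ITEMS = {
    "a": {"sports": 1.0, "news": 0.2},
    "b": {"politics": 0.9, "news": 0.8},
    "c": {"sports": 0.7, "music": 0.5},
    "d": {"politics": 1.0},
}

def user_profile(engaged_item_ids):
    """Aggregate the topic vectors of items the user engaged with."""
    profile = defaultdict(float)
    for item_id in engaged_item_ids:
        for topic, weight in ITEMS[item_id].items():
            profile[topic] += weight
    return profile

def cosine(u, v):
    """Cosine similarity between two sparse topic vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def rank_feed(engaged_item_ids):
    """Rank unseen items by similarity to the user's engagement history."""
    profile = user_profile(engaged_item_ids)
    candidates = [i for i in ITEMS if i not in engaged_item_ids]
    return sorted(candidates, key=lambda i: cosine(profile, ITEMS[i]), reverse=True)

# A user who engaged only with the sports item "a" sees the other
# sports-heavy item "c" ranked first: ['c', 'b', 'd'].
print(rank_feed(["a"]))
```

Even in this toy setting, the user who engaged with sports content is shown more sports content first. It is this self-reinforcing dynamic that, at the scale of real platforms, could progressively narrow what users see, and that makes the question of harmful content trajectories an empirical one which only company-internal methods can currently answer.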


Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.