
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Whistleblowers revealed that Meta and TikTok deliberately weakened content moderation and prioritized engagement, allowing their AI recommendation algorithms to amplify harmful content depicting violence, exploitation, and extremism. Internal research showed that these decisions directly exposed users to harm as the companies competed for user attention and business growth.[AI generated]



































