
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Multiple reports reveal that Twitter's AI-driven recommendation algorithm has promoted child sexual abuse material (CSAM), leading to widespread viewing and sharing of illegal content. Despite Elon Musk's assurances and some content removals, significant CSAM persists on the platform and is amplified by its AI systems, causing ongoing harm. [AI generated]
































