
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Researchers found that X (formerly Twitter) swiftly removes AI-generated deepfake nude images when they are reported as copyright violations, but largely ignores reports filed under its nonconsensual-nudity policy. This disparity points to a significant failure in the platform's moderation processes, allowing harmful synthetic content to remain accessible and exposing gaps in victim protection.[AI generated]