
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Voice actors have publicly denounced the use of AI voice cloning tools such as ElevenLabs to create non-consensual pornographic mods for Skyrim. These AI-generated deepfakes exploit actors' voices without permission, violating their rights and causing reputational and personal harm, while modding platforms have so far declined to ban such content.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated voice acting produced with voice cloning technology (an AI system) that impersonates real voice actors without their consent, particularly in pornographic content. This unauthorized use directly violates the actors' rights and causes them harm. Because the use of the AI system directly led to this harm, the event meets the criteria for an AI Incident under violations of human rights and intellectual property rights: the presence of the AI system is clear, the harm is realized, and the event is a current incident rather than a mere potential risk or piece of complementary information.[AI generated]