
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Investigations revealed that several AI-powered content detection tools falsely label genuine human-written texts as AI-generated, leading to reputational harm and extortion attempts. These tools mislead users, damage credibility, and financially exploit individuals by offering paid services to 'humanize' content, exacerbating misinformation and eroding trust online.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI-based text detection tools. Their malfunction (false positives) and deceptive use (charging to 'humanize' texts) have directly caused harm by misleading users, damaging reputations, and contributing to misinformation. The harms include violations of rights (reputational harm), harm to communities (misinformation), and financial exploitation. The article documents realized harms, not merely potential risks, and the AI systems' role is pivotal. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]