
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
An Indiana lawyer named Mark Zuckerberg repeatedly had his Facebook business page disabled by Meta's AI-driven moderation system, which misidentified him as impersonating the company's CEO. Despite providing proof of identity, his account was banned five times over eight years, causing financial loss and business disruption.[AI generated]
Why is our monitor labelling this an incident or hazard?
The account disabling stems from Meta's automated review system, which likely uses AI for content and identity verification. The repeated wrongful disabling of the lawyer's account directly harmed his business and finances, constituting harm to property and economic interests. This therefore qualifies as an AI Incident: the AI system's malfunction (misclassification) directly led to harm.[AI generated]