
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Mercor, a $10 billion AI startup that supplies training data to firms such as OpenAI, Anthropic, and Meta, faces at least seven class-action lawsuits after a third-party data breach exposed sensitive contractor information, including biometrics and screenshots of contractors' computers. Plaintiffs allege improper data collection, monitoring, and sharing practices in violation of privacy and labor laws.[AI generated]
Why is our monitor labelling this an incident or hazard?
Mercor's AI training operations involve collecting and processing extensive personal and proprietary data from contractors, which is integral to AI system development. The data breach and alleged unauthorized sharing of sensitive information directly harmed individuals' privacy and may have violated intellectual property rights. These harms constitute breaches of human rights and legal obligations, meeting the criteria for an AI incident. The involvement of AI systems in data collection, training, and monitoring (e.g., AI proctoring and screenshot-capturing software), together with the resulting lawsuits, confirms direct harm linked to the use and malfunction (data breach) of AI systems.[AI generated]