
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Figma faces a class-action lawsuit in California for allegedly using proprietary, user-uploaded data without consent to train its generative AI models, in violation of intellectual property rights. The suit claims Figma contradicted its promises not to use customer data for AI training, causing potential economic harm.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves generative AI models trained on customer data without consent, allegedly violating intellectual property rights and trade secret protections. It fits the definition of an AI Incident because the development and use of an AI system directly led to a breach of obligations under applicable law intended to protect intellectual property rights. The harm is realized in the form of alleged unauthorized use of customer data and potential economic damage to customers, as reflected in the lawsuit. This event therefore qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]