Figma Sued for Using User Data Without Consent to Train AI


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Figma faces a class action lawsuit in California for allegedly using proprietary user-uploaded data without consent to train its generative AI models, in alleged violation of intellectual property rights. The suit claims Figma contradicted its own promises not to use customer data for AI training, causing potential economic harm to customers.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (generative AI models) trained on customer data without consent, allegedly violating intellectual property rights and trade secret protections. This fits the definition of an AI Incident because the development and use of AI systems have allegedly led directly to a breach of obligations under applicable law intended to protect intellectual property rights. The harm is realized in the form of alleged unauthorized use and potential economic damage to customers, as reflected in the lawsuit. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Privacy & data governance
Transparency & explainability

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Economic/Property
Reputational

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard


Figma sued for allegedly misusing customer data for AI training - The Economic Times

2025-11-21
Economic Times

Figma Trained AI on User Data Without Consent, Class Action Says

2025-11-21
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The complaint alleges that Figma used user-uploaded proprietary data without permission to train AI models, which, if proven, would violate intellectual property rights and possibly other legal obligations. This is a direct harm related to the development and use of an AI system, fulfilling the criteria for an AI Incident under violations of intellectual property rights. The event is not merely a potential risk but an actual alleged misuse that has led to legal action, indicating realized harm.

Figma hit with class action for using customer designs to train AI

2025-11-22
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI models) trained on proprietary customer data without consent, leading to alleged violations of intellectual property and trade secret rights. If substantiated, this would constitute a breach of obligations under applicable law protecting intellectual property rights, fulfilling the criteria for an AI Incident. The harm is not hypothetical but alleged to have already occurred, with legal action underway. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.