Patreon CEO Criticizes AI Firms for Using Creators' Work Without Consent or Compensation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Patreon CEO Jack Conte has condemned AI companies for training generative AI models on creators' work without consent, credit, or payment. He argues this practice harms creators' rights and economic interests, as their content is used to fuel AI systems without fair compensation or acknowledgment.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses how AI systems trained on artists' work are causing harm by undermining creators' livelihoods and violating their rights through uncompensated use of their content. The harm is realized and ongoing, not just potential. The involvement of AI systems in generating content based on training data sourced from creators is clear. The lack of regulation and fair compensation mechanisms is a contributing factor to this harm. Hence, this is an AI Incident involving violations of intellectual property rights and economic harm to creators.[AI generated]
AI principles
Fairness, Accountability

Industries
Arts, entertainment, and recreation

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Patreon CEO Sounds Off on AI: 'I'm Both Amazed and Furious': As a Creator, 'I'm Angry That We Aren't Being Paid' for Value of Contributing to AI Models

2026-03-10
Variety
Why's our monitor labelling this an incident or hazard?
The article centers on the broader societal and economic implications of AI use in creative work, focusing on the lack of compensation and consent for creators whose work may be used to train AI models. While it involves AI systems conceptually, it does not report a concrete incident of harm or a specific hazard event. It also includes information about Patreon's policies and stance on AI, which is informative but not indicative of an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context and insight into AI's impact on creators and ongoing governance and ethical discussions.

Patreon CEO says AI can unlock creativity, but lack of regulation is destroying creative people

2026-03-11
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses how AI systems trained on artists' work are causing harm by undermining creators' livelihoods and violating their rights through uncompensated use of their content. The harm is realized and ongoing, not just potential. The involvement of AI systems in generating content based on training data sourced from creators is clear. The lack of regulation and fair compensation mechanisms is a contributing factor to this harm. Hence, this is an AI Incident involving violations of intellectual property rights and economic harm to creators.

Patreon CEO Slams AI for Stealing From Artists

2026-03-11
Digital Music News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs) using artists' works as training data without consent or compensation, which is a breach of intellectual property and labor rights. This harm is ongoing and directly linked to the development and use of AI systems. The CEO's statements confirm that creators are currently harmed by these practices, fulfilling the criteria for an AI Incident. The event is not merely a discussion or potential risk but highlights realized harm to creators' rights and economic interests due to AI use.

Patreon CEO Urges AI Firms to Pay Creators Royalties for Training Data

2026-03-11
WebProNews
Why's our monitor labelling this an incident or hazard?
The article centers on a call for fair compensation to creators whose works are used to train AI systems, reflecting concerns about intellectual property rights and economic fairness. While it involves AI systems (e.g., OpenAI's ChatGPT, Meta's Llama) and their training data, it does not report any realized harm or incident caused by AI systems. Nor does it describe a specific event where AI use could plausibly lead to harm imminently. Instead, it provides context on ongoing debates, legal actions, and proposed solutions, which aligns with Complementary Information as it enhances understanding of AI ecosystem developments and governance responses without reporting a new AI Incident or AI Hazard.

Patreon CEO Says AI Is Using Creators' Work Without Consent, Credit or Pay

2026-03-11
Techloy
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI trained on creators' work) and discusses the lack of consent, credit, and compensation for creators, which relates to violations of intellectual property rights and economic harm. Although no specific incident of harm is reported, the ongoing use of creators' work without permission could plausibly lead to AI Incidents involving rights violations and economic harm. The CEO's warning and call for change indicate a credible risk of harm if the situation continues. Therefore, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Social media giants have grand plans for AI. Creators fear they'll be left out.

2026-03-12
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used by social media platforms to automatically tag and sell products from creators' posts without consent, which involves AI system use. The concerns raised by creators about loss of control, lack of compensation, and potential devaluation of their work indicate plausible future harms. However, no actual harm or incident is reported as having occurred yet. The AI systems' involvement is in their use by platforms, and the harms are potential economic and rights-related harms to creators. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.