Dutch Authors and Journalists Demand Meta Stop Using Copyrighted Works for AI Training


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Dutch writers, translators, and journalists, represented by the Auteursbond, NVJ, and Stichting Lira, have formally demanded that Meta cease using their copyrighted texts without permission or payment to train AI models like Llama. They allege this practice violates intellectual property rights and undermines creators' economic interests.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (Meta's AI language model Llama) trained on copyrighted works without authorization, which constitutes a violation of intellectual property rights (harm category c). The unions' demand to stop using these datasets and the threat of legal action indicate that the harm has already occurred due to the AI system's development and use. Therefore, this qualifies as an AI Incident because the AI system's development and use have directly led to a breach of legal obligations protecting intellectual property rights. The event is not merely a potential risk or a complementary update but a concrete incident of harm related to AI.[AI generated]
AI principles
Accountability, Fairness

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard


Authors demand that Meta stop using their work to train AI

2026-02-27
NOS

Dutch writers formally demand that Meta stop using their texts for AI

2026-02-27
de Volkskrant
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (language models like Meta's Llama and Anthropic's Claude) trained on copyrighted texts without authorization, which the authors claim violates their intellectual property rights. The use of illegal datasets like LibGen for AI training is central to the dispute. The harm is a violation of intellectual property rights (a recognized harm category) caused by the AI systems' development and use. Although some courts have ruled in favor of the AI companies under fair use, the authors' associations and legal actions indicate ongoing harm and disputes. This fits the AI Incident definition as the AI systems' development and use have directly or indirectly led to a breach of intellectual property rights.

Dutch authors' unions formally demand that Meta stop the illegal use of texts

2026-02-27
NRC
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Meta used texts from Dutch authors without permission or payment to train AI models, which is illegal and violates copyright law. This unauthorized use directly harms authors by infringing their intellectual property rights and economically by reducing their opportunities, especially for translators. The AI system's development and use are central to this harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities.

Writers demand that Meta's AI no longer use their work

2026-02-27
Nederlands Dagblad
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Meta used copyrighted works without authorization to train AI models, which is a violation of intellectual property rights. This development practice has directly breached legal obligations intended to protect intellectual property rights, fitting the definition of an AI Incident under category (c).

Dutch writers formally demand that Meta stop using their texts for AI

2026-02-27
Nederlands Dagblad
Why's our monitor labelling this an incident or hazard?
The use of copyrighted material without permission for AI training directly breaches intellectual property rights, which is a recognized harm under the AI Incident definition. Since the AI system's development involved unauthorized use of protected content, this qualifies as an AI Incident due to violation of legal and fundamental rights related to intellectual property.

Writers demand that Meta's AI no longer use their work

2026-02-27
Nieuws.nl
Why's our monitor labelling this an incident or hazard?
The article describes that Meta has used large amounts of copyrighted works, including Dutch books and articles, without permission or compensation, to train AI models. This unauthorized use of protected content directly violates intellectual property rights, which falls under the category of harm (c) in the AI Incident definition. Since the use has already occurred and the AI models have been developed and offered, this is a realized harm, not just a potential one. Therefore, this event qualifies as an AI Incident due to violation of intellectual property rights caused by the AI system's development and use.

Writers demand that Meta's AI no longer use their work

2026-02-27
ThePostOnline
Why's our monitor labelling this an incident or hazard?
The use of copyrighted material without authorization in AI training is a breach of intellectual property rights, which falls under harm category (c). Since the AI models have already been developed and offered, and the organizations are demanding cessation, it indicates that the harm has occurred. Therefore, this event qualifies as an AI Incident due to violation of intellectual property rights caused by the AI system's development and use.

Writers and journalists formally demand that Meta stop using illegal AI training data

2026-02-27
FOK!
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Meta used copyrighted texts without permission or payment to train its AI models, which is a violation of intellectual property rights. This use is part of the AI system's development process and directly breaches legal protections for authors, translators, and journalists. The harm is realized in the form of rights violations and financial and creative harm to the content creators. This fits the definition of an AI Incident under category (c) "Violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labor, and intellectual property rights." The event is not merely a potential risk or a complementary update but a concrete claim of illegal AI training data use causing harm.

Journalists and writers demand that Meta's illegal use of their...

2026-02-27
Dutch IT Channel
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Meta's AI language models) and their development (training on datasets). The core issue is the alleged illegal use of copyrighted works without permission or payment, which is a violation of intellectual property rights (a form of harm under the framework). However, the article does not report a concrete AI Incident where harm has already occurred or been legally established, nor does it describe a hazard scenario of plausible future harm. Instead, it details advocacy and legal demands by authors and journalists seeking to stop the unauthorized use and to negotiate fair compensation. This fits the definition of Complementary Information, as it is a societal and governance response to AI-related rights issues, enhancing understanding of the AI ecosystem and its challenges without reporting a new incident or hazard itself.