Writers Guild Urges Legal Action Against AI Firms for Copyright Infringement

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Writers Guild of America has urged major Hollywood studios to take legal action against AI companies that used copyrighted film and TV subtitles to train their models without permission. The Guild accuses tech firms such as Apple and Nvidia of intellectual property theft, demanding that studios defend writers' rights against unauthorized use.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes real, ongoing use of copyrighted works to train generative AI models—an unauthorized infringement of writers’ IP and a breach of their labor agreement. This constitutes a direct violation of intellectual property rights due to AI system development and use, meeting the definition of an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability; Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing; IT infrastructure and hosting

Affected stakeholders
Workers

Harm types
Economic/Property; Reputational

Severity
AI incident

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard

WGA Slams Studios For Not Protecting Copyrighted Works Used In Generative AI Training Models: "Come Off The Sidelines"

2024-12-12
Deadline
Why's our monitor labelling this an incident or hazard?
The article describes real, ongoing use of copyrighted works to train generative AI models—an unauthorized infringement of writers’ IP and a breach of their labor agreement. This constitutes a direct violation of intellectual property rights due to AI system development and use, meeting the definition of an AI Incident.

Writers Guild demands studios stop tech companies from training AI on their work

2024-12-12
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The unauthorized use of copyrighted scripts and subtitles to train AI models constitutes a violation of intellectual property rights, directly harming writers. This is a realized harm (not merely potential), and the AI systems’ development and use are central to the infringement. Accordingly, it meets the criteria for an AI Incident.

WGA Sends Letter to Studios, Urging Lawsuits Against AI Plagiarism: 'Inaction has Harmed WGA Members'

2024-12-12
Variety
Why's our monitor labelling this an incident or hazard?
The article describes AI systems being trained on stolen, copyrighted works—an actual violation of IP rights. This is direct harm (copyright infringement) caused by AI development/use, fitting the definition of an AI Incident under category (c) (violation of intellectual property rights).

Writers Guild Calls on Studios to Take "Immediate Legal Action" Against AI Companies

2024-12-12
The Hollywood Reporter
Why's our monitor labelling this an incident or hazard?
While the piece centers on alleged copyright violations by AI model developers (a form of IP rights harm), it is primarily about the union’s demand for studios to enforce contracts and pursue lawsuits. This constitutes a governance/legal proceeding response to AI-related issues, fitting the definition of Complementary Information rather than a standalone AI Incident or Hazard.

Writers Guild demands studios start suing tech companies for AI plagiarism

2024-12-13
The A.V. Club
Why's our monitor labelling this an incident or hazard?
The core narrative is the Guild’s call for legal action (a governance response) to alleged past AI training on copyrighted scripts. The actual copyright infringement occurred earlier and is provided as background. Therefore, this is complementary information, not a brand-new incident or hazard.

WGA Calls on Studios to Take 'Immediate Legal Action' on AI Companies Using Subtitles to Train Their Models

2024-12-12
TheWrap
Why's our monitor labelling this an incident or hazard?
The main focus is on the union’s appeal for studios to pursue lawsuits, representing a governance and legal response to previously reported AI misuse of copyrighted material. It does not itself describe a newly unfolding incident of harm or a hazard, but rather an update on potential legal actions and industry reaction—making it Complementary Information.

WGA Tells Studios to Stop Letting AI Companies 'Plunder Entire Libraries' of Hollywood Writing

2024-12-12
IndieWire
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems trained on copyrighted Hollywood writing without consent, which is a breach of intellectual property rights. The WGA's letter highlights that this unauthorized use harms the rights of writers and studios. Since the AI systems' development and use have directly led to a violation of intellectual property rights, this qualifies as an AI Incident under the OECD framework. The event is not merely a potential risk or a general discussion but concerns realized harm through unauthorized AI training data use.

Writers Guild demands studios stop tech companies from training AI on their work

2024-12-13
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI systems have been trained on copyrighted works (subtitles from movies and TV episodes) without permission, which is a violation of intellectual property rights. The Writers Guild claims harm to its members due to this unauthorized use. The AI systems' development and use have directly led to this harm. Hence, this qualifies as an AI Incident under the framework, specifically under violation of intellectual property rights.

WGA tells studios to sue AI trainers

2024-12-13
Advanced-television
Why's our monitor labelling this an incident or hazard?
The article describes how AI models have been trained on copyrighted scripts and subtitles without permission, which the WGA claims is theft of intellectual property. This use of copyrighted material as AI training data directly violates the intellectual property rights of writers and studios. The involvement of AI systems in training on this data, and the resulting legal and rights issues, meet the criteria for an AI Incident. The event is not merely a potential risk or a general discussion but involves realized harm through unauthorized use of copyrighted material for AI training.

WGA urges US studios to take action over AI copyright 'theft'

2024-12-13
Broadcast
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems have been trained on copyrighted works without authorization, which is a violation of intellectual property rights (a breach of applicable law protecting intellectual property). The WGA highlights that this unauthorized use has harmed its members, indicating realized harm. The AI systems' development and use directly involve the unauthorized training on these works, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a response update but a claim of actual harm caused by AI system use, thus it is classified as an AI Incident.

WGA calls on Hollywood studios to combat AI plagiarism

2024-12-13
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
The article describes how AI models have been trained on large libraries of copyrighted movies and TV episodes without authorization, which the WGA claims constitutes theft of intellectual property. This use of AI systems has directly harmed writers and artists by infringing on their rights and potentially undermining their livelihoods. The ongoing legal case further confirms that harm has occurred and is being addressed. The event therefore meets the criteria for an AI Incident due to violations of intellectual property rights caused by AI system development and use.

Large studios will likely take their time adopting generative AI for content creation. Social media isn't hesitating.

2024-12-16
Deloitte Insights
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and challenges studios face with generative AI, such as IP infringement liability, copyright issues, and labor union opposition. However, it does not report any actual harm, malfunction, or misuse of AI systems that have led to injury, rights violations, or other harms. The concerns are about plausible future risks and strategic responses rather than an event where AI has caused harm. Therefore, this is best classified as Complementary Information, providing context and updates on societal, legal, and governance responses related to AI adoption in the media industry.