Music Publishers Sue Anthropic Over AI Copyright Infringement

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Universal Music Group, Concord, and ABKCO have sued Anthropic, alleging that its AI chatbot Claude was trained on and reproduces copyrighted song lyrics without permission. The publishers argue that this infringes their intellectual property rights and competes with their licensing market, and they are challenging the AI company's 'fair use' defense in California court.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Anthropic's Claude) used to generate content based on copyrighted song lyrics without permission, which the publishers claim infringes their copyrights. This is a direct violation of intellectual property rights, a category of harm defined under AI Incidents. The lawsuit and allegations indicate that the AI's use has already caused harm by reproducing copyrighted material and competing with the original market. Although the legal outcome is pending, the described harm has materialized and is not merely potential, making this an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Business

Harm types
Economic/Property

Severity
AI incident

Business function
Other

AI system task
Content generation
Interaction support/chatbots


Articles about this incident or hazard

US music publishers suing Anthropic make their case against AI 'fair use'

2026-03-24
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) whose use is alleged to have caused harm through copyright infringement, a violation of intellectual property rights. This would fit the definition of an AI Incident if the harm were confirmed. However, since the case is ongoing and the harm is alleged rather than legally established, the article primarily reports on the legal dispute and the potential for harm rather than on a confirmed AI Incident. It is therefore best classified as Complementary Information: it provides context and updates on societal and legal responses to AI-related copyright issues without confirming a realized AI Incident or hazard.
US music publishers suing Anthropic make their case against AI 'fair use' - The Economic Times

2026-03-24
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) used to generate content based on copyrighted song lyrics without permission, which the publishers claim infringes their copyrights. This is a direct violation of intellectual property rights, a category of harm defined under AI Incidents. The lawsuit and allegations indicate that the AI's use has already caused harm by reproducing copyrighted material and competing with the original market. Although the legal outcome is pending, the described harm has materialized and is not merely potential, making this an AI Incident rather than a hazard or complementary information.
US music publishers sue Anthropic, tell court: Anthropic has "committed copyright infringement on a massive scale" and that ...

2026-03-24
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) that was trained on copyrighted song lyrics without permission and allegedly reproduces these lyrics, causing harm to the rights holders. This is a direct violation of intellectual property rights, which is one of the harms defined under AI Incidents. The involvement of the AI system in the infringement is explicit and central to the case. The harm is realized (copyright infringement), not just potential, and the event concerns the use of the AI system, not just its development. Hence, it meets the criteria for an AI Incident.
UMG Tells Judge Anthropic AI Training Was Clearly Not Fair Use: 'The Evidence Is Overwhelming'

2026-03-24
Billboard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) whose development and use allegedly infringed intellectual property rights by using copyrighted song lyrics without permission. This constitutes a violation of legal obligations protecting intellectual property rights, which fits the definition of an AI Incident under category (c). The harm is realized, as the plaintiffs claim direct competition and unauthorized reproduction of copyrighted works by the AI system. Therefore, this is an AI Incident.
Publishers Say Anthropic's Use Of Lyrics Violates Copyrights - Law360

2026-03-24
law360.com
Why's our monitor labelling this an incident or hazard?
The article describes a legal claim that Anthropic's AI system used copyrighted lyrics without permission to train or build a commercial product, which constitutes a violation of intellectual property rights. This is a direct harm caused by the AI system's development and use, meeting the criteria for an AI Incident.
'Infringement on a massive scale': UMG, Concord and ABKCO ask court to rule against AI company Anthropic ahead of trial

2026-03-24
Music Business Worldwide
Why's our monitor labelling this an incident or hazard?
The article explicitly details that Anthropic's AI system Claude was trained on copyrighted lyrics without authorization and that it outputs those lyrics or derivative works, directly infringing on copyright holders' rights. This is a clear violation of intellectual property rights (harm category c). The involvement of the AI system is direct and central, as the infringement arises from the AI's training data and its generated outputs. The harm is realized and ongoing, as evidenced by the lawsuits and the plaintiffs' claims. Hence, this event meets the criteria for an AI Incident.
Major Publishers Make a Decisive Legal Strike Against Anthropic

2026-03-25
Digital Music News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude) and its development (training on copyrighted song lyrics without authorization). The alleged harm is a violation of intellectual property rights arising from unauthorized use of copyrighted material, a breach of applicable law. Since the harm is realized and the case is being actively pursued in court, this qualifies as an AI Incident under the framework. The event is not merely a potential risk or a complementary update but a concrete legal action addressing actual harm caused by the AI system's use.