BMG Sues Anthropic Over AI Training With Copyrighted Song Lyrics

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

BMG Rights Management has sued AI company Anthropic in California, alleging its Claude chatbot was trained on and reproduces copyrighted song lyrics from artists like Bruno Mars and the Rolling Stones without authorization. The lawsuit alleges direct copyright infringement by Anthropic's AI system, affecting music industry rights holders.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (large language models) in its development phase (training) where copyrighted material was allegedly used without permission, constituting a violation of intellectual property rights. This is a direct legal claim of harm (copyright infringement) caused by the AI system's development and use. Therefore, it qualifies as an AI Incident under the category of violations of intellectual property rights.[AI generated]
AI principles
Accountability
Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Business

Harm types
Economic/Property

Severity
AI incident

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard

BMG sues Anthropic for using Bruno Mars, Rolling Stones lyrics in AI training

2026-03-18
CNA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (large language models) in its development phase (training) where copyrighted material was allegedly used without permission, constituting a violation of intellectual property rights. This is a direct legal claim of harm (copyright infringement) caused by the AI system's development and use. Therefore, it qualifies as an AI Incident under the category of violations of intellectual property rights.

BMG sues Anthropic for using Bruno Mars, Rolling Stones lyrics in AI training By Reuters

2026-03-18
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) whose training data allegedly includes copyrighted content without authorization, leading to a legal claim of copyright infringement. This is a violation of intellectual property rights, which falls under the definition of harm (c) in the AI Incident framework. Since the infringement has already occurred and legal action is underway, this qualifies as an AI Incident rather than a hazard or complementary information.

BMG sues Anthropic for using Bruno Mars, Rolling Stones lyrics in AI training

2026-03-18
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) trained on copyrighted material without permission, leading to alleged copyright infringement. This is a direct violation of intellectual property rights, which falls under the definition of harm (c) in the AI Incident framework. The lawsuit indicates that the harm has already occurred due to unauthorized use of copyrighted works in AI training, making this an AI Incident rather than a potential hazard or complementary information.

BMG Sues Anthropic for Alleged Training of Chatbot With Justin Bieber, Bruno Mars, Rolling Stones Lyrics

2026-03-18
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude chatbot) whose development involved unauthorized use of copyrighted song lyrics, leading to a violation of intellectual property rights. This harm is materialized and ongoing, as per the lawsuit's claims. The AI system's development and use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete legal claim of harm caused by AI system development and use.

BMG sues Anthropic for using Bruno Mars, Rolling Stones lyrics in AI training

2026-03-18
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (large language models powering the Claude chatbot) whose development involved the use of copyrighted materials without authorization. This use constitutes a violation of intellectual property rights, which is one of the harms defined under AI Incidents. Since the lawsuit alleges that the infringement has already occurred and the AI system has been deployed, this is a realized harm, not just a potential one. Therefore, this qualifies as an AI Incident due to the breach of intellectual property rights caused by the AI system's development and use.

BMG Sues Anthropic, Entering AI Copyright Battlefield: 'Egregious Law-Breaking'

2026-03-18
Billboard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) whose training and output processes allegedly infringe on copyrighted works, causing direct harm to rights holders. The harm is a violation of intellectual property rights, fitting the definition of an AI Incident. The lawsuit details realized harm (copyright infringement) rather than potential harm, and the AI system's role is pivotal in causing this harm. Hence, it is not a hazard or complementary information but a clear AI Incident.

BMG sues Anthropic for infringement, alleging AI firm's $380B valuation was built on 'stolen copyrighted works'

2026-03-18
Music Business Worldwide
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude AI chatbot) whose development and use (training on copyrighted works and generating outputs reproducing copyrighted lyrics) has directly led to alleged violations of intellectual property rights. The lawsuit claims direct harm to BMG and songwriters due to unauthorized use of their copyrighted works in training data and AI outputs. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of intellectual property rights, a form of harm under category (c).

BMG Files Massive Infringement Lawsuit Against Anthropic

2026-03-19
Digital Music News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude) and details how its development and use allegedly caused harm through copyright infringement, including unauthorized training on protected works and generation of infringing outputs. The harms described are realized and ongoing, involving violations of intellectual property rights. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is materialized and directly linked to the AI system's use.

BMG Sues Anthropic for Using Bruno Mars, Rolling Stones Lyrics in AI Training

2026-03-18
GV Wire
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Anthropic used copyrighted lyrics to train its AI models without permission, infringing hundreds of copyrights. This is a direct violation of intellectual property rights, which falls under harm category (c) in the AI Incident definition. The AI system (large language models) is central to the harm, as the training process involved unauthorized use of copyrighted material. Hence, this is an AI Incident rather than a hazard or complementary information.

BMG sued Anthropic over song lyric training?

2026-03-19
AllToc
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a chatbot) trained on copyrighted material without rights, which constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. The lawsuit indicates that the AI system's development and use have directly led to a breach of legal obligations protecting intellectual property rights. Therefore, this qualifies as an AI Incident.