David Guetta Uses AI to Deepfake Eminem's Voice, Raising Intellectual Property Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

David Guetta used AI tools to generate lyrics and a deepfake of Eminem's voice for a new song, which he played at a live show. The act, done without Eminem's consent, sparked debate over AI's potential to infringe on artists' rights and mislead audiences, highlighting ethical and legal risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the use of AI tools to create a synthetic vocal performance of Eminem without his involvement, which implicates intellectual property rights and consent issues. While these are serious concerns, the article states that the track was not commercially distributed and does not report any legal action or harm having occurred. The event highlights potential future harms and legal challenges but does not document a realized harm or incident. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of intellectual property rights or other harms if such uses become widespread or commercialized without permission.[AI generated]
AI principles
Accountability; Transparency & explainability; Privacy & data governance

Industries
Arts, entertainment, and recreation

Affected stakeholders
Workers; Consumers

Harm types
Economic/Property; Reputational; Public interest

Severity
AI hazard

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

David Guetta 'Collaborates' with Eminem Through Use of AI Technology

2023-02-08
Just Jared
Why's our monitor labelling this an incident or hazard?
The article details the creation and live performance of an AI-generated deepfake song but does not report any direct or indirect harm resulting from this use. There is no evidence of injury, rights violations, disruption, or other significant harms. The AI system's use here is for entertainment and demonstration purposes without commercial release or reported negative impact. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context on AI's creative applications and public reactions without introducing new harm.
DJ David Guetta used Eminem in a set and 'people went nuts' -- but A.I. generated the rapper's voice and lyrics and that sparked some thorny questions

2023-02-09
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI tools to create a synthetic vocal performance of Eminem without his involvement, which implicates intellectual property rights and consent issues. While these are serious concerns, the article states that the track was not commercially distributed and does not report any legal action or harm having occurred. The event highlights potential future harms and legal challenges but does not document a realized harm or incident. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of intellectual property rights or other harms if such uses become widespread or commercialized without permission.
DJ David Guetta used Eminem in a set and 'people went nuts' -- but A.I. generated the rapper's voice and lyrics and that sparked some thorny questions

2023-02-09
Fortune
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the voice and lyrics of Eminem, which were used in a public performance. The use of AI-generated content without the artist's consent implicates intellectual property rights and raises concerns about potential harm to artists' rights and reputations. Although the event involves AI use and potential legal and ethical harms, no direct harm or violation has been reported as having occurred. The article focuses on the emerging capabilities of AI in voice synthesis and the unresolved legal ramifications, indicating a credible risk of future harm. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving violations of intellectual property rights or harm to artists' reputations if such AI-generated content is used commercially or without permission.
David Guetta Faked Eminem's Vocals Using AI for New Song

2023-02-10
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI systems used to create a fake vocal performance of Eminem, which was publicly presented without his approval. This use of AI directly led to a violation of intellectual property and personal rights, as the impersonation was done without consent and could cause reputational harm. Although no commercial release occurred, the public performance and benefit to Guetta imply realized harm. The AI system's role is pivotal in generating the fake vocals and lyrics. Hence, this is an AI Incident under the category of violations of human rights and intellectual property rights.
Eminem's Voice on David Guetta's New Song Uses Deepfake AI, Sparks Debate - Listen

2023-02-09
XXL Magazine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake AI voice synthesis) used to generate content that imitates Eminem's voice. The use of this AI system could lead to violations of intellectual property rights, which is a recognized harm under the AI Incident definition. However, since the track is not commercially released and no direct harm or legal action has yet occurred, the event currently represents a plausible risk rather than a realized harm. Therefore, it fits best as an AI Hazard, reflecting the credible potential for harm due to unauthorized AI-generated voice replication.
David Guetta Made A Song With Deepfake Eminem Vocals And Played It At A Show

2023-02-08
Stereogum
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate synthetic vocals mimicking Eminem's voice, which fits the definition of an AI system. However, the article does not describe any actual harm such as injury, rights violations, or disruption caused by this use. The main concern is ethical and potential future misuse, but no direct or indirect harm has materialized. Therefore, this is not an AI Incident. It also does not describe a plausible future harm scenario beyond general ethical concerns, so it is not an AI Hazard. The article primarily provides information about the use of AI in a novel way and the ethical implications, which aligns with Complementary Information.
David Guetta makes potent anti-A.I. case by creating a song with an Eminem deepfake

2023-02-09
The FADER
Why's our monitor labelling this an incident or hazard?
The article describes AI systems generating lyrics and voice deepfakes in the style of Eminem without his consent, which directly implicates violations of intellectual property and personal rights. The AI's role is pivotal as it enabled the creation of unauthorized content that was publicly performed, causing harm to the artist's rights and potentially misleading the audience. Although no physical harm occurred, the infringement on rights and ethical concerns about AI misuse in creative fields meet the criteria for an AI Incident under violations of human rights and intellectual property rights.
David Guetta uses AI to perfectly replicate Eminem's voice for a song | JOE.ie

2023-02-09
JOE.ie
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate voice and lyrics mimicking a real artist, which fits the definition of an AI system. However, there is no evidence of realized harm such as intellectual property violations, injury, or other harms defined under AI Incident. The concerns about future impacts are speculative and not presented as a credible or imminent risk, so it does not meet the threshold for an AI Hazard. The article mainly reports on the use of AI and public reaction, making it Complementary Information as it adds context to the evolving AI ecosystem and societal responses without describing a specific incident or hazard.
David Guetta Sparks Debate After Using Deepfake AI to Put Eminem's Voice on His Song

2023-02-08
92.7 WOBM
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems to create a deepfake voice of Eminem and generate lyrics in his style, which involves AI system use. However, the AI-generated content was not commercially released, and no direct harm or violation has been reported. The main focus is on public reaction and debate about the technology's implications rather than an incident of harm or a credible imminent risk of harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context and societal response to AI's role in music creation.
David Guetta used AI to deepfake Eminem vocals for new song

2023-02-08
Consequence
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create a deepfake vocal performance imitating Eminem, which is an AI system generating content that could infringe on intellectual property rights and mislead audiences. Although no direct harm or legal violation has been reported, the unauthorized deepfake use of a celebrity's voice poses a credible risk of harm, including violation of rights and reputational damage. Since the content was not commercially released and harm is not yet realized, this situation fits the definition of an AI Hazard rather than an AI Incident.
David Guetta recreates Eminem's voice with software and joins the debate over the usefulness of artificial intelligence in music

2023-02-17
Franceinfo
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating a synthetic voice, which is an AI application. However, the event does not describe any injury, rights violation, disruption, or harm caused by this use. The artist explicitly states the use was for experimentation and fun, not commercial exploitation, and the article centers on the discussion of AI's role and ethical implications in music. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information providing context and societal response to AI in music.
DJ David Guetta uses AI to imitate Eminem, the crowd goes wild [Video]

2023-02-15
01net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating text and voice deepfakes imitating Eminem, which were publicly presented without the artist's consent. This unauthorized use of the artist's voice and style infringes on intellectual property rights, a form of harm under the AI Incident criteria. The AI systems' outputs directly led to this violation, fulfilling the condition of an AI Incident. The article also highlights the legal ambiguity but confirms the harm has occurred, not just a potential risk, so it is not merely a hazard or complementary information.
Deepfakes: is it legal to have an AI imitate Eminem without authorization?

2023-02-14
Presse-citron
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake content imitating a real person without authorization, which implicates intellectual property and personality rights. The use of AI-generated lyrics and voice synthesis clearly involves AI system use. Although the deepfake was performed publicly at a concert, no direct harm such as a legal violation or injury has been confirmed. The article discusses the legal uncertainty and potential for future harm, including unauthorized commercial use and promotion without consent, which fits the definition of an AI Hazard (plausible future harm). There is no indication of a realized AI Incident (actual harm or violation) or of complementary information about responses or mitigation. Hence, the classification is AI Hazard.