Morgan Freeman Takes Legal Action Against Unauthorized AI Voice Cloning


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Morgan Freeman has condemned the unauthorized use of AI to clone his iconic voice, calling it theft and a violation of his intellectual property rights. The actor revealed his legal team is actively pursuing multiple cases where AI-generated voices have been used without his consent, resulting in lost work and exploitation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems cloning a human voice and creating synthetic performers, which directly leads to violations of intellectual property rights and labor rights of actors. The unauthorized use of Freeman's voice and the creation of AI actors without consent or compensation are clear harms under the framework's category (c) violations of human rights or breach of obligations protecting intellectual property and labor rights. The ongoing legal actions confirm that harm has materialized. Therefore, this qualifies as an AI Incident.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Morgan Freeman slams AI-generated voices copying his own: 'Don't mimic me with falseness'

2025-11-12
Entertainment Weekly
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems cloning a human voice and creating synthetic performers, which directly leads to violations of intellectual property rights and labor rights of actors. The unauthorized use of Freeman's voice and the creation of AI actors without consent or compensation are clear harms under the framework's category (c) violations of human rights or breach of obligations protecting intellectual property and labor rights. The ongoing legal actions confirm that harm has materialized. Therefore, this qualifies as an AI Incident.

Morgan Freeman breaks silence on the use of AI to replicate his voice

2025-11-10
GEO TV
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the replication of Morgan Freeman's voice is done through AI technology. The harm relates to violation of rights, specifically intellectual property and possibly personality rights, as the AI-generated voice is used without consent. Since the article describes that such unauthorized use has already occurred, this constitutes a realized harm. Therefore, this event qualifies as an AI Incident due to the violation of rights caused by the AI system's use.

Morgan Freeman Blasts AI Voice Mimics: 'You're Robbing Me'

2025-11-11
Mandatory
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI voice cloning technology being used to mimic Morgan Freeman's voice without permission, leading to harm in the form of loss of income and violation of rights. The actor's objection and legal actions indicate that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident due to violations of intellectual property and labor rights caused by the AI system's use.

Morgan Freeman Slams AI's Unauthorized Use Of His Voice: 'You're Robbing Me'

2025-11-12
Black Enterprise
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to replicate Morgan Freeman's voice without authorization, which constitutes a violation of intellectual property rights, a form of harm under the AI Incident definition. The harm is realized as it has cost him valuable work and exploits his voice. The involvement of AI in this unauthorized use and the resulting harm to Freeman's rights justifies classifying this as an AI Incident rather than a hazard or complementary information.

Morgan Freeman on AI Mimicking His Voice: 'You're Robbing Me'

2025-11-11
eWEEK
Why's our monitor labelling this an incident or hazard?
The article centers on the unauthorized AI cloning of Morgan Freeman's voice, which involves an AI system generating voice content without permission, implicating intellectual property and publicity rights. However, it does not describe a concrete AI Incident where harm has been realized beyond the legal and ethical concerns, nor does it describe a plausible future harm event beyond the general risk. Instead, it provides context on the societal and legal responses to AI voice cloning, making it Complementary Information. The focus is on the evolving discourse, legal challenges, and calls for responsible AI use rather than a specific AI Incident or Hazard.

Morgan Freeman slams unauthorised AI clones of his voice: 'You're robbing me'

2025-11-13
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to clone Morgan Freeman's voice without authorization, which directly leads to violations of intellectual property rights and personal rights. The harm is realized as Freeman and his legal team are actively addressing these unauthorized uses, indicating that the AI system's use has already caused harm. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law protecting intellectual property rights.

Morgan Freeman Says His 'Lawyers Have Been Very Busy' Cracking Down on Unauthorized AI Use of His Voice

2025-11-13
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology copying Morgan Freeman's voice without authorization, which directly harms him by robbing him of his likeness and potential earnings. The legal actions taken indicate that the harm is materialized and recognized. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident involving violation of intellectual property and personality rights.

Morgan Freeman Slams Unauthorized AI Voice Use, Says His Lawyers Are "Busy"

2025-11-14
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated imitations of Morgan Freeman's voice being used without authorization, which directly infringes on his rights and causes harm to his professional and personal interests. The involvement of lawyers actively working to remove such unauthorized uses confirms that harm has occurred. The AI system's misuse in generating these voice imitations is central to the incident, fulfilling the criteria for an AI Incident under violations of intellectual property and related rights.

Morgan Freeman Says He Won't Retire and Gets 'Pissed Off' at AI Recreations of His Voice: 'Don't Mimic Me With Falseness... You're Robbing Me'

2025-11-14
AOL
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of AI voice recreations, which are being used without consent, implicating violations of intellectual property and personal rights. However, it does not describe a specific AI Incident where harm has directly or indirectly occurred, nor does it describe a plausible future harm event. Instead, it focuses on the actor's public statements and legal actions addressing the unauthorized AI use. This fits the definition of Complementary Information, as it details societal and legal responses to AI misuse and helps track the evolving ecosystem of AI-related rights and protections.

Morgan Freeman Says He Won't Retire and Gets 'Pissed Off' at AI Recreations of His Voice: 'Don't Mimic Me With Falseness... You're Robbing Me'

2025-11-13
Variety
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems recreating Morgan Freeman's voice without his consent, which is a direct use of AI technology causing harm by violating his rights and potentially causing financial and reputational damage. The involvement of lawyers to remove unauthorized uses indicates that harm has occurred. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of intellectual property and personal rights.

Morgan Freeman Slams AI Software "Stealing" His Voice

2025-11-13
Movieweb
Why's our monitor labelling this an incident or hazard?
The AI system in question is voice-imitation software that generates synthetic voice outputs for use in media and entertainment. The unauthorized use of Freeman's voice constitutes a violation of his intellectual property and labor rights, which is a harm under the framework. The article states that legal cases are already underway, indicating realized harm rather than just potential. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing harm to a person's rights and professional interests.

"You're robbing me": Morgan Freeman takes legal action against AI usage of his voice

2025-11-13
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of sophisticated AI voice generation tools to clone Morgan Freeman's voice for unauthorized commercial use, which constitutes a violation of intellectual property rights. The actor's legal actions indicate that harm has occurred due to the AI system's misuse. The involvement of AI in generating synthetic voices that infringe on rights and livelihoods of real artists is central to the event, meeting the criteria for an AI Incident under violations of intellectual property rights.

Morgan Freeman Is 'A Little PO'ed' About AI Clones: 'You're Robbing Me'

2025-11-13
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voice clones being used without Morgan Freeman's permission, leading to legal disputes. This unauthorized use of AI to replicate a person's voice infringes on their rights and causes harm by appropriating their identity and potential earnings. Since the AI system's use has directly led to a breach of intellectual property and personality rights, this qualifies as an AI Incident under the framework.

Morgan Freeman Shoots Down Retirement As His Legal Team Is "Very Busy" With AI Imitators: "You're Robbing Me"

2025-11-14
Deadline
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI recreations of Morgan Freeman's voice and likeness being used without his permission, which his legal team is actively addressing. This unauthorized use constitutes a violation of his rights and causes harm by appropriating his identity and potential earnings. The AI system's use here directly leads to harm (violation of rights and economic harm), fitting the definition of an AI Incident.

'My lawyers have been very, very busy': Morgan Freeman takes action over AI voice cloning

2025-11-14
NZ Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of AI-generated voice cloning used without consent, which directly leads to violations of intellectual property and labor rights. The legal actions taken by Morgan Freeman's lawyers and the condemnation by the actors' union SAG-AFTRA confirm that harm has occurred through unauthorized AI use. This meets the criteria for an AI Incident because the AI system's use has directly led to a breach of rights protected under applicable law. Although the article does not detail specific damages or outcomes, the unauthorized cloning and use of actors' voices without permission is a clear violation and harm. Hence, the classification as AI Incident is appropriate.

Morgan Freeman taking legal action over AI copycats: "You're robbing me"

2025-11-13
NME
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to replicate Morgan Freeman's voice without his consent, leading to legal action. This unauthorized use of AI-generated voice copies infringes on his intellectual property rights and causes harm by 'robbing' him of earnings and control over his own voice. The involvement of AI in the harm is direct, as the AI system is the tool enabling the unauthorized replication. Hence, this meets the criteria for an AI Incident due to violation of intellectual property rights (a form of harm under category (c)).

Morgan Freeman says his lawyers are "very busy" going after AI voice copies

2025-11-14
The A.V. Club
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice copying technology being used to mimic Morgan Freeman's voice without authorization, which is a direct use of AI systems. The harm involves violation of intellectual property and personality rights, as Freeman states that unauthorized use is equivalent to 'robbing' him. His lawyers are actively pursuing legal remedies, indicating that harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident due to realized violation of rights caused by AI misuse.

Why Morgan Freeman's fight against AI cloning matters for every artist

2025-11-13
IOL
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems capable of cloning voices and creating synthetic actors, which are being used without consent, raising legal and ethical concerns. Although no direct harm incident is reported, the unauthorized use of AI to clone voices constitutes a plausible risk of violating intellectual property and personal rights, which are harms under the AI Incident definition. Since the article focuses on the ongoing issue and potential for harm rather than a specific realized incident, it fits best as an AI Hazard. The discussion of legal efforts and calls for regulation further supports this classification as a hazard rather than an incident or complementary information.

Morgan Freeman Threatens This Action if Folks Keep Cloning His Voice for AI Without Permission

2025-11-13
The Root
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used without Morgan Freeman's consent, which is a clear involvement of an AI system. The unauthorized cloning of his voice constitutes a violation of intellectual property and personal rights, which falls under harm category (c) violations of human rights or breach of obligations protecting intellectual property rights. Although no specific incident of harm is detailed as having occurred, the ongoing unauthorized use and legal research into these cases indicate a plausible risk of harm. Since the harm is not yet concretely realized but is credible and ongoing, this event is best classified as an AI Hazard rather than an AI Incident.

Morgan Freeman slams AI voice replicas

2025-11-13
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology cloning Morgan Freeman's voice without permission, which is a direct use of an AI system leading to harm—specifically, violation of intellectual property and labor rights. The actor's legal representatives are actively pursuing action against these unauthorized uses, indicating that harm has occurred. The AI system's role is pivotal as it enables the replication of the voice without consent, constituting a breach of rights protected under applicable law. Hence, this qualifies as an AI Incident.

Morgan Freeman taking legal action over AI use of his voice: 'My lawyers have been very, very busy'

2025-11-13
Face2Face Africa
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI programs cloning Morgan Freeman's voice without his involvement or consent, which is a clear example of AI system use leading to a violation of intellectual property rights. The legal action taken by Freeman indicates that harm has occurred due to unauthorized AI use. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of rights protected under applicable law.

Morgan Freeman Doesn't 'Appreciate' AI Programs Using His Voice, Teases Legal Action

2025-11-13
ComingSoon.net
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI programs cloning Morgan Freeman's voice without consent, which is a direct violation of his rights and intellectual property. The involvement of AI in creating unauthorized voice replicas is clear, and the harm—unauthorized use and potential financial loss—is realized. The legal actions underway further confirm the recognition of harm. Hence, this event meets the criteria for an AI Incident as it involves the use of AI systems leading to violations of rights.

Morgan Freeman's Reaction to AI Actor Tilly Norwood Isn't Surprising

2025-11-13
ComingSoon.net
Why's our monitor labelling this an incident or hazard?
While the article involves an AI system (the AI actor Tilly Norwood) and discusses concerns about AI replacing human actors, it does not report any realized harm, violation of rights, or disruption caused by the AI actor. The concerns are anticipatory and societal in nature, reflecting potential future issues but no direct or indirect harm has occurred yet. Therefore, this is best classified as Complementary Information, as it provides context and societal response to AI developments rather than reporting an AI Incident or AI Hazard.

If You're Using Morgan Freeman's Voice With AI, You Might Want To Stop

2025-11-13
Global Grind
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voice cloning technology being used without Morgan Freeman's consent, which is a direct use of AI systems. The harm is a violation of rights (intellectual property and personality rights), as Freeman is not compensated and his likeness is exploited. The legal actions and complaints indicate that harm has occurred, not just a potential risk. Hence, this is an AI Incident involving the use of AI systems leading to a breach of rights.

Morgan Freeman slams AI voice replicas

2025-11-13
Femalefirst
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology cloning Morgan Freeman's voice without permission, which is a direct use of an AI system leading to a violation of intellectual property rights. The actor and his legal team are responding to this harm, indicating that the AI system's use has already caused realized harm. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations intended to protect intellectual property rights.

'Robbing me': Morgan Freeman slams AI voice replicas

2025-11-14
Bunbury Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice replication technology being used without consent, which is an AI system's use leading to harm in the form of violation of rights and unauthorized use of a person's voice. Morgan Freeman's legal actions indicate that harm has materialized. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of intellectual property and labor rights, harming the actor's interests.

Why Morgan Freeman's fight against AI cloning matters for every artist

2025-11-13
DFA
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI firms cloning Morgan Freeman's voice and likeness without consent, which is a direct misuse of AI systems leading to harm to the artist's rights and livelihood. This fits the definition of an AI Incident as the AI system's use has directly led to a violation of intellectual property rights and harm to the individual. The involvement of legal action and identification of offenders further supports that harm has occurred and is ongoing.

Morgan Freeman Threatens Legal Action Against AI Voice Cloning

2025-11-13
94.1 The Beat
Why's our monitor labelling this an incident or hazard?
The use of AI to recreate Morgan Freeman's voice without permission constitutes a violation of intellectual property and personal rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law protecting intellectual property rights. Since the unauthorized AI voice cloning has already occurred and is causing harm to the actor's livelihood and rights, this qualifies as an AI Incident. The article describes realized harm due to the AI system's use, not just potential future harm or general commentary, so it is not an AI Hazard or Complementary Information.

Morgan Freeman on unauthorized AI voice cloning: "You're robbing me"

2025-11-13
Cybernews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to clone a celebrity's voice without permission, which directly leads to violations of intellectual property rights and personal rights. The harm is realized as Freeman's lawyers are actively pursuing cases, indicating that unauthorized AI voice cloning has already occurred and caused harm. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of rights and harm to the individual.

Morgan Freeman Battles AI Voice Clones in Legal Standoff

2025-11-14
Bangla news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems cloning Morgan Freeman's voice without authorization, which is a direct violation of his rights and a form of theft. This misuse of AI technology harms the actor's economic interests and artistic reputation, fitting the definition of an AI Incident under violations of intellectual property rights and harm to the individual. The legal efforts to shut down these AI voice clones further confirm the harm is materialized and recognized.

Morgan Freeman calls out AI for mimicking his voice without consent

2025-11-14
The News International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to imitate Morgan Freeman's voice without authorization, which is a direct misuse of AI technology causing harm to the actor's rights and potentially his income. The involvement of legal teams and multiple unauthorized incidents confirms that harm has occurred. The issue also touches on broader concerns about AI-generated content replacing human performers, which is a recognized harm to labor rights and artistic integrity. Hence, this is an AI Incident due to realized harm caused by AI misuse.

'Robbing me': Morgan Freeman frustrated over AI copying him

2025-11-14
NewsBytes
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems (AI-generated avatars and voice synthesis), it primarily focuses on the concerns and frustrations expressed by Morgan Freeman regarding the unauthorized use of his likeness and voice by AI. There is no description of an actual AI Incident causing realized harm such as violation of rights through unauthorized commercial use, nor is there a clear AI Hazard indicating plausible future harm beyond general concerns. The article also mentions a positive use case (Michael Caine partnering with AI to preserve his voice) without harm. Therefore, this is best classified as Complementary Information, providing context on societal and industry responses and concerns related to AI-generated content and actors' rights.

Morgan Freeman furious over AI impersonations

2025-11-14
The Jamaica Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology cloning Morgan Freeman's voice without permission, which is an AI system's use leading to a violation of intellectual property rights. The actor's legal actions indicate that harm has occurred due to unauthorized use of his voice. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of intellectual property rights and personal harm to the actor.

Morgan Freeman fights AI copies of his voice

2025-11-14
Zeitungsverlag Waiblingen
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replicate a person's voice without permission, which constitutes a violation of intellectual property and personality rights. Since the AI-generated voice copies are being used without authorization, this is a breach of rights protected under applicable law. Therefore, this qualifies as an AI Incident due to violation of rights through the use of AI voice synthesis technology.

Morgan Freeman furious over AI misuse of his voice

2025-11-14
WEB.DE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create unauthorized copies of Morgan Freeman's voice, which constitutes an AI system's use leading to harm. The harms include violation of intellectual property rights and economic harm to the actor, as well as harm to his artistic identity. The involvement of AI in generating these voice imitations is clear, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations and harm.

Morgan Freeman threatens legal action over AI use of his voice, says he's 'a little PO'd'

2025-11-14
Fox News
Why's our monitor labelling this an incident or hazard?
Morgan Freeman's voice has been replicated by AI without his consent, leading to unauthorized use and legal disputes. This is a direct harm related to intellectual property rights and personal rights violations caused by AI systems generating his voice. The article reports on actual incidents of AI misuse rather than hypothetical or potential risks, qualifying it as an AI Incident under the framework.

Morgan Freeman fights misuse of his voice

2025-11-14
Yahoo!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated imitations of Morgan Freeman's voice being used without authorization, which directly violates his rights and causes harm to his artistic identity and financial interests. The involvement of AI systems in generating these voice imitations is clear, and the harm is realized, not just potential. Hence, this qualifies as an AI Incident due to violations of rights and harm to the individual and community.

Morgan Freeman Says He's 'a Little PO'd' over AI Replicas of His Voice: 'My Lawyers Have Been Very, Very Busy'

2025-11-14
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to replicate Morgan Freeman's voice without his permission, which is a direct use of AI systems leading to harm in the form of intellectual property and personal rights violations. The involvement of lawyers and the identification of multiple instances confirm that harm has occurred. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of intellectual property rights.

Morgan Freeman fights misuse of his voice

2025-11-14
WEB.DE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate unauthorized imitations of Morgan Freeman's voice, which directly leads to violations of his rights and financial harm. The AI system's misuse is central to the harm described, fulfilling the criteria for an AI Incident. The involvement of legal responses further confirms the materialization of harm rather than a potential risk. Hence, the event is classified as an AI Incident.

Morgan Freeman threatens legal action over AI use of his voice:...

2025-11-14
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions unauthorized AI uses of Morgan Freeman's voice, including AI deepfakes and voice imitation, which infringe on his rights and cause harm. The actor's legal actions indicate that these AI systems have already caused realized harm through misuse. The involvement of AI systems in replicating his voice without consent directly leads to violations of intellectual property and personal rights, fitting the definition of an AI Incident. The event is not merely a potential risk or complementary information but a current harm situation.

Morgan Freeman put a lot of work into his world-famous voice - now AI frequently steals it

2025-11-11
Spiegel Online
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it is used to imitate Morgan Freeman's voice. The use is unauthorized, which constitutes a violation of intellectual property rights and personal rights, fitting the definition of harm under (c) violations of human rights or breach of obligations under applicable law protecting intellectual property rights. Since the harm is occurring (the voice is being stolen and used without permission), this qualifies as an AI Incident rather than a hazard or complementary information.

Actor Morgan Freeman 'a Little PO'd' Over AI Use

2025-11-14
IJR
Why's our monitor labelling this an incident or hazard?
Morgan Freeman's voice being used by AI without consent constitutes a violation of his rights, particularly intellectual property and possibly personality rights. The article mentions ongoing legal cases addressing this misuse. Since the AI system's use has directly led to a rights violation harm, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting intellectual property rights.

People: Morgan Freeman fights AI copies of his voice

2025-11-14
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the voice replication is done using AI technology. The harm stems from the unauthorized use of AI-generated voice copies, which constitutes a violation of intellectual property and personal rights, fitting the definition of harm under violations of human rights or intellectual property rights. Since the article reports ongoing unauthorized uses and legal actions, the harm is realized, making this an AI Incident rather than a potential hazard or complementary information.

Morgan Freeman fights AI copies of his voice

2025-11-14
stern.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate voice replicas of Morgan Freeman without his consent, which directly leads to a violation of his intellectual property and personal rights. This is a clear case of harm caused by the use of AI technology, as Freeman is deprived of compensation and control over his voice. The mention of legal actions and multiple discovered cases confirms that harm has occurred, qualifying this as an AI Incident rather than a hazard or complementary information.

"I'm angry": Morgan Freeman takes legal action against AI misuse of his voice, while two of his colleagues have just sold theirs

2025-11-13
GameStar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems copying Morgan Freeman's voice without authorization, which is a direct misuse of AI technology leading to violations of intellectual property and personal rights. The legal actions taken by Freeman's team confirm that harm has occurred. The presence of AI voice cloning systems is clear, and the harm is realized, not just potential. Hence, this is an AI Incident involving the use and misuse of AI systems causing rights violations.

Morgan Freeman taking legal action over unauthorized AI replicas of his voice

2025-11-15
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to replicate Morgan Freeman's voice without authorization, which directly leads to a violation of his rights and legal action. The AI system's use has caused harm by infringing on intellectual property and labor rights, fulfilling the criteria for an AI Incident. The involvement of AI is clear, and the harm is realized, not just potential, so it is not a hazard or complementary information.

THE DARK KNIGHT Star Morgan Freeman Slams Unauthorized AI Usage Of His Image; Reveals He's Taken Legal Action

2025-11-15
Comic Book Movie
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the event concerns AI cloning technology replicating Freeman's voice and image. The harm arises from unauthorized use, which breaches intellectual property rights and personal rights, impacting Freeman's income and reputation. The actor's legal action confirms the harm has materialized. The event does not merely warn of potential harm but reports actual unauthorized AI use causing harm, fitting the definition of an AI Incident.

Morgan Freeman Takes Stand Against AI Impersonation of His Iconic Voice | Entertainment

2025-11-14
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate unauthorized voice imitations, which infringes on intellectual property and personal rights, thus constituting a violation of rights under applicable law. Since the misuse has already occurred (AI-generated ads using Freeman's voice without permission), this is a realized harm. Therefore, this qualifies as an AI Incident due to violations of rights caused by the AI system's use.

Entertainment News | Morgan Freeman 'pissed Off' at AI Recreations of His Voice | LatestLY

2025-11-14
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to generate unauthorized recreations of Morgan Freeman's voice, which is a clear example of AI involvement. The harm is realized as it infringes on Freeman's rights and causes reputational and financial harm. Therefore, this qualifies as an AI Incident due to violations of intellectual property and personality rights caused by the AI system's use.

Morgan Freeman 'pissed off' at AI recreations of his voice

2025-11-14
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to recreate Morgan Freeman's voice without authorization, which directly violates his rights and causes harm through unauthorized use and potential financial and reputational damage. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of intellectual property and personal rights, which are protected under applicable law.

Morgan Freeman fights back against AI copies of his voice

2025-11-14
Kurier
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice cloning and AI-generated actors) and their use/misuse, which can lead to violations of rights and harm to actors' careers. However, the article focuses on Morgan Freeman's resistance and legal efforts against such uses, without describing a specific incident of harm that has already occurred. Therefore, it is best classified as Complementary Information, as it provides context on societal and legal responses to AI misuse rather than reporting a concrete AI Incident or a plausible future hazard.

"My lawyers are very busy"

2025-11-14
GALA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate unauthorized copies of Morgan Freeman's voice, which directly violates his rights and causes harm to his livelihood and artistic identity. The involvement of AI in creating these voice imitations is clear, and the harm is realized, not just potential. The legal response further confirms the seriousness of the incident. Hence, this event fits the definition of an AI Incident involving violations of intellectual property rights and harm to the individual caused by AI misuse.

Morgan Freeman fights against AI copies of his voice

2025-11-14
Kölner Stadt-Anzeiger
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating unauthorized voice replicas of Morgan Freeman, which directly leads to harm by violating his intellectual property and personal rights, depriving him of rightful compensation and control. The article explicitly mentions ongoing legal actions and multiple cases, indicating that harm has occurred. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident under violations of rights.

Morgan Freeman Speaks Out Against AI Voice Replication: 'You're Robbing Me' | EURweb | Black News, Culture, Entertainment & More

2025-11-14
EURweb
Why's our monitor labelling this an incident or hazard?
The article centers on the unauthorized use of AI voice replication, which raises ethical and legal issues but does not describe a realized harm or incident caused by AI. The involvement of AI is clear, but the event is about objections and concerns rather than an AI Incident or Hazard. It also includes information about AI use in film production as a technological advancement without harm. Therefore, this is best classified as Complementary Information, providing context and updates on societal and governance responses to AI voice replication and image technologies in entertainment.

Morgan Freeman taking legal action over unauthorized AI replicas of his voice

2025-11-15
The Columbian
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replicate Morgan Freeman's voice without authorization, which directly leads to a violation of intellectual property rights. Since the unauthorized use has already occurred and legal action is underway, this is a realized harm rather than a potential one. Therefore, it qualifies as an AI Incident due to the breach of obligations under applicable law protecting intellectual property rights.

Morgan Freeman fights against AI copies of his voice

2025-11-14
Freie Presse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating voice replicas, which is an AI system use. The misuse of AI to replicate a person's voice without consent can lead to violations of intellectual property and personal rights, which are harms under the framework. However, since the article only discusses ongoing legal and union efforts to combat this and does not describe any specific incident of harm that has already occurred, it fits the category of a plausible risk or ongoing misuse that could lead to harm. Therefore, this is best classified as Complementary Information, as it provides context on societal and legal responses to AI misuse rather than describing a concrete AI Incident or a purely potential hazard.

Morgan Freeman Threatens Legal Action Against AI Voice Cloning

2025-11-14
Atlanta Daily World
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated versions of Morgan Freeman's voice being used without his consent, which directly violates his intellectual property and personal rights. The AI system's use here has led to harm by unauthorized exploitation of his voice, impacting his livelihood and legal rights. This fits the definition of an AI Incident as the AI system's use has directly led to a breach of obligations under applicable law protecting intellectual property rights.

Morgan Freeman fights against voice theft by artificial intelligence

2025-11-14
LZ online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate synthetic voices without the consent of the individual, leading to violations of rights related to voice ownership and likeness. This unauthorized AI-generated voice use directly harms the actor's rights and livelihood, fitting the definition of an AI Incident. The article reports that such cases have been discovered and pursued, confirming realized harm rather than just potential risk.

Morgan Freeman and the challenges of AI in Hollywood

2025-11-11
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies being used to imitate Morgan Freeman's voice without authorization, which directly leads to harm in the form of lost income and violation of rights. The involvement of AI in replicating his voice is clear, and the harm is realized as legal cases are underway. This fits the definition of an AI Incident due to violation of intellectual property rights and harm to the individual caused by AI misuse.

Morgan Freeman's voice: a masterpiece of artificial intelligence

2025-11-10
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that imitate Morgan Freeman's voice without permission, directly leading to legal and personal rights violations. The harm is realized as Freeman's voice is used without consent, and legal actions are underway. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of intellectual property and personal rights, which are protected under applicable law.

Morgan Freeman fumes over AI copies of his voice and warns imitators to stop: 'You're robbing me'

2025-11-16
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating voice copies and digital performers without consent, directly leading to violations of intellectual property and labor rights, which are recognized harms under the AI Incident definition. The article highlights actual use and legal actions underway, indicating harm has occurred rather than just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Morgan Freeman Calls Out AI Stealing His Voice: 'Don't Mimic Me With Falseness'

2025-11-16
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate synthetic voices mimicking a real actor without authorization, which constitutes a violation of intellectual property and labor rights. The unauthorized use of AI-generated voices has already led to legal interventions, indicating realized harm. Therefore, this event qualifies as an AI Incident due to violations of rights and harm to the actor's interests caused by AI misuse.

Morgan Freeman taking legal action over unauthorized AI replicas of his voice

2025-11-15
Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems replicating Morgan Freeman's voice without authorization, which is a clear example of AI system use leading to a violation of intellectual property rights. This harm has already occurred, as indicated by the legal actions underway. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting intellectual property rights.

Don't mimic me: Morgan Freeman criticizes AI voice replication

2025-11-15
United News of India (uniindia.com)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate unauthorized voice replicas of Morgan Freeman, which is a direct misuse of AI technology. This misuse has led to violations of intellectual property and personal rights, which are recognized harms under the AI Incident definition (c). Since the harm is occurring and legal actions are underway to address it, this qualifies as an AI Incident rather than a hazard or complementary information.

Michael Caine needs to listen to Morgan Freeman and say no to AI

2025-11-15
Far Out Magazine
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice cloning AI) and their use, but the article primarily discusses the potential harms and ethical concerns rather than describing a concrete incident where harm has occurred. There is no direct or indirect evidence of injury, rights violations, or other harms materializing from the AI voice cloning in this context. The concerns about misleading content and reputation damage are plausible future harms but are not reported as having happened yet. Therefore, this qualifies as an AI Hazard because the development and use of AI voice cloning technology could plausibly lead to harms such as misinformation, identity misuse, or violation of rights, but no specific incident of harm is described.

Morgan Freeman slams the rising use of AI voice clones: 'Don't mimic me'

2025-11-16
The Telegraph
Why's our monitor labelling this an incident or hazard?
The article describes AI systems replicating Morgan Freeman's voice without consent, which directly relates to violations of intellectual property and labor rights. The unauthorized use of AI voice clones deprives actors of compensation and undermines their livelihoods, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting labor and intellectual property rights. The involvement of legal action and union statements further supports that harm has occurred. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Morgan Freeman Condemns AI Voice Theft as Digital Robbery of Actors' Livelihoods

2025-11-16
Bangla news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the unauthorized use of AI voice cloning technology to replicate Morgan Freeman's voice without permission, which the actor and SAG-AFTRA identify as theft and a violation of rights. This constitutes a breach of intellectual property and labor rights, fitting the definition of harm under AI Incident (c). The involvement of AI in cloning the voice and the resulting harm to the actor's livelihood is direct and ongoing, not merely a potential or future risk. Hence, the event is best classified as an AI Incident.

Morgan Freeman Warns Michael Caine Against Allowing AI to Clone His Voice

2025-11-16
BGNES: Breaking News, Latest News and Videos
Why's our monitor labelling this an incident or hazard?
The article centers on the development and use of AI voice cloning technology and the associated ethical and legal concerns. While it discusses potential harms such as unauthorized use and damage to artists' legacies, it does not report a concrete event where AI voice cloning has directly or indirectly caused harm. The involvement of AI is clear, and the potential for harm is significant, but the current situation is more about ongoing risks and responses rather than a realized incident. Therefore, this qualifies as Complementary Information, providing context and updates on societal and legal responses to AI voice cloning technology.

Morgan Freeman furious at being imitated with AI, prepares legal action because he feels robbed

2025-11-14
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to imitate Morgan Freeman's voice and likeness without permission, which constitutes a violation of intellectual property and personal rights. This harm has already occurred, as indicated by Freeman's legal actions and statements. The AI system's use directly leads to this harm, fulfilling the criteria for an AI Incident under violations of human rights or intellectual property rights.

Morgan Freeman accuses AI of cloning his distinctive voice

2025-11-11
Antara News Kalteng
Why's our monitor labelling this an incident or hazard?
Morgan Freeman explicitly accuses AI technology of cloning his voice without permission, which is a direct use of an AI system (voice cloning AI). The unauthorized replication of his voice constitutes a violation of intellectual property rights, a recognized harm under the AI Incident framework. The harm is realized, as it affects his income and rights, and legal actions are being pursued. The presence and use of AI in cloning the voice is clear, and the harm is directly linked to the AI system's use. Hence, this is classified as an AI Incident.

Actor Morgan Freeman accuses AI of robbing him through voice cloning

2025-11-11
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used for voice cloning, which is an AI application that generates outputs (replicated voice) influencing virtual environments (media, films, etc.). The unauthorized use of Freeman's voice without permission is a breach of intellectual property rights, a form of harm covered under AI Incidents. The harm is realized, not just potential, as Freeman's voice is being cloned and used without authorization, impacting his rights and income. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing a violation of intellectual property rights.

Morgan Freeman annoyed that his voice was stolen by AI, calls it a form of digital robbery

2025-11-11
Antara News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone Morgan Freeman's voice without permission, which is a direct violation of intellectual property rights, a form of harm under the AI Incident definition (c). The harm is realized as it affects Freeman's identity and income, and legal steps are being taken. The AI system's use in voice cloning is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Denying retirement, Morgan Freeman annoyed by AI videos imitating him

2025-11-14
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
Morgan Freeman's statements highlight the unauthorized use of AI to create synthetic videos of him, which implicates AI systems in potential intellectual property and personal rights violations. However, the article focuses on his reaction and the legal efforts underway rather than describing a specific incident where harm has already occurred or a hazard that is imminent. The event thus fits the definition of Complementary Information, as it details responses and concerns about AI misuse without reporting a direct or plausible harm event.

Morgan Freeman's lawyers pursue AI fakes of his voice

2025-11-14
N-tv
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voice fakes of Morgan Freeman's distinctive voice being used without permission, which harms his professional livelihood and violates his rights. The legal pursuit by his lawyers confirms the harm has occurred. The use of AI to create these voice replicas is central to the harm, fulfilling the criteria for an AI Incident involving violation of intellectual property and labor rights. The mention of union concerns further supports the recognition of harm to actors' rights and jobs due to AI misuse.

When your own voice is stolen: Hollywood legend takes up the fight against AI

2025-11-15
Buffed
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI companies using Morgan Freeman's voice without authorization, which is a direct misuse of an AI system (voice cloning) leading to harm in the form of intellectual property and personal rights violations. The involvement of legal action confirms the recognition of harm. The AI system's use here is not hypothetical or potential but has already occurred, causing realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

AI voice scandal: Morgan Freeman fights against misuse of his voice

2025-11-14
stuttgarter-nachrichten.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voice imitations of Morgan Freeman being used without authorization, which constitutes a violation of intellectual property rights and artistic integrity. The involvement of AI in generating these voice imitations is clear, and the harm is realized as it affects Freeman's financial interests and control over his artistic identity. The legal response further confirms the recognition of harm. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's misuse.

Morgan Freeman fights back against AI: "You're robbing me" | Tiroler Tageszeitung Online

2025-11-14
Tiroler Tageszeitung Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replicate a person's voice and create AI-generated actors, which directly relates to violations of intellectual property and personal rights. Since the article describes actual unauthorized use and legal disputes already occurring, this constitutes a violation of rights caused by AI use, fitting the definition of an AI Incident.

Morgan Freeman fights against AI forgeries of his voice

2025-11-15
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic voices that imitate Morgan Freeman without consent, which constitutes a violation of personality and intellectual property rights, harming the affected individuals and potentially the broader acting community. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to professional livelihoods. The article focuses on the harm caused by unauthorized AI voice cloning and the legal and union responses, indicating realized harm rather than just potential risk or general commentary.

Morgan Freeman fights against AI replicas of his voice

2025-11-16
Deutschlandfunk Kultur
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it is used to generate voice replicas. The event concerns the use of AI-generated voice without consent, which is a violation of intellectual property and personal rights, fitting the definition of harm under (c) violations of human rights or breach of obligations protecting intellectual property rights. Since the harm (unauthorized use and lack of compensation) is occurring, this qualifies as an AI Incident rather than a hazard or complementary information.

Morgan Freeman angry over misuse of AI replicas of his voice: "Stop robbing me"

2025-11-15
Avaz.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to generate Morgan Freeman's voice without authorization, which is a direct misuse of an AI system. This misuse has led to harm in the form of violation of intellectual property and labor rights, as the actor is not compensated and his voice is exploited without consent. The involvement of lawyers and the actor's public complaint confirm the harm has materialized. Hence, this qualifies as an AI Incident under the framework's definition of violations of rights caused by AI misuse.

Morgan Freeman angry over misuse of AI replicas of his voice: "Stop robbing me"

2025-11-15
Klix.ba
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as AI technology is used to replicate Morgan Freeman's voice without authorization. This unauthorized use constitutes a violation of intellectual property rights and personal rights, which falls under harm category (c) in the framework. The harm is realized as the actor is being 'plundered' financially and personally without consent, and legal actions are already underway. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Morgan Freeman furious at AI: Stop robbing me

2025-11-15
Cafe del Montenegro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replicate Morgan Freeman's voice without permission, which is a direct violation of his rights and constitutes harm to his intellectual property and labor rights. The involvement of AI in unauthorized voice replication and the resulting harm to the actor's rights meets the definition of an AI Incident. The event is not merely a potential risk but involves actual unauthorized use and harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Morgan Freeman angry over artificial intelligence

2025-11-16
Glas javnosti
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it is used to recreate Morgan Freeman's voice without authorization, leading to a violation of his rights and potential intellectual property infringement. This unauthorized use of AI-generated voice constitutes harm to the individual (violation of rights) and is ongoing. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to a breach of rights and harm to the actor. The mention of the actors' union opposition and concerns about AI-generated actors further contextualizes the harm and resistance but does not change the classification.

Morgan Freeman angry over misuse of AI replicas of his voice: "Stop robbing me"

2025-11-16
Info-ks.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to replicate Morgan Freeman's voice without authorization, which is a direct violation of his rights and constitutes harm. The AI system's development and use in this context have led to realized harm (violation of intellectual property rights and personal rights). The actor's legal team is already involved, indicating the seriousness of the incident. Therefore, this event qualifies as an AI Incident due to the direct harm caused by unauthorized AI use.

Morgan Freeman enraged, threatens lawsuit: "You cannot replace me"

2025-11-19
B92
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replicate Morgan Freeman's voice without permission, which is a direct violation of his rights and constitutes harm. The actor's threat of legal action further confirms the recognition of this harm. The AI system's use here has directly led to a breach of intellectual property and personal rights, fitting the definition of an AI Incident.

Morgan Freeman denounces artificial intelligence imitations and demands a halt to voice cloning

2025-11-15
infobae
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone Morgan Freeman's voice without authorization, which constitutes a violation of intellectual property and personal rights. The actor's legal team is already intervening, indicating that harm has occurred or is ongoing. The AI system's use directly leads to harm in terms of rights violations and labor concerns. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and potential labor market harm).

Morgan Freeman, 88, actor, criticizes AI-generated voices: "I'm a little upset, you know? I'm like any other actor: don't imitate me. I get paid to do that work; if they do it without me, they're robbing me"

2025-11-17
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice cloning technology) to replicate Morgan Freeman's voice without authorization, which directly leads to a violation of intellectual property and labor rights. The article states that legal actions are underway due to this unauthorized use, confirming that harm has occurred. The AI system's use in this context is not hypothetical but has materialized in unauthorized voice replication, causing harm to the actor's professional and personal rights. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Morgan Freeman prepares lawsuit over AI-created deepfakes of his voice

2025-11-16
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake voices of Morgan Freeman being used without his permission, which he and his legal team are actively addressing through lawsuits. This unauthorized use of AI technology infringes on his rights and causes harm by potentially depriving him of income and work opportunities. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident involving violations of rights and harm to labor interests.

"You're robbing me": Morgan Freeman (88) lashes out at AI imitations of his voice

2025-11-15
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to synthesize a recognizable human voice without authorization, which directly leads to a violation of intellectual property and labor rights. This constitutes harm to the individual (Morgan Freeman) and the acting community, fulfilling the criteria for an AI Incident under violations of human rights and intellectual property rights. The article details ongoing harm and legal actions, not just potential risks, so it is classified as an AI Incident rather than a hazard or complementary information.

Morgan Freeman claims artificial intelligence software is "stealing" his voice - Es de Latino News

2025-11-13
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice synthesis technology to replicate Morgan Freeman's voice without his consent, which is a direct use of an AI system. The harm is a violation of intellectual property and labor rights, as the AI-generated voice could replace the actor's work and income. The article states that Freeman's lawyers are actively dealing with multiple potential legal cases, implying that the harm has materialized or is ongoing. Therefore, this meets the criteria for an AI Incident, as the AI system's use has directly or indirectly led to a breach of rights and potential economic harm to the actor.

Morgan Freeman denounces unauthorized AI voice use and takes legal action - Es de Latino News

2025-11-14
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate Morgan Freeman's voice without authorization, which is a clear example of AI system use leading to harm through violation of intellectual property and personal rights. The legal actions and complaints indicate that harm has already occurred. The involvement of AI in generating synthetic voice content is central to the incident. Hence, this is an AI Incident due to realized harm caused by unauthorized AI-generated voice use.