Bollywood Celebrities Take Legal Action Against AI-Generated Deepfakes and Image Misuse

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Several Bollywood celebrities, including Aishwarya Rai Bachchan, Abhishek Bachchan, Karan Johar, Anil Kapoor, and Jackie Shroff, have approached the Delhi High Court to combat unauthorized AI-generated deepfakes, voice cloning, and misuse of their likenesses. The court recognized these as violations of personality rights, privacy, and dignity, ordering takedowns and granting legal protections.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (AI image generators) used to create deepfakes and unauthorized images, which have directly led to violations of personality rights and legal disputes. This constitutes a violation of human rights and intellectual property rights, fitting the definition of an AI Incident. The court ruling confirms that harm has materialized and is recognized legally.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Human or fundamental rights, Reputational, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

5 Risks of Uploading Your Photos to AI Image Generators | Herzindagi

2025-09-16
HerZindagi
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI image generators) used to create deepfakes and unauthorized images, which have directly led to violations of personality rights and legal disputes. This constitutes a violation of human rights and intellectual property rights, fitting the definition of an AI Incident. The court ruling confirms that harm has materialized and is recognized legally.

Why Aishwarya, Karan Johar Are Rushing To Court

2025-09-16
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The article explicitly links AI deepfake technology to the unauthorized use and exploitation of celebrity personas, which constitutes a violation of personality rights—a form of human rights and intellectual property rights violation. The harms are realized as celebrities have sought and obtained court injunctions to stop ongoing misuse. The AI systems' outputs (deepfakes, voice clones) are directly causing harm to individuals' rights and commercial interests. The legal actions and court rulings confirm that these harms are materializing, not just potential. Hence, this is an AI Incident under the framework.

Why personality rights need legislative protection

2025-09-15
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation and AI chatbots) used without consent to create unauthorized and harmful content that violates personality rights, including privacy and dignity, and causes commercial harm. These harms have materialized, as evidenced by the court granting interim relief. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of human rights and harm to the individual's dignity and property rights. The article focuses on the actual misuse and harm caused by AI-generated content, not on potential future risks, and is therefore neither complementary information nor unrelated news.

'Jhakaas' to 'Beedu' - B-town stars rush to protect personality rights amid AI trends - The Tribune

2025-09-17
The Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technologies like deepfakes and voice cloning being used to create unauthorized content that harms celebrities' personality rights, causing financial and reputational damage. The harm is direct and ongoing, with courts recognizing the violation of rights and dignity. The AI systems' misuse has directly led to these harms, fulfilling the criteria for an AI Incident under violations of rights and harm to individuals. The legal responses are reactions to these incidents rather than the main focus, so this is not merely complementary information.

Bollywood stars knock on Delhi HC doors to protect personality rights amid deepfakes and misuse of likeness - CNBC TV18

2025-09-15
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and images that have been used without consent, causing reputational damage and misleading the public. The court ruling recognizes this as a violation of personality rights, which are linked to privacy and intellectual property rights. The harm is realized and legally acknowledged, with direct involvement of AI systems in generating the harmful content. Hence, this is an AI Incident due to direct harm caused by AI misuse.

Amid deepfake disaster, Bollywood draws the legal sword - The Statesman

2025-09-16
The Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and voice cloning being used without consent, causing harm to celebrities' personality rights, which are legally recognized as extensions of privacy and property rights. The misuse of AI technology here has directly led to harm (reputational, financial, and personal distress) to the individuals involved. The legal cases cited demonstrate that these harms have materialized and are being addressed through the courts. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to individuals.

Karan Johar Seeks Protection of Personality Rights in Court

2025-09-16
MediaNama
Why's our monitor labelling this an incident or hazard?
The article details realized harms caused by the unauthorized use of AI-generated content and AI tools to impersonate celebrities, create misleading profiles, and produce inappropriate videos, which infringe on personality rights and cause reputational damage. These harms fall under violations of human rights and breaches of legal protections for personality and publicity rights. The AI systems' use and misuse are central to the incidents, making this an AI Incident rather than a hazard or complementary information. The legal responses are part of the broader context but do not change the classification of the core event as an incident.

Delhi High Court restrains misuse of Karan Johar's persona through AI, deepfakes

2025-09-19
The Hindu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfakes, face morphing, AI-generated content) being used to create unauthorized and potentially harmful content exploiting Karan Johar's persona. The misuse has led to violations of personality and publicity rights, which are a form of human rights violation under the framework. The court's restraining order is a response to realized harm caused by AI misuse. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm (violation of rights).

India News | Delhi High Court Protects Karan Johar's Personality Rights, Flags Concerns over Misuse of Technology | LatestLY

2025-09-19
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-driven tools such as deepfakes and face-morphing as technologies that could be misused to infringe on personality rights. The court's injunction aims to prevent such misuse, indicating awareness of AI's potential for harm. However, the article does not describe an actual AI incident causing harm but rather a legal action to prevent or mitigate such harm. This fits the definition of Complementary Information, as it provides context on governance and societal responses to AI-related risks without reporting a new AI Incident or Hazard.

Aishwarya Rai: Bollywood stars fight for personality rights amid deepfake surge

2025-09-24
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content as part of the misuse of celebrity identities, indicating AI system involvement in creating harmful fake content. The harms involved include violations of personality rights and potential reputational damage, which fall under violations of rights. However, the article does not report a specific incident of harm caused by AI-generated content but rather discusses ongoing legal efforts and court rulings addressing these issues. This aligns with the definition of Complementary Information, which covers societal and governance responses to AI-related harms rather than new incidents or hazards themselves.

Bollywood stars fight for personality rights amid deepfake surge

2025-09-23
Yahoo
Why's our monitor labelling this an incident or hazard?
The article centers on the misuse of AI-generated content (deepfakes) involving celebrities' images and identities, which constitutes a violation of personality rights—a form of harm to individuals' rights. Because this misuse is occurring and has prompted legal actions and court rulings, the harms are realized rather than merely potential: the AI system's use in generating fake images and content directly violates rights, fulfilling the criteria for an AI Incident. The article reports realized harms and the legal responses to them, not potential future harm or general AI developments.

Aishwarya Rai leads Bollywood fight against deepfakes in 'personality rights' lawsuit

2025-09-24
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology used to create unauthorized and harmful content involving celebrities. The harms are realized and ongoing, including violations of personality rights (a form of intellectual property and personal rights), commercial exploitation without consent, and reputational harm. The legal actions and court rulings confirm the recognition of these harms. Hence, this qualifies as an AI Incident because the development and use of AI systems have directly led to violations of rights and harm to individuals.

Aishwarya Rai leads Bollywood fight against deepfakes in personality rights lawsuit

2025-09-24
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology to create doctored videos and images that harm celebrities by violating their personality rights and causing reputational damage. This constitutes a violation of fundamental rights and commercial exploitation, which fits the definition of an AI Incident. The involvement of AI systems in generating deepfakes that have already caused harm to individuals' rights and reputations is clear and direct. The legal actions and court rulings are responses to these harms, not the primary event, so the classification is AI Incident rather than Complementary Information.

What are personality rights and how have courts shielded Indian celebrities | Explained

2025-09-24
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content (deepfakes, voice cloning) being used without consent to commercially exploit celebrities' images, voices, and likenesses, harming their personality rights and dignity. The courts have issued orders restraining such misuse, confirming that harm has occurred and is ongoing. Because AI systems generated the unauthorized content that caused this harm, the event fits the definition of an AI Incident. The article focuses on actual harm and legal remedies rather than potential or future harm, so it is not an AI Hazard; nor is it merely complementary information, since its core subject is the misuse of AI-generated content and the courts' protective rulings. Hence, the classification is AI Incident.

What are personality rights and how are courts shielding Indian celebrities

2025-09-25
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content including deepfakes, voice cloning, and AI chatbots that misuse celebrities' images and voices without consent. This unauthorized use constitutes a violation of personality rights, which are recognized under Indian law and protected by courts. The harm is realized as the celebrities' autonomy, dignity, and commercial interests are compromised. The courts' interventions to restrain such misuse confirm that these are incidents where AI systems have directly led to harm. Hence, the event qualifies as an AI Incident under the framework.

Bollywood stars fight for personality rights amid deepfake surge

2025-09-24
Sri Lanka Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake content being used without consent, causing harm to celebrities' personality rights, which are legally recognized as protecting identity and image. This misuse of AI technology has directly led to violations of rights and reputational harm, fitting the definition of an AI Incident. The legal actions and court rulings confirm that harm has occurred due to AI system use. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

BOLLYWOOD STARS BATTLE AMID SURGE IN DEEPFAKES - herald

2025-09-24
heraldonline.co.zw
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content (deepfakes) being used without consent to exploit celebrities' identities, which constitutes a violation of their personality rights, a form of human rights violation. The misuse of AI-generated images and content has directly led to harm by infringing on these rights and causing reputational and commercial damage. Therefore, this qualifies as an AI Incident because the development and use of AI systems for generating fake content have directly led to harm to individuals' rights and reputations.

Nagarjuna gets Delhi High Court protection from AI misuse, fake content

2025-09-25
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being circulated and monetized without authorization, a misuse of an AI system's outputs that violates personality and publicity rights. Because this misuse has already occurred and harmed the individual, it qualifies as an AI Incident: the system's outputs have directly led to violations of rights. The court's protection is a response to this realized harm, not just a potential future risk.

Nagarjuna gets protection from Delhi HC over misuse of his personality rights

2025-09-25
mid-day
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos misusing Nagarjuna's likeness, which is a direct involvement of AI systems in creating harmful content. The harm includes violation of personality rights and unauthorized use of identity, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the misuse has already happened and legal action is underway, this qualifies as an AI Incident rather than a hazard or complementary information.

Nagarjuna Approaches Delhi High Court Over Unauthorized Use of AI Technology

2025-09-25
Andhrawatch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create objectionable content and commercial exploitation of Nagarjuna's images and videos without consent, leading to harm to his personal rights and reputation. This fits the definition of an AI Incident as the AI system's use has directly led to violations of rights and harm to the individual. The legal petition and court involvement further confirm the recognition of harm caused by AI misuse.

Delhi High Court Upholds Nagarjuna's Personality Rights, Actor Expresses Gratitude

2025-09-27
IndiaGlitz.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated and morphed visuals of the actor being circulated without authorization, which directly harms his personality rights and reputation. The AI system's misuse in generating misleading content has led to realized harm (violation of rights). The court ruling and the actor's response confirm the harm has occurred and is being addressed. Hence, this is an AI Incident due to the direct harm caused by AI-generated content infringing on personality rights.

Delhi High Court protects Nagarjuna personality rights, actor expresses gratitude

2025-09-26
WION
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated material misusing Nagarjuna's likeness, which constitutes a violation of personality rights and intellectual property rights. The misuse of AI to create unauthorized content that harms the actor's reputation and identity is a direct harm linked to the use of AI systems. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to harm through unauthorized AI-generated content. The court's intervention to restrain such misuse confirms the materialization of harm rather than a mere potential risk.

Nagarjuna thanks and appreciates Delhi High Court after personality rights secured

2025-09-26
KalingaTV
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content misusing Nagarjuna's likeness, which constitutes a violation of his personality rights, a form of harm under the framework. The court's intervention is a response to an AI Incident where the AI system's outputs (AI-generated content) have directly led to harm (violation of rights). Therefore, this qualifies as an AI Incident. The article focuses on the harm caused by AI-generated misuse and the legal protection granted, not merely on general AI developments or future risks, so it is not Complementary Information or an AI Hazard.