Assam Influencer Defamed by AI-Generated Obscene Content; Ex-Boyfriend Arrested

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pratim Bora, ex-boyfriend of Assam influencer Archita Phukan, was arrested for using AI platforms to create and circulate fake, explicit images and videos of her online. The AI-generated content falsely linked Phukan to the adult industry, causing reputational harm and public outrage before police intervention exposed the fabrication.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI-generated images to create fake obscene content, which was deliberately circulated to harass and defame Archita Phukan. This constitutes a violation of privacy and defamation, falling under violations of human rights and applicable law. The AI system's use directly led to harm to the individual, making this an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Robustness & digital security; Accountability; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Archita Phukan Dragged Into AI Scandal: Ex-Partner Accused Of Creating Fake Profile

2025-07-13
Oneindia
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-generated images to create fake obscene content, which was deliberately circulated to harass and defame Archita Phukan. This constitutes a violation of privacy and defamation, falling under violations of human rights and applicable law. The AI system's use directly led to harm to the individual, making this an AI Incident.
Tinsukia techie held for creating, circulating morphed pics using AI | Guwahati News - Times of India

2025-07-13
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to generate doctored pornographic images of a woman, which were then circulated maliciously. This use of AI directly led to harm in the form of cyber harassment, defamation, and invasion of privacy, all of which are violations of human rights and personal dignity. The harm is realized and ongoing, and the AI system's role is pivotal in creating the fabricated content. Therefore, this qualifies as an AI Incident under the framework.
Video | Man Arrested In Assam For Circulating Deepfake Videos Of His Ex-Girlfriend

2025-07-13
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (advanced AI tools for deepfake creation) to produce manipulated explicit content that harms the individual's rights and dignity. This constitutes a violation of human rights and privacy, fulfilling the criteria for an AI Incident due to the direct harm caused by the AI-generated content.
'Adult content' made her an overnight sensation. Now, a police arrest reveals how Assam woman was victim of AI used by ex-collegemate

2025-07-14
The Indian Express
Why's our monitor labelling this an incident or hazard?
The accused used AI tools (OpenArt and Midjourney) to create manipulated content that harmed the victim by spreading false and obscene material, leading to harassment and defamation. The AI system's use was central to the harm caused, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in the incident.
Assam man arrested for creating fake profile, AI-generated images of influencer Archita Phukan

2025-07-13
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images (AI-generated content) which were then used maliciously to harass and defame an individual. This caused harm to the person's reputation and privacy, which constitutes a violation of rights. The AI system's use directly led to this harm, qualifying the event as an AI Incident.
Assam Man Arrested For Creating Fake Profile, AI-Generated Pics Of Influencer Archita Phukan

2025-07-13
News18
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate morphed images of the influencer, which were then posted on a fake profile to cause reputational harm and harassment. The AI-generated content directly contributed to violations of the influencer's rights and caused harm to her reputation and community standing. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the AI-generated images and their malicious use.
Assam Man Arrested After Sharing AI-Morphed Images Of Ex-Girlfriend

2025-07-14
Oneindia
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated image manipulation to create and distribute defamatory content, causing harm to the victim's reputation and privacy. This constitutes a violation of rights and cyber harassment, which are harms under the AI Incident definition. The AI system's use is central to the harm, as the images were AI-morphed and distributed maliciously. Therefore, this qualifies as an AI Incident.
Assam man circulates AI-morphed images of ex-girlfriend, arrested

2025-07-13
India Today
Why's our monitor labelling this an incident or hazard?
The use of advanced AI tools to create and distribute morphed explicit images directly led to reputational harm and emotional trauma to the victim, which qualifies as a violation of human rights and personal dignity. The AI system's use here is malicious and has caused realized harm, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by AI misuse.
Assam: Man arrested for cyber defamation after using AI to create fake adult content

2025-07-13
NORTHEAST NOW
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-powered image generation platforms were used to create morphed and explicit images falsely portraying the victim, which were then circulated to defame her. This misuse of AI directly led to harm to the individual's reputation and emotional well-being, fitting the definition of an AI Incident due to violation of rights and harm to a person. The involvement of AI in the creation and spread of harmful content is clear and central to the incident.
Assam: Police arrests Archita Phukan's ex-boyfriend for creating her viral morphed, AI-generated images

2025-07-13
OpIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate morphed images and videos that were used maliciously to harass and defame Archita Phukan. The harm includes violation of privacy, defamation, and emotional distress, which fall under violations of human rights and harm to communities. The AI system's use was central to the creation and dissemination of harmful content, fulfilling the criteria for an AI Incident. The arrest and legal proceedings confirm the harm has materialized, not just a potential risk.
Babydoll Archi AKA Archita Phukan TARGETED? Assam influencer's AI-morphed photos lead to the arrest of...

2025-07-14
BollywoodLife
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-morphed photos (AI system involvement inferred from advanced photo editing software used to superimpose faces) to create and spread false explicit content, which constitutes a violation of rights and harm to the individual and community. The harm has already occurred, and the AI system's role is pivotal in causing this harm. Therefore, this qualifies as an AI Incident under the framework.
Archita Phukan's Viral S*x Video Was AI-Generated; Ex-Boyfriend Arrested For Making Assam Woman's Porn Content To Seek Revenge

2025-07-14
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly states that AI software was used to generate fake pornographic videos and images of Archita Phukan, which were then circulated to defame and harass her. This use of AI directly led to harm to the victim's reputation and privacy, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal protections. The involvement of AI in creating harmful content and the resulting real harm to the victim's well-being and rights clearly classifies this as an AI Incident rather than a hazard or complementary information.
One arrested in Assam for defaming influencer by creating fake social media account

2025-07-13
UNI India
Why's our monitor labelling this an incident or hazard?
The use of AI platforms to create and disseminate fake, obscene images directly led to harm to the influencer's reputation and caused harassment, which qualifies as harm to the individual (a form of harm to a person). The AI system's role in generating the morphed photos is pivotal to the incident. Therefore, this event meets the criteria of an AI Incident due to realized harm caused by the AI-generated content.
Dibrugarh man arrested for circulating AI-generated obscene images of former partner

2025-07-13
The Assam Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-powered platforms to generate fake obscene images, which were then circulated maliciously, causing harm to the victim's reputation and emotional health. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the individual. The police action and legal provisions invoked further confirm the recognition of harm caused. Therefore, this is classified as an AI Incident.
AI Revenge Porn Shocker: Dibrugarh Police Arrest Man For Morphing Ex-Classmate's Photos

2025-07-14
News18
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-based image generation and morphing tools) to create fake pornographic content, which was then distributed publicly and monetized. This caused direct harm to the victim's personal and social well-being, constituting a violation of rights and harm to the community. The AI system's development and use were central to the harm, meeting the definition of an AI Incident rather than a hazard or complementary information.
Ex-lover turns revenge into porn profit, morphs Assam girl into Babydoll Archi

2025-07-14
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI tools (Midjourney AI, Desire AI, OpenArt AI) to generate fake pornographic images and videos of a real person, which were then distributed widely, causing harm to the victim's privacy and reputation. This constitutes a violation of human rights and personal dignity, fitting the definition of an AI Incident. The harm is direct and realized, as evidenced by the police complaint, investigation, and arrest. Therefore, this event is classified as an AI Incident.
Archita Phukan Fake Instagram Account, AI Deepfakes: Ex-Boyfriend Arrested in Assam Cybercrime Case

2025-07-14
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake images by an individual using AI tools to harass and defame a person, causing real harm including mental health impacts. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident involving violations of rights and harm to a person. The involvement of AI in generating fake images and the resulting harassment and defamation constitute direct harm caused by AI misuse.
Archita Phukan aka Babydoll Archi's shocking truth: Ex-boyfriend's chilling revenge made her an adult star due to...

2025-07-15
India.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI tools to morph photos and create a fake explicit persona, which directly led to harm in the form of reputational damage and violation of Archita Phukan's rights. The AI system's misuse by the ex-boyfriend caused a clear violation of personal and possibly labor rights (as an influencer) and harm to the individual's reputation and community standing. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and violation of rights.
Babydoll Archi aka Archita Phukan TRAPPED: Assam influencer's ex-boyfriend Pratim Bora took revenge by using..., sold her viral AI photos and videos for...

2025-07-15
BollywoodLife
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake obscene content of Archita Phukan without her consent, which constitutes a violation of her rights and causes harm to her community and personal dignity. The use of AI in this harmful manner directly led to reputational and emotional harm, fitting the definition of an AI Incident involving violations of human rights and harm to communities.
Babydoll Archi wasn't real: How a viral AI Instagram star was built on one real woman's photo for fame, revenge and profit

2025-07-15
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI tools (OpenAI, Midjourney) to generate and morph images from a real woman's photo to create a fake Instagram persona that was used for harassment, defamation, and profit. The harm includes violation of the woman's privacy and rights, cyber defamation, and psychological harm, which are direct harms caused by the AI-generated content. The AI system's role is pivotal in fabricating the false identity and spreading sexually suggestive content, leading to significant harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Assam engineer creates AI-generated porn of former classmate, earns Rs 10 lakh; arrested

2025-07-15
MoneyControl
Why's our monitor labelling this an incident or hazard?
The incident clearly involves AI systems used to create harmful, non-consensual pornographic content, which is a violation of the victim's rights and causes significant personal harm. The AI-generated content was monetized, indicating active use and harm rather than a potential or hypothetical risk. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating and distributing the explicit material.
What Babydoll Archi's Viral Fame Says About India's Porn Problem

2025-07-15
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated content (deepfakes) created from a real person's image to fabricate a false persona and produce explicit content without consent. This misuse of AI has directly led to harm to the individual's dignity, privacy, and mental health, constituting violations of rights and causing harm to the community by spreading non-consensual pornographic material. The involvement of AI in generating and spreading this content, the resulting harassment, and the monetization of such content clearly meet the criteria for an AI Incident under the definitions provided.
Babydoll Archi's secret unveiled: She's neither a content influencer nor living in US. Who is she?

2025-07-16
Economic Times
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI tools to generate a fake social media persona and explicit content based on one real person's image, without her consent. This led to direct harm to the victim's reputation and mental health, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The AI system's development and use were central to the harm caused, and the malicious intent and monetization further underline the direct link to harm.
Viral Babydoll Archi: How a jilted techie created an Insta sensation with 1 million+ followers through one photo of his ex-girlfriend

2025-07-16
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools (Midjourney AI, Desire AI, OpenArt AI) to generate fake pornographic content by superimposing the victim's face onto synthetic bodies. This misuse of AI caused direct harm to the victim's rights and mental health, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The engineer's arrest and ongoing investigation further confirm the realized harm stemming from AI misuse.
The curious case of Babydoll Archi: The AI illusion that trapped millions, Achita phukan Video viral

2025-07-16
Business Standard
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI platforms (Midjourney AI, Desire AI, OpenArt AI) to fabricate fake pornographic visuals, which were then disseminated widely, causing harm to the individual depicted and leading to legal action. This constitutes a violation of human rights (privacy, dignity) and cyber defamation, fulfilling the criteria for an AI Incident. The AI system's development and use directly led to realized harm, not just potential harm, so it is classified as an AI Incident rather than a hazard or complementary information.
How Archita Phukan Viral Video Link Fueled an AI Porn Hoax and Shamed a Real Woman

2025-07-16
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., Midjourney and OpenAI tools) to generate fake pornographic content and deepfake videos impersonating a real person, which directly caused harm to the victim's reputation, privacy, and dignity. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to a person. The malicious creation and dissemination of AI-generated sexualized content without consent is a clear breach of fundamental rights and causes significant personal harm. Therefore, this event is classified as an AI Incident.
Archita Phukan Changes Her Instagram Name To Amira Ishtara Amid Obscene Video Leak Controversy

2025-07-16
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated images and videos that were created and circulated to malign Archita Phukan's reputation, constituting a violation of her rights and causing harm to her as an individual. The AI system's misuse in generating false explicit content directly led to harm, fulfilling the criteria for an AI Incident. The involvement of law enforcement and arrest further confirms the realized harm and direct link to AI misuse.
'Archita Phukan Viral Video Link' Trends After Assam Girl's Ex-Boyfriend Pratim Bora Arrested for Creating Fake Profile With AI-Generated 'Babydoll Archi' Images

2025-07-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and distribution of AI-generated deepfake content that caused direct harm to Archita Phukan through defamation, harassment, and invasion of privacy. The AI system was used maliciously to fabricate and spread false explicit material, which led to legal action and emotional trauma. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the individual and community. The event is not merely a potential risk or complementary information but a realized harm caused by AI misuse.
How Ex-Boyfriend Made ₹10 Lakh via AI Deepfake in Assam

2025-07-17
TechnoSports Media Group
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-powered image generation platforms to create deepfake pornographic content without consent, which was then distributed and monetized by the perpetrator. This caused direct harm to the victim's psychological health and violated her rights, including privacy and protection from defamation. The AI system's use was central to the harm caused, fulfilling the criteria for an AI Incident under the framework. The case also highlights the misuse of AI technology for cybercrime and digital harassment, with realized harm rather than just potential risk.