Pakistani Influencer Alina Amir Targeted by AI Deepfake Video


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pakistani influencer Alina Amir was targeted by a viral AI-generated deepfake video falsely attributed to her, resulting in reputational harm and harassment. Amir publicly condemned the misuse of AI technology, called for government intervention, and offered a reward for identifying the perpetrators behind the fabricated content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated deepfake content falsely attributed to a person, which is a clear example of AI misuse causing harm to an individual's reputation and emotional well-being. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities through misinformation and harassment). The event is not merely a warning or potential risk but describes actual harm occurring and responses to it, thus qualifying as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Respect of human rights, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Reputational, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


TikToker Alina Amir breaks silence on AI-generated deepfake video

2026-01-26
24 News HD

Alina Amir Exposes Viral 'Leaked Video Link' as AI Deepfake

2026-01-26
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create manipulated videos that have caused reputational harm and harassment to Alina Amir and other individuals. The AI system's misuse has directly led to harm to the individuals targeted, including psychological and social harm, which fits the definition of an AI Incident. The event involves the use and malicious misuse of AI systems (deepfake generation) causing realized harm, not just potential harm or general information, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI deepfake technology is central to the incident.

TikToker Alina Amir responds to video circulating online

2026-01-26
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The deepfake video is an AI system's output used maliciously to harm a person's reputation and cause harassment, which is a violation of rights and harm to the individual and community. The event describes realized harm from the AI-generated content, not just a potential risk. Therefore, it qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake video.

Watch: Alina Amir's Viral Video Real Or Deepfake? Pakistani TikToker Breaks Silence, Calls It 'Harassment And Digital Violence'

2026-01-26
NewsX
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake video, which is a clear example of AI-generated content causing harm through misinformation and harassment. The harm is realized, as the victim calls it harassment and digital violence with psychological and social consequences. The article also details the malicious use of AI deepfakes in cybercrime operations, confirming the AI system's role in causing harm. Hence, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

Viral Leaked Videos 2026: Alina Amir, Fatima Jatoi, Payal Gaming Hit Back; Arohi Mim & Marry Umair Silent

2026-01-26
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology used to create fake videos that have been disseminated widely, causing reputational harm and harassment to individuals. The harms are realized, not hypothetical, as victims have suffered from viral misinformation and have taken legal action. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article also discusses responses to these harms, but the primary focus is on the incidents themselves.

Alina Amir takes stand against AI abuse

2026-01-26
Daily Times
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated fake video that has been falsely linked to Alina Amir, causing reputational damage and emotional harm. The AI system's use in creating misleading content that harms a person's reputation and causes emotional trauma fits the definition of an AI Incident, as it involves harm to a person and communities through misinformation and violation of rights. The harm is realized and ongoing, not just a potential risk, thus classifying this as an AI Incident rather than a hazard or complementary information.

Alina Amir New Viral Video: Pakistani TikToker urges Maryam Nawaz to act against deepfake after MMS row

2026-01-27
Zee News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of deepfake video generation, which is explicitly mentioned. The deepfake video has been circulated, causing emotional distress and reputational harm to the individual, fulfilling the criteria for harm to persons and communities. The creation and dissemination of such AI-generated content is a direct cause of harm, making this an AI Incident. The article does not merely warn of potential harm but reports on actual harm and ongoing impacts, thus it is not an AI Hazard or Complementary Information.

Alina Aamir Denies Viral Video, Calls It AI Deepfake | TikToker

2026-01-26
Khyber News - Official Website
Why's our monitor labelling this an incident or hazard?
The presence of AI is explicit in the creation of deepfake videos, which are AI-generated synthetic media. The harm is realized as the individual's reputation and image are damaged, constituting harm to communities and violation of rights. The event describes actual harm caused by the AI system's misuse, qualifying it as an AI Incident. The focus on legal action and governance responses is complementary but secondary to the primary harm caused by the AI deepfakes.

TikTok star Alina Amir breaks silence on 'leaked video'

2026-01-26
Daily Pakistan Global
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake videos have been used maliciously to harm Alina Amir's reputation, which is a direct harm caused by the AI system's misuse. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and communities through digital violence and misinformation. The involvement of law enforcement and calls for stronger laws further indicate the seriousness and realized nature of the harm.

Who leaked Alina Amir's viral video?

2026-01-26
Daily Pakistan Global
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system's use (deepfake technology) to create a fake video that has been circulated, causing harm to Alina Amir's reputation and personal life. This constitutes a violation of rights and harassment, which are harms under the AI Incident definition. The harm is realized, not just potential, as the video is viral and causing distress. Therefore, this is classified as an AI Incident.

TikToker Alina Amir reacts to deepfake video, offers cash reward

2026-01-26
Head Topics
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deepfake videos, which are AI-generated synthetic media. The deepfake video has been circulated, causing reputational damage and harassment to Alina Amir and potentially other women. This constitutes harm to individuals and communities, fitting the definition of an AI Incident. The AI system's misuse in generating and spreading false content has directly led to these harms. The article also mentions calls for legal and punitive measures, confirming the recognition of harm caused. Hence, the classification as AI Incident is appropriate.

Pakistani TikToker Alina claims her leaked MMS video is AI-generated, urges CM Maryam Nawaz to act against deepfake abuse

2026-01-27
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The deepfake has been widely circulated, causing reputational and emotional harm to the individual targeted, which fits the definition of harm to a person or group (harm to health and well-being). Additionally, the spread of such content facilitates scams leading to financial harm, further supporting the classification as an AI Incident. The involvement of authorities and calls for legal action underscore the seriousness and realized nature of the harm. Hence, this is not merely a potential hazard or complementary information but a clear AI Incident.

Pakistan's social media star Alina Amir slams fake leaked obscene videos circulating online: 'This is harassment'

2026-01-27
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the obscene videos circulating online are 'deepfake' created with artificial intelligence, indicating the involvement of an AI system in generating fake content. The harm here is reputational damage and harassment of the individual, which falls under harm to persons or communities. Since the AI-generated content has already been circulated and is causing harm, this qualifies as an AI Incident.

7:11, 4:47, 3:24, or 19 Minutes 34 Seconds Viral Video Traps: Why Governments Must Act Now

2026-01-27
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate deepfake videos for defamation, which is a violation of rights and causes harm to individuals. The AI-generated content is central to the scam's operation, leading to direct harm to victims' reputations and privacy. Furthermore, the scam causes financial harm and malware infections to users who interact with the malicious links. The article details realized harms, not just potential risks, and the AI system's role is pivotal in enabling these harms. Hence, the classification as an AI Incident is appropriate.

Alina Amir Viral Video Exposes the Dark Side of AI in 2026

2026-01-27
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake generation using GANs and real-time synthesis) that have directly led to harm: reputational damage, gendered harm, and exploitation of victims through viral fake videos. The use of AI deepfakes to create and spread false videos constitutes a violation of rights and harm to communities. The article describes realized harm, not just potential harm, and thus qualifies as an AI Incident. The involvement of AI in the creation and dissemination of these harmful deepfakes is central to the event.

Alina Amir New Viral Video Sparks Debate Over AI Misuse and Online Harassment

2026-01-27
iNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake technology to create and circulate a fake video of Alina Amir, which has caused harm through harassment and false information dissemination. The AI system's use here is malicious and has directly led to harm to an individual, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The event is not merely a potential risk or a general update but a realized harm caused by AI misuse.

Who is Alina Amir? After Payal Gaming, Arohi MMS scandal, influencer reacts to leaked MMS viral video, she is from...

2026-01-27
News24
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the leaked MMS video is AI-generated, indicating the involvement of an AI system in creating a deepfake. The harm is realized as the influencer suffers reputational damage and harassment, which falls under harm to persons and violation of rights. The event is not merely a potential risk but an actual incident of harm caused by AI misuse. Hence, it meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information.

Pakistani Influencer Alina Amir BREAKS Silence After Alleged Video Leak; Who Is She?

2026-01-28
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the video was a deepfake created by AI and was used maliciously to harass Alina Amir. This misuse of AI-generated content has directly caused harm to her reputation and emotional well-being, which qualifies as harm to a person and community. The involvement of AI in creating the fake video and the resulting harassment meets the definition of an AI Incident, as the AI system's use has directly led to harm.

AI-Driven Character Assassination Campaign: TikToker Alina Amir's Defiant Response to Fake Video

2026-01-26
Neo TV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a deepfake video that harms the reputation and dignity of the individual, which is a violation of rights and a form of harassment. The harm has already occurred as the video is viral and causing damage. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating the fake content.

The Leaked Video Case: Alina Amir Breaks Her Silence

2026-01-28
jang.com.pk
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the fake video was created using AI (artificial intelligence) and that it has been widely circulated, causing harassment and reputational harm to the individual. This is a direct harm to the person's rights and dignity, fitting the definition of an AI Incident under violations of human rights and harm to communities. The involvement of AI in generating the fake content is central to the harm described. The event is not merely a potential risk but a realized harm, and the call for enforcement action further confirms the seriousness of the incident.

Famous TikToker Alina Amir's reaction to her leaked video goes viral

2026-01-26
Express Urdu
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is reasonably inferred because the video is described as a deepfake, which is a known AI-generated manipulated video technology. The harm is realized as the TikToker experiences reputational damage and harassment, which falls under violations of rights and harm to individuals. The event involves the use and malicious misuse of AI systems to create and spread harmful content. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Famous TikToker Alina Amir's Reaction to Her Indecent Leaked Video Goes Viral

2026-01-26
Daily Pakistan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is an AI-generated manipulated video, causing reputational harm and harassment to the individual depicted. The harm is realized and ongoing, as the video is viral and causing distress. The AI system's misuse directly leads to violations of rights and harm to the community of affected individuals. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Who Created the Inappropriate 5-Minute-24-Second Video Attributed to TikToker Alina Amir?

2026-01-26
MM News
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated deepfake content falsely attributed to a person, causing reputational harm and potential violation of rights. The AI system's use in creating and spreading this harmful content directly leads to harm to the individual and communities (harm to reputation and rights). Therefore, this qualifies as an AI Incident under the definitions provided.

After 7-minute 11 second Viral Video, Pakistani TikToker Alina Amir's private clip leak takes over internet; influencer BREAKS silence after...

2026-01-29
BollywoodLife
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos that impersonate a person in intimate scenarios without consent, leading to harm to the individual's reputation, emotional well-being, and privacy. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in creating the false content causing the harm.

Alina Amir Viral MMS: New Twist In Pakistani Leaked Video Saga, Influencer Announces...

2026-01-29
NewsX
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video, which is an AI-generated manipulated video, being circulated to harm the reputation of Alina Amir. This constitutes a direct harm to the individual’s reputation and emotional well-being, fitting the definition of an AI Incident under violations of rights and harm to communities. The AI system's use in creating the deepfake is central to the incident. Therefore, this event qualifies as an AI Incident.

From Alina Amir 4 Minutes 47 Second Pakistan Viral MMS Clip To Arohi Mim 3 Minute 24 Seconds Link: How To Spot AI And Deepfake Content Online

2026-01-30
NewsX
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create manipulated videos that deceive users, leading to phishing and malware infections, which are direct harms to individuals' financial security and reputations. The involvement of AI in generating these synthetic videos is clear, and the harms are occurring, not just potential. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

5-minute viral MMS video: Big twist in Alina Amir's leaked private clip, star influencer...

2026-01-30
News24
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and dissemination of an AI-generated deepfake video, which is a direct use of AI technology leading to harm—specifically reputational damage and digital harassment. This fits the definition of an AI Incident as the AI system's use has directly led to harm to the individual (harm to person/community reputation). The event is not merely a warning or potential risk but describes an actual occurrence of harm caused by AI misuse.

5-Minute Viral Video: Who is Alina Amir? Pakistani influencer dragged into alleged private clip scandal

2026-01-30
BollywoodLife
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the viral video is an AI-generated deepfake, which is a product of AI system use. The deepfake has been widely circulated, causing reputational damage and harassment to Alina Amir, which is a clear harm to the individual and community. The AI system's use in creating and spreading the manipulated video directly led to this harm. Hence, this event meets the criteria for an AI Incident due to realized harm caused by AI-generated content.

Is Alina Amir on Snapchat? Know Her Verified Account Names

2026-01-30
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a deepfake video, which is an AI-generated manipulated video, used to harass Alina Amir, constituting harm to her personal rights and reputation. The subsequent scam involving fake accounts distributing malware and phishing links causes harm to users' property (personal data and devices). The AI system's development and use have directly led to these harms. Hence, this qualifies as an AI Incident under the definitions provided, as it involves violations of rights and harm to communities and individuals caused by AI-generated content and its malicious exploitation.

Alina Amir and Marry Umair Viral Video Leaks: The January 2026 Roundup of New Timestamp Scams

2026-01-30
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfakes and bot networks manipulating search engine results to spread malicious content, which has directly led to harm such as defamation, digital harassment, malware infections, and financial scams. The AI system's role is pivotal in generating realistic fake videos and orchestrating the attack, fulfilling the criteria for an AI Incident. The harms are realized and ongoing, not merely potential, and involve violations of rights and harm to communities through misinformation and cybercrime.

From Umairi to Alina Amir 'leaks': Cybersecurity in Pakistan under question

2026-01-28
MM News
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the creation and dissemination of deepfake videos, which are AI-generated synthetic media. The harm is direct, as the deepfake content falsely damages the reputation of Alina Amir and constitutes digital harassment, a violation of rights and harm to communities. The article also mentions the broader cybersecurity threats involving AI misuse, reinforcing the incident classification. Although legal and institutional responses are discussed, the main focus is on the realized harm caused by AI-generated deepfakes, qualifying this as an AI Incident rather than a hazard or complementary information.

Pakistani Influencer Alina Amir Viral MMS Video: Star Says Video Is AI-Generated Fake

2026-01-29
Stack Umbrella
Why's our monitor labelling this an incident or hazard?
The event describes a specific harm caused by the malicious use of AI technology (deepfake) to create and spread a fake video that damages the reputation of a person. The AI system's misuse has directly led to harm (reputational damage, harassment) and the victim herself confirms the video is AI-generated. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a person and communities (harassment, misinformation).

5 minute viral MMS video: Who is Alina Amir? Social media influencer whose alleged private clip got leaked, she is from...

2026-01-30
News24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video that falsely depicts the influencer, which is a direct misuse of AI technology causing reputational harm and cyber harassment. The harm is realized as the video went viral and caused outrage, constituting harm to the individual and potentially to communities. The influencer's characterization of the event as cybercrime and abuse further supports the classification as an AI Incident. There is no indication that the harm is only potential or that the article is primarily about responses or broader context, so it is not an AI Hazard or Complementary Information.

Alina Amir viral MMS video case explained: Who is Pakistani influencer in the spotlight?

2026-01-31
India.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a clear example of AI-generated content causing harm. The harm includes violation of personal rights through digital abuse and reputational damage, as well as enabling scams that harm users financially. Since the AI-generated deepfake video has already been disseminated and caused harm, this qualifies as an AI Incident under the framework, specifically under violations of rights and harm to communities through digital abuse and fraud.

'I will not stay silent': Alina Amir fights back against Deepfake Video Leak Scandal - Pakistan Observer

2026-02-10
Pakistan Observer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake video, which is a clear example of AI-generated manipulated content causing harm to an individual by damaging reputation and enabling cyber harassment. The harm is realized and ongoing, as evidenced by the viral spread of the video and the legal actions taken. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the individual and community. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by AI misuse.

Alina Aamir Files FIR Over Viral Deepfake Video

2026-02-10
Khyber News - Official Website
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a clear example of AI-generated content causing harm through misinformation and harassment. The harm includes violation of personal rights and emotional distress, fitting the definition of harm to communities and violations of rights. The fact that legal action is being taken against those responsible for creating and sharing the AI-generated deepfake confirms the direct link between the AI system's use and the harm caused. Hence, this is classified as an AI Incident.

Alina Amir Video Leak: TikToker lodges FIR after falling victim to Deepfake MMS

2026-02-10
Daily Pakistan Global
Why's our monitor labelling this an incident or hazard?
The event clearly describes the use of an AI system to create a deepfake video that caused reputational harm to the victim, which is a violation of her rights and a form of harm to the community. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The legal action and public response are part of the aftermath but do not change the classification of the event as an AI Incident.