Italian Prime Minister Sues Over Deepfake Pornographic Videos

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Italian Prime Minister Giorgia Meloni is suing two men for creating and distributing AI-generated deepfake pornographic videos featuring her likeness. The videos, viewed millions of times online, caused reputational harm and led Meloni to seek €100,000 in damages, highlighting the misuse of AI for defamation and abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI (deepfake technology) to create and distribute a pornographic video without consent, causing harm to the individual depicted (PM Meloni). This constitutes a violation of rights and harassment, which are harms under the AI Incident definition. The involvement of AI is clear, the harm is realized, and legal action is underway, confirming this as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government

Harm types
Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Italian PM Meloni Seeks Rs 90.8 Lakh Over Deepfake Porn Video; Kangana Ranaut Says 'No Woman Can Escape...'

2024-03-22
Mashable India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create and distribute a pornographic video without consent, causing harm to the individual depicted (PM Meloni). This constitutes a violation of rights and harassment, which are harms under the AI Incident definition. The involvement of AI is clear, the harm is realized, and legal action is underway, confirming this as an AI Incident rather than a hazard or complementary information.
Kangana Ranaut reacts to Italian PM Giorgia Meloni Deepfake porn video controversy

2024-03-22
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake pornographic videos, which are generated using AI-based deep learning techniques to manipulate video content. The unauthorized creation and circulation of these videos have led to legal action and represent a clear violation of rights and harassment, fulfilling the criteria for harm under the AI Incident definition. The involvement of AI in generating the harmful content and the resulting legal and personal harm to the individual make this an AI Incident rather than a hazard or complementary information.
Kangana Ranaut REACTS to Giorgia Meloni's deepfake pornography lawsuit: "No woman is safe" : Bollywood News - Bollywood Hungama

2024-03-22
Bollywood Hungama
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake technology, an AI system that generates manipulated videos by swapping faces. The creation and distribution of such deepfake pornographic videos have directly harmed Giorgia Meloni by defaming and harassing her, which constitutes a violation of rights and harm to an individual. The lawsuit and investigation confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct involvement of AI-generated content causing realized harm.
'No Woman Can Escape Sexism, Harassment': Kangana Ranaut REACTS To Italian PM Giorgia Meloni's Deepfake Porn Videos

2024-03-22
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The article describes the existence and impact of deepfake porn videos of Italian PM Giorgia Meloni, which are AI-generated manipulated content causing harassment and violation of rights. The involvement of AI systems in generating deepfakes is explicit and the harm (harassment, violation of rights) is realized. Therefore, this event meets the criteria for an AI Incident.
Kangana Ranaut Says 'No Woman Can Escape Sexism' Over Italian PM Giorgia Meloni's Deepfake Porn Lawsuit | 🎥 LatestLY

2024-03-22
LatestLY
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated videos. The creation and viral spread of non-consensual deepfake pornographic videos constitute a violation of rights and cause harm to the individual targeted. The lawsuit and public discussion confirm that harm has occurred due to the AI system's misuse. Therefore, this event qualifies as an AI Incident due to realized harm stemming from the use of an AI system (deepfake generation) causing violations of rights and personal harm.
Giorgia Meloni Deepfake Porn Video CONTROVERSY! Kangana Ranaut Shows Support To The Italian PM, Says 'No Woman Can Escape Sexism, Bullying And Harassment' | SpotboyE

2024-03-22
spotboye.com
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fake visual content. The creation and dissemination of a deepfake porn video targeting a public figure constitutes a violation of rights and harassment, which are harms covered under the AI Incident definition. The article describes actual harm occurring due to the AI system's use, not just potential harm, thus qualifying as an AI Incident.
Italy's Giorgia Meloni called to testify in deepfake porn case

2024-03-21
POLITICO
Why's our monitor labelling this an incident or hazard?
The creation and distribution of deepfake pornographic videos using Giorgia Meloni's likeness directly involves an AI system (deepfake technology) and has led to harm in the form of violation of personal rights and reputational damage. The event describes an ongoing legal case addressing this harm, indicating that the AI system's use has directly led to a violation of rights. Therefore, this qualifies as an AI Incident under the framework.
Italy PM Giorgia Meloni Seeks Over $100,000 In Damages Over Deepfake Pornographic Videos

2024-03-21
India News, Breaking News, Entertainment News | India.com
Why's our monitor labelling this an incident or hazard?
The creation and circulation of deepfake videos using AI technology directly led to harm in the form of defamation and violation of personal rights of Italy's Prime Minister Giorgia Meloni. The AI system's use here is central to the harm caused, fulfilling the criteria for an AI Incident as it involves violations of rights and harm to the individual. The event describes realized harm, not just potential harm, and involves the use of AI systems in a harmful way.
Italian PM Meloni seeks compensation over deepfake pornography videos - Times of India

2024-03-21
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated manipulated content. The harm caused is realized reputational damage and defamation against a public figure, which is a violation of rights under applicable law. The legal case and damages sought confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating and distributing defamatory deepfake content.
Italian Prime Minister Giorgia Meloni seeking damages of $108,200 in deepfake porn trial | CNN

2024-03-22
CNN
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-generated deepfake technology to create non-consensual pornographic videos, which constitutes a violation of human rights and defamation. The harm has already occurred, with millions having viewed the videos and ongoing circulation. The AI system's use directly led to this harm, making it an AI Incident rather than a hazard or complementary information. The legal actions and damages sought further confirm the recognition of harm caused by the AI system's misuse.
Italian PM Giorgia Meloni demands Rs 91 lakh damages for deep fake porn videos featuring her

2024-03-21
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are synthetic media generated by AI. The creation and dissemination of these videos have directly caused harm to the individual depicted, constituting defamation and violation of rights. The harm is realized, not just potential, as the videos have been widely viewed and have led to legal charges. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.
Kangana Ranaut Speaks Out On Italian PM Giorgia Meloni's Deepfake Porn Controversy

2024-03-22
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of a deepfake video using AI technology, which has directly caused harm to the victim through defamation and harassment. The involvement of AI in generating the deepfake is explicit, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, as it involves violations of rights and harm to an individual caused by AI misuse.
Italy's PM vs. Deepfakes: Giorgia Meloni Seeks Rs 90 Lakh In Damages, Plans To Donate It

2024-03-21
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated deepfake technology to create manipulated videos that have caused harm to an individual by violating her rights and defaming her. The harm is direct and realized, as the videos have been widely viewed and have led to legal proceedings. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and reputational harm.
Italian PM Giorgia Meloni seeks over ₹90 lakh in damages over deepfake porn videos | TOI Original - Times of India Videos

2024-03-21
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that manipulates images to create realistic but fake videos. The creation and distribution of non-consensual deepfake pornographic videos constitute a violation of personal rights and privacy, which is a breach of applicable law protecting fundamental rights. Since the videos were uploaded and caused harm, this is a realized harm directly linked to the AI system's misuse. Therefore, this event qualifies as an AI Incident.
Most victims of deepfake porn never get justice, but Italy's prime minister is out for vengeance

2024-03-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are nonconsensual and pornographic, causing harm to individuals' rights and reputations. The harm is realized, as the videos have been widely viewed and have led to legal actions. The involvement of AI in generating the harmful content and the resulting violation of rights and harm to communities fits the definition of an AI Incident. The article also discusses ongoing legal and societal responses, but the primary focus is on the harm caused by the AI-generated deepfakes.
Giorgia Meloni sues man and his father over deepfake porn videos

2024-03-20
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake generation) to create manipulated pornographic videos without consent, which have been widely disseminated online. This has directly led to reputational harm and emotional distress to Giorgia Meloni, constituting a violation of rights and defamation. The AI system's use in creating and distributing this content is central to the harm caused. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm to a person and communities through defamation and nonconsensual explicit content.
Italian PM Giorgia Meloni Becomes Latest Victim Of Deepfake Videos; Demands $100,000 In Damages After "Millions" Of Views!

2024-03-21
Koimoi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and distribution of deepfake videos using artificial intelligence, which directly led to reputational harm and defamation of the Italian Prime Minister. The videos were viewed millions of times, indicating significant harm to the individual and potentially to the community's trust in media authenticity. The involvement of AI in generating manipulated content that caused harm fits the definition of an AI Incident, as the harm is realized and directly linked to the AI system's use.
Italian Prime Minister Seeks Over $100,000 After Deepfake Porn Videos Were Viewed 'Millions Of Times'

2024-03-20
Forbes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are realistic but fake videos generated by AI techniques. The harm is direct and materialized, as the videos have been viewed millions of times and have caused defamation and reputational harm to the Prime Minister. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual. The article also references the broader context of deepfake pornography and its legal and social implications, reinforcing the classification as an incident rather than a hazard or complementary information.
Italian Prime Minister Giorgia Meloni seeking damages of $108,200 in deepfake porn trial

2024-03-22
Aol
Why's our monitor labelling this an incident or hazard?
The creation and distribution of deepfake pornographic videos of Italian Prime Minister Giorgia Meloni directly involves an AI system (deepfake technology) used maliciously, resulting in harm to her reputation and privacy. This constitutes a violation of rights and defamation, which fits the definition of an AI Incident as the AI system's use has directly led to harm. The legal case and damages sought further confirm the harm has materialized.
Italy PM Giorgia Meloni Seeks Over $100,000 In Damages Over Deepfake Videos

2024-03-21
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create manipulated videos that have caused harm to an individual by defaming her and damaging her reputation. The videos were distributed widely, causing significant harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to a person. The legal actions and damages sought further confirm the harm has materialized.
Italian Prime Minister Giorgia Meloni Seeks Damages over AI-Generated Deepfake Porn

2024-03-22
Breitbart
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is an AI system application. The creation and distribution of these deepfake pornographic videos have directly harmed the individual depicted, violating her rights and causing reputational damage. The harm is realized and ongoing, as the videos were viewed millions of times. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating manipulated content infringing on human rights and legal protections.
Italian PM Giorgia Meloni seeks Rs 90 lakh in damages over deepfake porn videos

2024-03-21
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake videos, which are AI-generated manipulated content replacing a person's image convincingly. The videos caused reputational harm and defamation to the Italian Prime Minister, fulfilling the criteria of harm to rights under the AI Incident definition. The involvement of AI in creating the deepfake videos is clear, and the harm has already occurred, making this an AI Incident rather than a hazard or complementary information.
Giorgia Meloni deepfake videos: Italy PM seeks over $100,000 in damages

2024-03-21
Hindustan Times
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fabricated content. The malicious use of such AI-generated content to defame a public figure constitutes a violation of rights and causes harm to the individual and potentially to communities by spreading misinformation. Since the videos have been widely viewed and have led to legal proceedings for damages, this qualifies as an AI Incident due to realized harm stemming from the use of an AI system.
Italy's Meloni Seeks Over $100,000 In Damages Over Deepfake Videos Featuring Her - News18

2024-03-21
News18
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using artificial intelligence systems capable of manipulating video content to depict individuals in false and harmful scenarios. The videos have been widely disseminated, causing harm to the individual's reputation and personal dignity, which constitutes a violation of rights and defamation under applicable law. The legal pursuit and charges of defamation further confirm the recognition of harm caused. Since the harm is realized and directly linked to the malicious use of an AI system, this event is classified as an AI Incident.
Italy prime minister Giorgia Meloni seeks €100,000 damages over deepfake porn videos

2024-03-20
The Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using AI systems that manipulate images and videos to produce realistic but fake content. The harm caused includes violation of personal rights, defamation, and emotional distress, fitting the definition of an AI Incident under violations of human rights and harm to individuals. The sharing of these videos online and their widespread viewing demonstrate direct harm caused by the AI system's misuse. Therefore, this event qualifies as an AI Incident.
5 Things: Italian PM Meloni's Deepfake Porn Video Case

2024-03-21
Zee News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake videos, which are AI-generated manipulated media. The harm caused is defamation and reputational damage to a public figure, which falls under violations of human rights and fundamental rights. The AI system's use (deepfake generation) directly led to this harm. The legal pursuit and the context of the videos being widely viewed confirm the harm has materialized. Hence, this is classified as an AI Incident.
Italy PM Giorgia Meloni Pursues Legal Action Over Deepfake Videos, Seeks 100,000 Euros In Damages

2024-03-21
Oneindia
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake videos, which are synthetic media created by AI algorithms. The circulation of these videos has caused harm to Giorgia Meloni's reputation and personal rights, constituting a violation of rights under applicable law. Since the harm has already occurred due to the AI-generated content, this qualifies as an AI Incident. The legal action and societal response are complementary but the primary event is the harm caused by the AI system's misuse.
AI misuse: Italian PM Giorgia Meloni seeks $100k in damages over deepfake porn videos

2024-03-21
mint
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are synthetic media generated by AI. The creation and distribution of these videos have directly led to harm in the form of defamation and reputational damage to a public figure, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The legal action and damages sought further confirm the harm has materialized.
Italian PM Giorgia Meloni seeks compensation over deepfake videos: Report

2024-03-20
India Today
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate images and videos to create realistic but fake content. In this case, the AI-generated deepfake videos have been used maliciously to defame a public figure, causing harm to her reputation and personal rights. The involvement of AI in producing these videos and the resulting legal charges for defamation indicate direct harm caused by the AI system's use. Therefore, this event qualifies as an AI Incident due to the realized harm from the AI-generated content violating rights and causing reputational damage.
Italian PM Giorgia Meloni sues father and son over deepfake porn

2024-03-21
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using AI systems that manipulate images and videos to produce realistic but fake content. The videos have been posted online, causing defamation and harm to the Prime Minister's reputation, which is a violation of rights under applicable law. The involvement of AI in creating the deepfakes directly led to this harm. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (defamation and violation of rights).
Italian PM Meloni Seeks $93,365 in Damages Over Deepfake Porn Videos- Republic World

2024-03-21
Republic World
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created by AI systems that digitally superimpose faces onto other bodies. The harm caused is a violation of personal rights and defamation, which falls under violations of human rights or breaches of applicable law protecting fundamental rights. Since the harm has already occurred and legal proceedings are underway, this qualifies as an AI Incident rather than a hazard or complementary information.
'Father-Son Duo' Under Scanner As Italy PM Seeks Over Rs 90 Lakh In Damages Over Deepfake Video

2024-03-21
english
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated deepfake technology to create and distribute pornographic videos of a public figure, leading to legal action for defamation and damages. The AI system's use directly led to harm in the form of reputational damage and violation of rights. The harm is realized, not just potential, and the AI system's role is pivotal in causing this harm. Hence, it meets the criteria for an AI Incident.
Italian PM seeks EUR 100,000 in damages for "deepfake" adult video

2024-03-22
en.royanews.tv
Why's our monitor labelling this an incident or hazard?
The creation and distribution of a deepfake pornographic video using AI technology directly led to harm to the individual's rights and dignity, fitting the definition of an AI Incident under violations of human rights or breach of applicable law. The AI system's use in generating manipulated content that caused reputational and personal harm qualifies this as an AI Incident rather than a hazard or complementary information.
Italy PM Giorgia Meloni files 100,000 euros defamation lawsuit over deepfake porn videos - The case so far

2024-03-21
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created by AI systems capable of generating realistic manipulated content. The circulation of these videos has caused harm to the individual's reputation and dignity, constituting a violation of rights and defamation under applicable law. The harm is direct and realized, not merely potential. Hence, this is an AI Incident as per the definitions provided.
Italy PM Giorgia Meloni seeks 100,000 euros over deepfake pornographic videos

2024-03-21
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology using deep learning) to create synthetic media that caused harm by defaming a public figure. The harm is realized and ongoing, as the videos were widely circulated and have led to legal action. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to the individual and community trust. The legal pursuit and societal implications further confirm the incident nature rather than a mere hazard or complementary information.
Italian PM Giorgia Meloni seeks 100,000 euros in compensation over deepfake videos

2024-03-22
India TV News
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create deepfake videos, which have directly led to harm in the form of defamation and violation of personal rights of the Italian Prime Minister. The videos have been viewed millions of times, indicating significant dissemination and impact. The harm is realized and ongoing, meeting the criteria for an AI Incident. The article also discusses broader societal and governmental responses to deepfake misuse, but the primary focus is on the incident involving the deepfake videos and their consequences, not just complementary information or potential hazards.
Giorgia Meloni seeks 100,000 euros in damages over deepfake porn videos

2024-03-21
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create realistic but fake pornographic videos of a public figure, which have been widely disseminated and caused reputational and personal harm. The harm is realized and ongoing, as evidenced by the legal action and damages sought. This fits the definition of an AI Incident because the AI system's use directly led to a violation of rights (defamation and personal harm). The involvement of AI is explicit and central to the harm caused.
Italian Prime Minister to Testify in Court Over Deepfake Porn Video

2024-03-22
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake pornographic videos, which are nonconsensual and violate the rights of the individuals depicted. The harm is realized as the videos were widely viewed and remain partially online, causing reputational and personal harm. This fits the definition of an AI Incident as the AI system's use has directly led to violations of human rights and harm to individuals. The legal action and public testimony further confirm the materialized harm.
Italian PM Giorgia Meloni sues creators of deep fake video, seeks €100,000 as compensation

2024-03-22
OpIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used maliciously to create non-consensual pornographic videos featuring the Prime Minister's face, leading to defamation and reputational harm. This constitutes a violation of personal rights and can be classified as harm to the individual, fitting the definition of an AI Incident. The involvement of AI in generating the deepfake content directly led to the harm described. Therefore, this event qualifies as an AI Incident.
Italy's Prime Minister, Meloni Seeks €100,000 In Damages Over Deepfake Porn Videos

2024-03-21
Sahara Reporters
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of deepfake pornographic videos, which are AI-generated manipulated media. The videos caused harm to the individual depicted (defamation and reputational damage), which falls under violations of rights. The involvement of AI in generating the deepfakes is explicit, and the harm has already occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Giorgia Meloni Deepfake Porn Videos Uploaded on Internet, Italy PM Seeks Over USD 100,000 From Accused Father-Son Duo in Damages | 🌎 LatestLY

2024-03-21
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used maliciously to create harmful synthetic media. The harm is realized in the form of defamation and violation of personal rights, which falls under violations of human rights and breach of applicable law. Since the harm has already occurred and legal proceedings are underway, this qualifies as an AI Incident rather than a hazard or complementary information.
Giorgia Meloni sues father-son duo over pornographic deepfake

2024-03-21
WION
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a harmful pornographic video without consent, which has been widely disseminated, causing harm to the individual depicted. This constitutes a violation of human rights and abuse of AI technology. The harm is realized and ongoing, and legal proceedings are underway. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and reputational damage).
Italy's prime minister sues against deepfake porn

2024-03-24
indy100.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is an AI system's use to create manipulated videos. The harm is realized as the videos have been widely viewed, causing reputational and personal harm to the Prime Minister. The legal action and investigation further confirm the harm's seriousness. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and personal harm).
Italian PM Giorgia Meloni Seeks Over $100,000 from Two Accused Over Sexually Explicit Deepfake Videos

2024-03-21
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used maliciously to create sexually explicit videos of a public figure, causing harm through defamation and violation of rights. The harm has already occurred as the videos were widely disseminated and viewed, fulfilling the criteria for an AI Incident. The legal pursuit and investigation further confirm the recognition of harm caused by the AI system's misuse.
Italy PM Meloni Seeks Over Rs 90 lakh for Deepfake Damage

2024-03-21
Pragativadi: Leading Odia Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that have caused reputational damage and defamation, which are harms to individuals and communities. The Italian PM's legal case and the mention of viral deepfake videos of Indian celebrities demonstrate realized harm caused by AI misuse. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Most victims of deepfake porn never get justice, but Italy's prime minister is out for vengeance

2024-03-21
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deep learning-based deepfake video generation) that was used to create harmful, nonconsensual pornographic content, directly violating the victim's rights and causing harm. The harm is realized and ongoing, as the videos amassed millions of views and caused reputational damage. The legal pursuit and investigation confirm the incident's seriousness. Therefore, this qualifies as an AI Incident due to the direct harm to an individual caused by the use of an AI system.

Italian PM Giorgia Meloni Pursues Legal Action Against Graphic Deepfakes

2024-03-21
NewsX
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of deepfake videos involve AI systems that generate manipulated content. The harm caused includes defamation and violation of personal rights, which falls under violations of human rights or breach of legal protections. Since the deepfake videos have been viewed by millions and have caused reputational harm, this constitutes an AI Incident. The legal action and investigation confirm that the AI system's use has directly led to harm.

Italian PM seeks damages over deepfake porn videos - The Ghanaian Chronicle

2024-03-21
The Chronicle Online
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media by digitally manipulating images or videos. The creation and distribution of deepfake porn videos of the Italian Prime Minister constitute a violation of her personal rights and defamation, which falls under harm to human rights and breach of legal protections. The videos have been viewed millions of times, indicating realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI-generated content.

Italian PM wants €100,000 over deepfake porn

2024-03-21
industriesnews.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is an AI system, to create and distribute pornographic videos falsely depicting the Prime Minister. This use of AI has directly led to harm in the form of defamation and abuse, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the videos were viewed millions of times and have caused reputational and emotional damage. The legal actions and damages sought further confirm the recognition of harm caused by the AI system's misuse.

Italy's PM Meloni becomes victim of deepfake, demands compensation of one lakh dollars - India TV Hindi - hindustannewshub.com

2024-03-22
hindustannewshub.com
Why's our monitor labelling this an incident or hazard?
The creation and distribution of a deepfake video using AI technology constitutes an AI Incident because it directly caused harm to an individual by violating her rights and causing reputational damage. The involvement of AI in generating the deepfake is explicit, and the harm (defamation and violation of rights) has materialized, as evidenced by legal proceedings and compensation demands. Therefore, this event meets the criteria for an AI Incident.

Italy's Prime Minister Giorgia Meloni Seeks Damages Over Deepfake Videos

2024-03-21
SheThePeople
Why's our monitor labelling this an incident or hazard?
The article clearly describes the creation and distribution of deepfake videos using AI technology, which directly led to harm in the form of defamation and violation of privacy rights of a public figure. The harm is realized and ongoing, with legal actions underway. The AI system's use in fabricating deceptive content that damages reputation fits the definition of an AI Incident, as it involves violations of rights and harm to an individual caused by AI-generated manipulated media. The event is not merely a potential risk or a general discussion but a concrete case of harm resulting from AI misuse.

Italy's PM Meloni Seeks €100,000 in Damages Over Deepfake Videos

2024-03-21
Pratidin Time
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake AI technology to create manipulated videos of Italy's Prime Minister, which is a direct misuse of an AI system leading to reputational harm and potential broader harms such as misinformation and abuse of power. The damages claim and the lawyer's statements confirm that harm has materialized. The AI system's use here is malicious and has directly led to harm, fitting the definition of an AI Incident. The event is not merely a potential risk or a general discussion but involves realized harm caused by AI misuse.

Exclusive: Hundreds of British celebrities victims of deepfake porn

2024-03-21
Channel 4
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake pornography, which directly harms individuals by violating their privacy and causing emotional and reputational damage. The harm is realized and widespread, affecting hundreds of celebrities and private individuals, with billions of views on such content. The use of AI to create manipulated explicit videos without consent fits the definition of an AI Incident, as it leads to violations of human rights and harm to communities. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in producing the deepfakes. Thus, the classification as AI Incident is appropriate.

Meloni victim of deepfake porn - Takes legal action and seeks compensation | in.gr

2024-03-20
in.gr
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The creation and distribution of non-consensual deepfake pornographic videos of a public figure constitute a clear violation of rights and cause reputational and psychological harm. The article describes the actual occurrence of harm, legal investigation, and court proceedings, confirming that the AI system's use has directly led to harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Meloni victim of deepfake porn - Zougla

2024-03-20
zougla.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate deepfake videos, which is an AI system's use leading to harm through defamation and violation of personal rights. The harm has already occurred as the videos were posted and viewed millions of times, causing reputational and emotional damage. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of rights and harm to the individual.

Meloni seeks compensation over deepfake porn video featuring her face - iefimerida.gr

2024-03-20
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake software) to create manipulated pornographic videos featuring the Prime Minister's face, which constitutes a violation of rights and defamation. The videos were published and viewed by millions, indicating realized harm to the individual and potentially to communities by spreading harmful misinformation and abuse. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The legal proceedings and police investigation are responses to the incident, not the primary focus of the article, so this is not merely Complementary Information.

Meloni seeks €100,000 in compensation over deepfake porn video

2024-03-20
NewsIT
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deep fake video, which is a clear example of AI-generated manipulated content causing harm to an individual's reputation and dignity. The harm is direct and realized, as the video was widely viewed and has led to legal proceedings. The use of AI in generating the defamatory video meets the criteria for an AI Incident because it has directly led to violations of personal rights and reputational harm. The legal and societal responses further confirm the significance of the harm caused.

Meloni victim of deepfake porn video: Claims €100,000 in compensation

2024-03-20
Gazzetta.gr - Sports News Portal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated deep fake technology to create pornographic videos without consent, which is a direct violation of personal rights and constitutes harm to the individual. The widespread dissemination of these videos has caused reputational and emotional harm, fitting the definition of an AI Incident due to violation of rights and harm to the individual. The involvement of AI in generating the deep fake content and the resulting harm justifies classification as an AI Incident.

Meloni victim of deepfake porn: Takes legal action and seeks symbolic compensation

2024-03-20
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, as deepfake technology uses AI to digitally manipulate images and videos. The use of this AI-generated content has directly led to harm, including defamation and violation of personal rights, which are recognized harms under the framework (violation of rights). Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm to an individual.

Meloni victim of deepfake porn - Seeks €100,000 - Aftodioikisi.gr

2024-03-20
Aftodioikisi.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deep fake videos, which are digitally manipulated to falsely depict a person in pornographic content. This has caused direct harm to the victim's reputation and personal rights, fulfilling the criteria for an AI Incident under violations of human rights and legal protections. The legal actions and the request for compensation further confirm the harm has materialized due to the AI-generated content.

Meloni: Claims €100,000 in compensation over deepfake porn | Η ΚΑΘΗΜΕΡΙΝΗ

2024-03-20
H Kαθημερινή
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deep fake video, which is a digitally fabricated content generated by AI. The harm is realized as defamation and violation of the individual's rights, with the video widely disseminated online. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and reputational damage).

Meloni: Goes to court over deepfake porn video

2024-03-20
NEWS 24/7
Why's our monitor labelling this an incident or hazard?
The creation and distribution of a deepfake pornographic video involves AI-generated content that directly harms the individual's reputation and dignity, constituting a violation of rights under applicable law. The involvement of AI in producing the deepfake video and the resulting harm to the victim meets the criteria for an AI Incident. The legal action and investigation further confirm the materialization of harm due to AI misuse.

Giorgia Meloni falls victim to deepfake video

2024-03-20
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fabricated visual content. The creation and dissemination of such a video targeting a public figure constitutes a violation of rights and causes harm to the individual's reputation and dignity. Since the AI system's use directly resulted in this harm, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of applicable law protecting personal rights.

Italy: Meloni victim of deepfake porn - Takes legal action

2024-03-21
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The creation and distribution of deepfake pornographic videos of a public figure constitute a direct harm to the individual's rights and dignity, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The event describes actual harm caused by the AI system's malicious use, not just a potential risk. Therefore, this qualifies as an AI Incident.

Meloni: Italian prime minister victim of deepfake porn - Claims €100,000 in compensation

2024-03-20
CNN.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deep fake videos, which are digitally manipulated to falsely depict the Prime Minister in pornographic content. This constitutes a violation of rights and defamation, causing harm to the individual. The harm is realized and ongoing, as the videos were widely viewed. The use of AI in generating the videos is central to the incident. Hence, it meets the criteria for an AI Incident involving violations of human rights and harm to the individual.

Meloni victim of deepfake porn - Takes legal action

2024-03-21
Cretalive
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated deepfake technology to create pornographic videos without consent, which is a violation of human rights and defamation under applicable law. The videos have been widely disseminated, causing harm to the victim's reputation and dignity. The AI system's use directly led to this harm, qualifying this as an AI Incident under the framework. The legal proceedings and investigation are responses to the incident, not the main focus, so the classification is AI Incident rather than Complementary Information.

Meloni victim of deepfake porn: Takes legal action and seeks compensation

2024-03-20
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of deepfake pornographic videos, which are generated using AI-based deepfake technology. This has directly led to harm in the form of defamation and violation of personal rights, as well as emotional and reputational damage to the victim. The involvement of AI in creating the deepfake content and the resulting harm qualifies this as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

Meloni: Falls victim to deepfake porn - Seeks €100,000 in compensation

2024-03-20
enikos.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake videos, which are generated by AI systems designed to create realistic synthetic media. The harm is realized as the videos were widely disseminated, causing defamation and personal harm to the individual depicted. The legal actions and the demand for compensation further confirm the recognition of harm caused by the AI-generated content. Hence, this is a clear case of an AI Incident due to the direct harm caused by the AI system's outputs.

Giorgia Meloni: Takes legal action over deepfake porn video - Seeks €100,000 in compensation

2024-03-20
Newpost.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake) to create manipulated pornographic videos without consent, which is a clear violation of rights and causes harm to the individual depicted. The harm is realized as the videos were widely viewed, causing reputational and personal harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of rights and harm to the individual. The involvement of AI in the creation of the deepfake content and the resulting harm meets the criteria for an AI Incident under the OECD framework.

Meloni takes legal action over deepfake porn video featuring her face

2024-03-20
newsbreak
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is an AI-generated synthetic media technology. The video falsely depicts a public figure in a harmful and defamatory manner, constituting a violation of rights and reputational harm. The involvement of AI in creating the deepfake and the resulting harm to the individual meets the criteria for an AI Incident, as the AI system's use has directly led to harm (violation of rights and defamation).

Meloni: Victim of deepfake porn - Takes legal action and seeks compensation | Reports and news on the economy, business, the stock market and politics

2024-03-20
mononews
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used maliciously to create pornographic videos falsely depicting a public figure. This has directly led to harm in the form of defamation, violation of personal rights, and reputational damage, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The legal response and investigation further confirm the realized harm and the AI system's pivotal role in causing it.

Meloni: Claims €100,000 in compensation over deepfake porn

2024-03-20
parapolitika.gr
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI technology that impersonates a public figure in a pornographic context, which is a clear violation of rights and defamation. The involvement of AI in generating the deepfake is explicit, and the harm (defamation and violation of rights) has already occurred, leading to legal proceedings. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to a person (violation of rights and reputational harm).

Italy: Giorgia Meloni seeks €100,000 in compensation over deepfake videos

2024-03-20
The TOC
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of deepfake videos using AI technology, which directly led to harm in the form of defamation and violation of personal rights of Giorgia Meloni. The videos were widely viewed, indicating realized harm. The legal action and demand for compensation further confirm the recognition of harm caused by the AI-generated content. Hence, this is an AI Incident as per the definitions provided.

Giorgia Meloni seeks €100,000 in compensation over deepfake pornographic videos

2024-03-20
reader.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake videos, which are AI-generated manipulated media. The videos caused reputational harm and distress to the individual depicted, constituting a violation of rights and defamation, which fits the definition of harm under AI Incident (c) - violations of human rights or breach of obligations protecting fundamental rights. The harm has already occurred, and legal proceedings are in progress. Hence, this is an AI Incident rather than a hazard or complementary information.

Meloni depicted in porn video - Takes legal action and seeks compensation

2024-03-20
ΕΛΕΥΘΕΡΟΣ ΤΥΠΟΣ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI deepfake technology used maliciously to create pornographic videos of a public figure without consent, constituting a violation of rights and defamation. The harm is realized and significant, involving reputational damage and personal rights violations. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (violation of rights and defamation).

Giorgia Meloni victim of deepfake porn

2024-03-21
Patras Events
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to create manipulated pornographic content, which has directly led to harm in the form of defamation and violation of personal rights. The harm is realized and ongoing, as the videos were widely disseminated and have caused reputational damage. Therefore, this qualifies as an AI Incident under the framework, specifically a violation of human rights and breach of legal protections against defamation and privacy violations.

Meloni: Falls victim to deepfake porn

2024-03-20
ant1news.gr
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that digitally manipulates images or videos to superimpose a person's face onto another's body. The creation and distribution of such deepfake pornographic videos have directly led to reputational harm and legal consequences for the victim. Since the AI system's use has directly caused harm to an individual (a violation of rights and harm to the community), this qualifies as an AI Incident under the framework.

Fake porn video created of Meloni - Italy's prime minister goes to court

2024-03-20
Tribune.gr
Why's our monitor labelling this an incident or hazard?
The creation of fake pornographic videos using AI-based face-swapping or deepfake technology directly led to harm by defaming and violating the rights of the individual depicted. The AI system's use in generating these videos is central to the incident. The harm is realized, as the videos were widely viewed and caused reputational damage, prompting legal proceedings. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing violations of rights and harm to the individual.

Meloni victim of deepfake porn - Takes legal action and seeks compensation

2024-03-20
bankingnews.gr
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fake visual content. The creation and distribution of such videos constitute a violation of personal rights and defamation, which falls under violations of human rights and legal protections. Since the videos have been viewed millions of times and caused reputational damage, this is a realized harm directly linked to the AI system's use. Therefore, this event qualifies as an AI Incident.

Meloni seeks €100,000 in compensation over fake porn video featuring herself

2024-03-20
bankingnews.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a digitally fabricated pornographic video. The harm is realized as defamation and violation of personal rights, which falls under violations of human rights and breach of legal protections. The involvement of AI in creating the fake video directly led to reputational harm and legal action. Therefore, this qualifies as an AI Incident.

Giorgia Meloni victim of deepfake porn

2024-03-20
Real.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated synthetic media. The video has been widely disseminated, causing harm to the victim's reputation and violating legal rights, fulfilling the criteria for harm to a person and violation of rights. The AI system's use directly led to this harm. The ongoing legal proceedings further confirm the recognition of harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Meloni victim of deepfake porn - Takes legal action and seeks compensation

2024-03-20
ΡΕΠΟΡΤΕΡ
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used maliciously to create pornographic videos falsely depicting a public figure, which is a violation of personal rights and can be considered harm to the individual and communities. The videos have been widely disseminated, causing actual harm. Therefore, this qualifies as an AI Incident. The legal action and compensation claim are responses to the incident, but the main event is the harm caused by the AI-generated deepfake content.

Meloni victim of deepfake porn - Takes legal action and seeks compensation

2024-03-20
taxydromos.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are a product of AI systems capable of generating realistic fake content. The creation and distribution of these videos have directly caused harm to the victim's rights and reputation, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The legal pursuit and demand for compensation further confirm the recognition of harm caused by the AI system's misuse.

Meloni: Claims €100,000 in compensation over deepfake porn

2024-03-20
www.kathimerini.com.cy
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create a deep fake video that defames and harms the reputation of a public figure. The harm is realized as the video was widely viewed and caused reputational damage, which is a violation of rights. The AI system's use in generating the video is central to the incident, fulfilling the criteria for an AI Incident involving violation of rights and harm to the individual. The legal actions and compensation claim further confirm the recognition of harm caused by the AI-generated content.

Giorgia Meloni takes legal action over deepfake porn video - Seeks €100,000 in compensation - Fimotro

2024-03-20
Fimotro
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is generated by AI systems capable of synthesizing realistic fake videos. The harm is realized as the video caused reputational damage and legal action is underway for defamation, a violation of rights protected by law. The AI system's use directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Italy: Giorgia Meloni victim of deepfake porn - Takes legal action and seeks compensation

2024-03-20
The PressRoom
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake technology, an AI system that digitally manipulates images and videos to create realistic but fake content. The creation and dissemination of these deepfake pornographic videos have caused harm to the individual depicted, constituting a violation of rights and defamation. The harm has already occurred, and legal proceedings are underway. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (violation of rights and reputational damage).

Italy: Meloni seeks €100,000 in compensation over deepfake porn video | Parallaxi Magazine

2024-03-20
Parallaxi Magazine
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to create manipulated pornographic content. The harm caused includes defamation and violation of personal rights, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the harm has already occurred and legal proceedings are ongoing, this qualifies as an AI Incident rather than a hazard or complementary information.

Meloni victim of deepfake video - Takes legal action | Parallaxi Magazine

2024-03-20
Parallaxi Magazine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI technology used to generate realistic but fake visual content. The video has been posted online, causing reputational harm to a public figure, which is a violation of rights and defamation under applicable law. The harm is realized, not just potential, and the AI system's use is central to the incident. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm (violation of rights and defamation).

Meloni: Goes to court over deepfake porn video - Αγώνας της Κρήτης

2024-03-20
Αγώνας της Κρήτης
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is an AI system-generated manipulated content. The harm caused is a violation of personal rights and defamation, which falls under violations of human rights and legal protections. Since the AI system's use directly led to harm (defamation and reputational damage), this qualifies as an AI Incident under the framework.

Giorgia Meloni goes to court over deepfake pornographic videos - Dnews

2024-03-20
dnews.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake videos, which are generated by AI systems capable of creating realistic manipulated content. The harm caused includes defamation and violation of personal rights, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the harm has already occurred and legal action is underway, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.

Meloni called to testify at trial into fake porn video - Politics - Ansa.it

2024-03-19
ANSA.it
Why's our monitor labelling this an incident or hazard?
The creation and distribution of deepfake videos using AI-based face-swapping technology constitutes a violation of personal rights and can be considered a breach of applicable laws protecting individual rights. The harm is realized as the videos were widely viewed and caused reputational damage. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use (deepfake generation) in producing non-consensual pornographic content.

Meloni seeks symbolic compensation over deepfake porn images | BreakingNews.ie

2024-03-22
Breaking News.ie
Why's our monitor labelling this an incident or hazard?
The creation and distribution of deepfake pornographic images using AI technology directly harms the individual by damaging reputation and violating privacy rights. The involvement of AI in generating fabricated images that cause harm to a person fits the definition of an AI Incident, as it leads to violations of human rights and harm to the individual. The event describes actual harm that has occurred, not just a potential risk, and the AI system's use is central to the harm caused.

Report: Italy's Meloni victim of deep fake porn images, seeking symbolic compensation from suspects - WTOP News

2024-03-22
WTOP
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated deep fake technology to produce pornographic images without consent, which is a clear violation of human rights and personal dignity. The harm has materialized as the images were posted online, causing reputational and emotional damage to the victim, Italian Premier Giorgia Meloni. The legal proceedings and the victim's pursuit of damages confirm that the AI system's use has directly led to harm. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

Meloni seeks symbolic compensation over deepfake porn images

2024-03-22
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of deepfake pornographic images, which are generated using AI systems capable of fabricating realistic images. The harm to Giorgia Meloni's reputation and private life is direct and realized, fitting the definition of an AI Incident due to violation of rights and harm to the individual. The involvement of AI in generating the deepfake images is explicit and central to the harm caused.

Italy's Meloni victim of deep fake porn images, seeking symbolic...

2024-03-21
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deep fake technology) to create harmful content (pornographic images) without consent, which is a violation of human rights and personal dignity. The harm is realized as the images were posted online, causing reputational and emotional damage to the victim. The involvement of AI in the creation of these images and the resulting harm to the individual fits the definition of an AI Incident, as the AI system's misuse has directly led to harm.

Italy's Meloni seeks symbolic compensation from suspects over deepfake porn images

2024-03-22
The Independent
Why's our monitor labelling this an incident or hazard?
The event describes the creation and online posting of deepfake pornographic images using AI technology, which directly harms the victim's reputation and private life. The harm is realized and ongoing, and the AI system's use is central to the incident. Therefore, this qualifies as an AI Incident due to violation of rights and harm to the individual caused by AI-generated content.

Giorgia Meloni fights back against porn videos featuring her face

2024-03-20
The Star
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-based video manipulation technology (deepfake) to create non-consensual pornographic content featuring a public figure's face. This constitutes a violation of personal rights and can be considered harm to the individual and potentially to communities by spreading abusive content. Since the videos were available online and viewed extensively, the harm has materialized. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating harmful content.

Italian PM Giorgia Meloni sues father and son over viral deepfake porn

2024-03-21
NZ Herald
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated deepfake technology to create and distribute fake pornographic images of a public figure, causing reputational and personal harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and defamation. The legal action and identification of the perpetrators further confirm the harm has occurred. Hence, this is classified as an AI Incident.

Report: Italy's Meloni victim of deep fake porn images, seeking symbolic compensation from suspects

2024-03-21
Star Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deep fake) to create pornographic images without consent, which is a direct violation of human rights and privacy. The harm has already occurred as the images were posted online, and legal action is underway. Therefore, this is an AI Incident due to realized harm caused by the malicious use of an AI system.

Report: Italy's Meloni victim of deep fake porn images, seeking symbolic compensation from suspects

2024-03-21
San Diego Union-Tribune
Why's our monitor labelling this an incident or hazard?
The creation and posting of deep fake pornographic images using Meloni's face involves the use of AI systems for generating synthetic media. This has directly led to harm in terms of violation of personal rights and dignity, which falls under violations of human rights or breach of obligations under applicable law. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm. The legal proceedings and victim's pursuit of damages further confirm the realized harm.

Report: Italy's Meloni Victim of Deep Fake Porn Images, Seeking Symbolic Compensation from Suspects

2024-03-21
LatestLY
Why's our monitor labelling this an incident or hazard?
The creation and posting of deep fake pornographic images using AI technology directly harms the individual by violating her rights and causing reputational and emotional damage. The involvement of AI in generating these fake images and the resulting harm to a person fits the definition of an AI Incident, as it is a violation of human rights and a breach of obligations intended to protect fundamental rights. The legal proceedings and the victim seeking symbolic compensation further confirm the harm has occurred.

Italy's Meloni Seeks Symbolic Compensation from Suspects over Deepfake Porn Images

2024-03-22
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated deepfake technology to create and distribute fabricated pornographic images, which constitutes a violation of personal rights and causes harm to the individual depicted. The harm has already occurred, as evidenced by the legal trial and the injured party status of the victim. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating harmful content.

Italy's Meloni seeks symbolic compensation from suspects over deepfake porn images

2024-03-22
Financial Post
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of an AI system (deepfake technology) to create non-consensual pornographic images, which is a violation of personal rights and can be considered harm to the individual. Since the harm has already occurred and legal proceedings are underway, this qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

Italy's Meloni seeks compensation over deepfake porn

2024-03-22
The West Australian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake generation) to create fabricated pornographic images that harm a person's reputation and private life. The harm is realized as the images have been posted online, leading to legal action and claims for damages. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational damage).

Meloni seeks symbolic compensation over deepfake porn images

2024-03-22
Oxford Mail
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates fabricated images or videos. The event describes the creation and uploading of deepfake pornographic images, which directly harmed Giorgia Meloni's reputation and privacy. The involvement of AI in generating these images and the resulting harm to an individual's rights and reputation meets the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.

Report: Italy's Meloni victim of deep fake porn images, seeking symbolic compensation from suspects

2024-03-21
KION546
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deep fake technology, which is an AI system capable of generating realistic fake images. The harm caused is a violation of personal rights and defamation, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the harm has already occurred and legal proceedings are underway, this qualifies as an AI Incident.

Italy's Meloni seeks symbolic compensation from suspects over deepfake porn images

2024-03-22
Idaho State Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake technology, which is an AI system capable of generating realistic fabricated images. The harm has already occurred as the images were posted online, damaging the reputation and private life of Giorgia Meloni. This fits the definition of an AI Incident because the AI system's use directly led to a violation of rights and harm to the individual. The legal proceedings and symbolic compensation sought further confirm the recognition of harm caused by the AI system's misuse.

Italy's Meloni seeks symbolic compensation from suspects over deepfake porn images, report says

2024-03-22
The Herald Journal
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-based deepfake technology to create non-consensual pornographic images of a public figure, which is a clear violation of personal rights and dignity. The harm is realized as the images were posted online, and legal proceedings are in progress. The AI system's use directly led to this harm, fitting the definition of an AI Incident involving violations of human rights and harm to the individual. The case is not merely potential harm or a future risk, but an actual incident with ongoing legal consequences.

Meloni stands up to a fake porn video

2024-03-22
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The article describes a video where the Prime Minister's face was artificially superimposed onto pornographic content using graphics software, which is consistent with AI-based deepfake technology. The video was distributed widely, causing reputational and personal harm, which fits the definition of harm to a person and violation of rights. The AI system's use in creating the fake video directly led to this harm. The legal proceedings and police investigation confirm the harm has materialized. Hence, this is an AI Incident rather than a hazard or complementary information.

Meloni, victim of a porn video

2024-03-22
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-enabled deepfake technology to create and distribute a non-consensual pornographic video featuring the Prime Minister's face. This constitutes a violation of human rights and personal dignity, fulfilling the criteria for an AI Incident under the framework. The harm has already occurred, and the AI system's role in generating the manipulated content is pivotal to the incident. Therefore, this is classified as an AI Incident.

Report: Italy's Meloni victim of deep fake porn images, seeking symbolic compensation from suspects

2024-03-21
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deep fake generation) to create pornographic images without consent, which is a direct violation of human rights and privacy. The harm is realized as the images have been posted online, and the victim is seeking legal redress. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual).

Meloni seeks symbolic compensation over deepfake porn images

2024-03-22
Guernsey Press
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create fabricated pornographic images, which have been posted online causing harm to the victim's reputation and private life. This harm is a violation of rights and is realized, not just potential. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to a person, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals.

Meloni seeks symbolic compensation over deepfake porn images

2024-03-22
Chelmsford Times
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of deepfake pornographic images using AI technology, which directly harms the individual depicted by damaging her reputation and private life. The use of AI to fabricate such images and the resulting harm to the victim's rights and dignity meet the criteria for an AI Incident, as the AI system's use has directly led to violations of personal rights and harm to the individual.

Giorgia Meloni seeks €100,000 in compensation in deepfake video case

2024-03-22
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The use of face-morphing technology to create a deepfake video constitutes the use of an AI system. The video was posted without consent, causing harm to Giorgia Meloni's reputation and violating her rights, which fits the definition of an AI Incident involving violations of human rights or breach of obligations protecting fundamental rights. Therefore, this event is classified as an AI Incident.

Italian PM Giorgia Meloni's deepfake video goes viral; father and son uploaded it to an adult site together

2024-03-21
OneIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is created using AI-based deep learning methods. The video has been distributed widely, causing reputational harm to a public figure, which constitutes a violation of rights and harm to the individual. The involvement of AI in generating the deepfake and the resulting harm meets the criteria for an AI Incident, as the AI system's use has directly led to harm (defamation and violation of rights).

Man who made Giorgia Meloni's deepfake video caught; Italian PM demands a large sum in compensation

2024-03-22
hindi
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to create manipulated video content that caused harm to an individual by violating her rights and damaging her reputation. The harm has already occurred, and legal proceedings are underway. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and breach of obligations protecting fundamental rights.

Father and son made deepfake videos of Giorgia Meloni; Italian PM demands this much in damages

2024-03-21
Hindustan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system in the form of deepfake technology used to create manipulated videos of a public figure without consent, leading to reputational harm and violation of rights (defamation). The creation and dissemination of the deepfake video have directly led to harm to the individual's rights and reputation, fulfilling the criteria for an AI Incident. The legal response and fine are consequences of this harm. Therefore, this is classified as an AI Incident.

Father and son's vile act: deepfake images and videos made of Italy's PM; Giorgia Meloni to appear in court on 2 July

2024-03-22
Dainik Jagran
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake AI technology to create non-consensual pornographic images and videos of a public figure, which were then distributed online. This misuse of AI has directly led to harm in terms of violation of privacy, dignity, and potentially other legal rights. The involvement of AI in generating the deepfake content and the resulting harm to the individual meets the criteria for an AI Incident as defined. The case is ongoing with legal proceedings, but the harm has already occurred.

Italian PM Giorgia Meloni demands $109,345 in damages over adult videos made using deepfakes

2024-03-21
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake technology) was used to create manipulated adult videos that caused reputational harm to a public figure. This constitutes a violation of rights (defamation and likely privacy rights) and harm to the individual. Since the harm has already occurred (videos were made and broadcast online), this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of applicable law protecting fundamental rights.

Italy PM Giorgia Meloni seeks 100,000 euros in deepfake video damages

2024-03-21
Patrika News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (deepfake video generation) to create manipulated content that has directly harmed a person (the Prime Minister) by damaging her reputation and causing personal and social harm. The involvement of AI in generating the deepfake video and the resulting harm to the individual's rights and reputation fits the definition of an AI Incident, as the harm has already occurred and legal action is underway.

Italy's PM Meloni falls victim to deepfake, seeks $100,000 in damages

2024-03-22
India TV Hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated video content that has been distributed online, causing harm to the subject's reputation and dignity. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The legal action and arrest of the accused further confirm the harm has materialized, not just a potential risk.

AI-generated fake video of a woman used to sell pills for men's private parts!

2024-03-20
LallanTop
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create a fake video of a real person without consent, which is then used to promote pharmaceutical products fraudulently. This misuse of AI has directly led to reputational harm and violation of the woman's rights, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The harm is realized and ongoing, as the video has been widely viewed and the victim has publicly expressed distress and is seeking legal recourse.

Italian Prime Minister Giorgia Meloni victim of deepfake: seeks Rs 90 lakh in compensation after accused put her face in an adult film

2024-03-21
Haribhoomi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create manipulated video content that has been distributed online, causing reputational harm to a public figure. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The harm is realized, not just potential, as the video has been widely viewed and legal proceedings are ongoing.

Italian PM's deepfake video: PM Meloni seeks compensation after her face was superimposed on a porn star

2024-03-21
Times Network Hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create and distribute manipulated content that harms the reputation and rights of a person, fulfilling the criteria for an AI Incident under violations of human rights and breach of applicable law. The harm is direct and realized, as the video was posted online and legal action is underway.

Italian Prime Minister Giorgia Meloni seeks compensation in deepfake video case; matter before the Sassari court

2024-03-22
GNS News
Why's our monitor labelling this an incident or hazard?
The presence of a deepfake video indicates the involvement of an AI system capable of generating realistic fake videos. The harm is realized as it affects the reputation and rights of the Prime Minister, leading to legal action and a claim for compensation. Since the AI-generated content has directly caused harm, this qualifies as an AI Incident under the framework, specifically a violation of rights and harm to the individual.

Italy's PM Meloni victim of deepfake: obscene video posted on porn site; Prime Minister files Rs 91 lakh suit

2024-03-21
LatestLY Hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system in the form of deepfake technology used to create synthetic media that falsely depicts the Prime Minister in an explicit video. This use of AI has directly led to harm, specifically a violation of personal rights and reputational harm, which falls under violations of human rights and breach of applicable law. The legal response and investigation confirm the harm has occurred. Therefore, this qualifies as an AI Incident.

WhatsApp reins in deepfake videos; users can run checks in four languages

2024-03-21
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the deepfake detection chatbot) designed to analyze video content and identify deepfakes. The chatbot's deployment aims to prevent harm caused by misinformation and reputational damage from deepfake videos, which are recognized harms to communities and individuals. However, the article does not report any actual harm caused by the AI system or its malfunction; rather, it describes a new tool intended to mitigate such harms. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about a governance and societal response to AI-driven misinformation risks.

Italian Prime Minister Meloni seeks compensation from suspects over 'deepfake' obscene images

2024-03-22
IBC24 News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves 'deepfake' images, which are AI-generated manipulated content. The creation and distribution of such images have directly harmed the victim's dignity and privacy, constituting a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's misuse has directly led to harm to a person (violation of rights and personal dignity).

Italian PM seeks heavy compensation over deepfake video; father and son had put Meloni's face in place of an adult star's

2024-03-23
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create manipulated video content that caused harm to a person's reputation and violated their rights. The harm has already occurred, with the video being viewed millions of times and legal proceedings underway. This fits the definition of an AI Incident because the AI system's use directly led to a violation of rights and harm to the individual. Therefore, the classification is AI Incident.

Italian PM Giorgia Meloni seeks €100,000 in compensation in deepfake video case

2024-03-21
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (deepfake technology using AI and machine learning) to create manipulated videos that have caused reputational harm to a public figure. The harm is realized as defamation and violation of rights, which fits the definition of an AI Incident (violation of human rights and harm to communities). The article describes the development, use, and malicious misuse of AI systems leading to direct harm, with legal proceedings underway. Hence, it is classified as an AI Incident.

Meloni: porn videos with her face; prime minister takes the case to court

2024-03-20
TA - Thüringer Allgemeine
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of deepfake pornographic videos using AI-based face-swapping technology, which directly harms the individual by violating her rights and causing reputational damage. The AI system's use in generating manipulated videos that were widely viewed constitutes an AI Incident under the framework, as it involves violations of rights and harm to the individual. The legal action and ongoing investigation confirm the harm has occurred and is being addressed, fitting the definition of an AI Incident rather than a hazard or complementary information.

Porn films with the face of Italy's head of government: Giorgia Meloni goes to court

2024-03-19
T-online.de
Why's our monitor labelling this an incident or hazard?
The use of AI-based deepfake technology to create and distribute manipulated pornographic content with the face of a public figure constitutes a violation of personal rights and can be considered harm to the individual and communities. Since the AI system's use directly led to this harm, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

Italy's head of government Meloni fights back against fake porn

2024-03-20
Bild
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (video face-swapping software, a type of deepfake AI) used maliciously to create fake pornographic content featuring a public figure without consent. This has led to a violation of personal rights and reputational harm, which fits the definition of an AI Incident under violations of human rights or breach of applicable law. The harm is realized, not just potential, and the AI system's use is central to the incident. Therefore, this event qualifies as an AI Incident.

Sardinia: Italy's head of government Giorgia Meloni sues over fake porn featuring her face

2024-03-20
Spiegel Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are AI systems used to manipulate and generate realistic fake content. The harm is realized as the videos misuse the Prime Minister's likeness without consent, constituting a violation of rights and personal harm. The legal complaint and demand for damages confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Italy: Giorgia Meloni fights back against porn videos featuring her face

2024-03-19
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically deepfake video technology, which is used to create manipulated pornographic videos. The harm caused includes violation of personal rights and reputational damage, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the harm has already occurred and legal proceedings are underway, this qualifies as an AI Incident rather than a hazard or complementary information.

Men charged over fake porn featuring Giorgia Meloni

2024-03-20
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
The manipulated pornographic videos were created with video software that can reasonably be inferred to involve AI-based deepfake technology, since Meloni's head was superimposed onto porn actors' bodies. This directly caused reputational harm and defamation, a violation of rights under applicable law. The videos were publicly distributed and viewed millions of times, confirming realized harm. The cyberattack on the Instagram account, while quickly mitigated, also involved malicious use of digital tools to spread false information. This event therefore qualifies as an AI Incident due to the direct harm caused by AI-generated deepfake content and malicious digital manipulation.

Meloni takes action against porn videos featuring her face

2024-03-19
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI technology (video software used to copy Meloni's face onto porn actors) that led to the creation and distribution of harmful deepfake videos. This caused reputational and personal harm to Meloni, a violation of her rights. The harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm (violation of rights and reputational damage).

Meloni in court in Italy: prime minister fights back against scandal videos

2024-03-19
Merkur.de
Why's our monitor labelling this an incident or hazard?
The use of video software to superimpose faces in videos is a known application of AI (deepfake technology). The manipulated videos have been distributed and viewed millions of times, causing harm to Meloni's reputation and personal rights. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights). The hacking incident, while harmful, does not explicitly involve AI and thus is not considered here. The article focuses on the legal case against the creators of the deepfake videos, confirming the harm has occurred.

Prime minister fights back against porn videos featuring her face

2024-03-19
MOPO.de
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (video face-swapping software) used to create manipulated pornographic content without consent, which has led to harm in the form of violation of personal rights and reputational damage to a public figure. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual). The legal proceedings and the victim's response do not change the classification; they are part of the incident's context. Therefore, this is an AI Incident.

Fake porn with her face: Giorgia Meloni demands €100,000 in damages

2024-03-19
watson.ch
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of fake pornographic videos using video software that superimposes Giorgia Meloni's face onto other bodies. This is a clear case of AI-generated deepfake content causing harm to an individual's rights and reputation. The harm is realized, as the videos were viewed millions of times and have led to legal action seeking damages. The AI system's use in generating these videos directly led to violations of rights and harm to the individual, fitting the definition of an AI Incident.

Deepfake porn featuring the Italian prime minister: Meloni now demands over €100,000 in damages

2024-03-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deep learning used to create deepfake videos) that have been used maliciously to produce non-consensual pornographic content, causing harm to the subject's rights and reputation. This harm has materialized, as evidenced by the legal claims and police investigations. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content violating personal and possibly human rights.

Faked sex videos with Meloni's face: two men on trial

2024-03-20
stol.it
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-based video manipulation (deepfake) technology to create and distribute fake pornographic videos without consent, which constitutes a violation of personal rights and can be considered harm to the individual and community. Since the AI system's use directly led to this harm, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

Giorgia Meloni fights back against porn deepfakes

2024-03-19
Baden online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create manipulated videos that have caused harm to an individual by violating her rights and dignity. The harm is realized and direct, as the videos were widely distributed and viewed, leading to reputational damage and personal distress. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of legal protections. The legal proceedings and the described harm confirm the incident status rather than a mere hazard or complementary information.

Giorgia Meloni fights back against deepfake porn - two men on trial

2024-03-20
DEWEZET
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI technology (deepfake video software) used maliciously to create non-consensual pornographic videos, which directly harms the individual involved (Giorgia Meloni) by violating her rights and dignity. The videos were widely distributed and viewed, indicating realized harm. The involvement of AI in the creation of these videos and the resulting legal action and harm to the victim fit the definition of an AI Incident, as the AI system's use directly led to a violation of rights and harm to the individual.

Italian Prime Minister seeks compensation over porn videos featuring her face - Diario Primicia

2024-03-20
Diario Primicia
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, which is an AI system capable of generating realistic manipulated videos. The creation and dissemination of these videos have caused harm to the Prime Minister's reputation and personal rights, constituting a violation of rights under applicable law. Since the harm has already occurred due to the AI system's malicious use, this qualifies as an AI Incident rather than a hazard or complementary information.

Giorgia Meloni will testify in court over a fake porn video featuring her face that circulated in 2020

2024-03-22
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the accused used software programs to manipulate images and place Meloni's face onto an existing pornographic video, which is a clear example of AI-generated deepfake content. This manipulation led to reputational damage and defamation, which are violations of personal rights and can be considered harm to the individual. The AI system's use directly caused this harm. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the malicious use of AI for face manipulation and defamation.

Meloni called to testify in the trial in which she seeks 100,000 euros over a fake porn video

2024-03-20
infobae
Why's our monitor labelling this an incident or hazard?
The article describes the creation and publication of manipulated pornographic videos using software that modifies videos by replacing faces, which is a known AI application (deepfake technology). The videos caused harm by violating the rights of the Prime Minister and damaging her reputation. The harm has already occurred, and the legal process is underway. Hence, the event meets the criteria for an AI Incident due to the direct use of AI-generated manipulated content causing harm to a person's rights and reputation.

Giorgia Meloni goes to court over fake sex videos featuring her face: she demands 100,000 euros in compensation

2024-03-20
La Nacion
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of software to create deepfake videos, which are AI systems that generate manipulated realistic videos by altering original footage. The harm caused is defamation and reputational damage to a public figure, which constitutes a violation of rights under applicable law. Since the AI-generated deepfake videos have already been disseminated and caused harm, this qualifies as an AI Incident. The involvement of AI in the creation of the videos and the resulting harm is direct and material.

Meloni stands up to a fake porn video

2024-03-22
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event describes a clear case of harm caused by the use of AI-based image manipulation software to create a non-consensual deepfake video of a public figure. The harm is realized as reputational damage and violation of rights, with legal proceedings underway. The AI system's use in modifying the video directly led to the harm, fulfilling the criteria for an AI Incident. The article also mentions investigation and legal actions, but the primary focus is on the harm caused by the AI-enabled deepfake, not just the response, so it is not Complementary Information.

Giorgia Meloni will testify at trial over pornographic videos that used her image

2024-03-21
El Universal
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of pornographic videos that were manipulated to include the face of Giorgia Meloni, which is a known application of AI-based deepfake technology. The harm is realized as defamation and violation of personal rights, with the AI system playing a pivotal role in fabricating the videos. The event involves the use and misuse of AI systems to cause harm, meeting the criteria for an AI Incident under violations of human rights and breach of obligations protecting fundamental rights.

Giorgia Meloni will testify before a court over the porn videos featuring her face

2024-03-22
20 minutos
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of deepfake videos, which are generated by AI systems that manipulate images and videos to superimpose faces onto other bodies. This AI use has directly caused harm to Giorgia Meloni by violating her personal rights and causing reputational damage. The legal action and demand for damages confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Giorgia Meloni demands 100,000 euros over sexual deepfake

2024-03-20
Milenio.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically deepfake technology, used to create manipulated videos that caused harm to a person (Giorgia Meloni) by violating her rights and dignity. The videos were published and viewed by millions, constituting realized harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of rights and personal harm. The legal proceedings and damages sought further confirm the materialization of harm rather than a potential risk or complementary information.

Meloni will testify in the trial over the distribution of "deepfake" porn videos in which she appears

2024-03-20
La Razón
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-generated deepfake videos that have caused harm to an individual by violating her rights and causing reputational damage. The AI system's use directly led to harm (violation of rights and harm to the individual), qualifying this as an AI Incident. The involvement of AI (deepfake software) is explicit, and the harm has materialized, not just potential.

Italy: Men accused of fabricating fake pornographic images of the prime minister

2024-03-21
el Nuevo Herald
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of fake pornographic images using AI-generated face replacement (deepfakes) directly violates the individual's rights and causes harm to their reputation and dignity. Since the event describes the actual occurrence of this harm and the legal case arising from it, it qualifies as an AI Incident. The AI system's use in fabricating the images is central to the harm caused. Therefore, this event is classified as an AI Incident.

This is the compensation Giorgia Meloni, Italy's prime minister, is claiming from the father and son who put her face in porn videos - World - ABC Color

2024-03-20
ABC Digital
Why's our monitor labelling this an incident or hazard?
The incident involves the use of AI or AI-related technology to create manipulated videos (deepfakes) that harm the reputation and rights of the individual depicted. The harm is realized as it has led to legal action and claims for damages. This fits the definition of an AI Incident because the AI system's use (deepfake generation) has directly led to a violation of rights and harm to the individual, fulfilling criteria (c) and (d) under AI Incident definitions.

Giorgia Meloni demands compensation over the 'fake' porn videos featuring her face

2024-03-20
Urgente 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake applications to create fake pornographic videos with Giorgia Meloni's face, which have circulated online causing reputational and personal harm. This is a clear case of an AI system's use leading directly to harm, specifically a violation of rights and abuse. Therefore, it qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to a person.

Giorgia Meloni claims 100,000 euros over the fake porn videos of her

2024-03-22
Diario Crítico
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to create manipulated videos that have caused harm to an individual (reputational and personal rights harm). The harm has already occurred as the videos were widely viewed, and legal action is underway. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the individual. The mention of legislation and regulatory responses is complementary but secondary to the primary incident of harm caused by the deepfake videos.

March 21, 2024

2024-03-21
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used maliciously to create realistic but fake videos that harm the reputation and rights of a person. The videos were widely disseminated and viewed, constituting realized harm. The legal actions and the description of the videos as deepfakes confirm the AI involvement and the direct link to harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.

In July, Meloni will go to court as the victim of deepfake porn videos made by...

2024-03-20
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-powered deepfake software to create and distribute manipulated videos, which constitutes the use of an AI system. The harm caused includes violation of personal rights and reputational damage, which falls under violations of human rights and breach of applicable law protecting fundamental rights. Since the harm has already occurred and legal proceedings are underway, this is an AI Incident rather than a hazard or complementary information.

Fake explicit videos with the premier's face: Meloni a witness in Sassari - News - Ansa.it

2024-03-19
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of deepfake videos using software that manipulates video content by replacing faces, a task typically performed by AI systems. The harm is direct and realized, involving defamation and violation of the individual's rights. The legal proceedings and the victim's civil claim further confirm the harm caused. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-enabled deepfake video manipulation.

Fake porn videos featuring Giorgia Meloni's face: the premier's testimony requested

2024-03-19
Gazzetta del Sud
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is an AI system capable of generating manipulated video content. The publication of these videos caused defamation and harm to the individual's reputation, constituting a violation of rights. The legal case and requested testimony confirm that harm has occurred. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Fake porn videos with Meloni's face, the premier called to testify: she seeks 100,000 euros

2024-03-20
Fanpage
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are AI systems that generate synthetic media. The harm caused is defamation and violation of the individual's rights, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the harm has already occurred and legal action is ongoing, this qualifies as an AI Incident.

They published explicit videos featuring Meloni's face. The premier will appear in court and will seek 100,000 euros for women victims of violence

2024-03-19
QuotidianoNet
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI software for video manipulation (deepfake technology) to create and distribute fake pornographic videos, which constitutes a violation of rights and harm to the individual involved. The harm has already occurred as the videos were publicly available for months and viewed millions of times. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating defamatory content.

Giorgia Meloni, her face in counterfeit porn videos: the premier summoned to testify

2024-03-19
Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software to generate deepfake videos, which are manipulated pornographic videos replacing the original actresses' faces with that of Giorgia Meloni. This is a clear example of an AI system's use leading to harm—specifically, defamation and violation of personal rights. The harm is realized, as the videos were published and caused reputational damage, prompting legal action. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm (violation of rights and defamation).

The explicit deepfake video of Giorgia Meloni: a 73-year-old and a 40-year-old on trial

2024-03-20
Open
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI software for video manipulation (deepfake technology) to create non-consensual pornographic content, which is a clear violation of rights and causes harm to the individual involved. The harm has already occurred as the videos were widely viewed. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of rights and harm to the person.

Meloni's face in counterfeit explicit videos: the premier a witness in court in Sassari - L'Unione Sarda.it

2024-03-19
L'Unione Sarda.it
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of deepfake videos using software that manipulates video content by superimposing the face of the premier onto pornographic actors. This is a clear use of AI technology (deepfake generation) leading to harm—defamation and violation of personal rights. The harm is realized and ongoing, as the videos were widely viewed and the premier is pursuing legal action. Hence, it meets the criteria for an AI Incident because the AI system's use directly led to harm (violation of rights and reputational damage).

Video porno "fake" di Meloni, a processo due sardi: 100mila euro di danni alle donne vittime di violenza - Secolo d'Italia

2024-03-20
Secolo d'Italia
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of deepfake videos, which are AI-generated synthetic media, placing Giorgia Meloni's face onto pornographic content without her consent. The AI system's use directly led to realized harm: violation of personal rights and reputational damage, confirmed by the criminal trial and the 100,000-euro damages claim. The involvement of AI in generating the deepfake content is explicit, and the harm has materialized, not just potential. Therefore, this event qualifies as an AI Incident.

Porn with Giorgia Meloni's face, the premier against the deepfake's creators: "100,000 euros in compensation"

2024-03-19
lacronaca24.it
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake video manipulation software) to create and distribute harmful content, which constitutes a violation of rights and harm to the individual depicted. The harm has already occurred, and legal proceedings are in place. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (violation of rights and reputational damage).