Italian Privacy Authority Investigates FakeYou AI Voice App Over Deepfake Risks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Italian Data Protection Authority has launched an investigation into FakeYou, an AI app that generates synthetic voices of public figures. The probe follows concerns about potential misuse, including the spread of deepfake audio impersonating politicians, raising risks of privacy violations and misinformation, though no specific harm has yet been reported.[AI generated]

Why's our monitor labelling this an incident or hazard?

The FakeYou app uses AI to generate synthetic voices, making it an AI system. The privacy authority's investigation was triggered by concerns about potential misuse of personal data and the risk of misleading audio clips that could deceive people. However, the article does not describe any realized harm, such as injury, rights violations, or disruption, caused by the AI system. The event concerns plausible risks and the regulatory response, not an incident with actual harm. It therefore qualifies as an AI Hazard, given the plausible future harm from misuse of AI-generated synthetic voices and personal data, but not yet as an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Transparency & explainability; Robustness & digital security; Safety; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Government; General public

Harm types
Human or fundamental rights; Public interest; Reputational

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard


"Absurd" audio clips in Giorgia Meloni's voice: the Garante investigates the FakeYou app

2022-10-12
Gazzetta del Sud
Why's our monitor labelling this an incident or hazard?
The FakeYou app uses AI to generate synthetic voices, making it an AI system. The privacy authority's investigation was triggered by concerns about potential misuse of personal data and the risk of misleading audio clips that could deceive people. However, the article does not describe any realized harm, such as injury, rights violations, or disruption, caused by the AI system. The event concerns plausible risks and the regulatory response, not an incident with actual harm. It therefore qualifies as an AI Hazard, given the plausible future harm from misuse of AI-generated synthetic voices and personal data, but not yet as an AI Incident.

The privacy Garante has opened an inquiry into the company that lets anyone..

2022-10-12
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the FakeYou app, which generates synthetic voices) and concerns about potential misuse of personal data (the voice data of famous individuals). However, no actual harm or incident has been reported; the privacy authority is investigating potential risks and seeking information. This fits the definition of an AI Hazard: the development and use of the AI system could plausibly lead to harm related to personal data misuse, but no harm has yet occurred or been confirmed.

Deepfakes: Garante opens inquiry into app that fakes voices - Software e App

2022-10-12
ANSA.it
Why's our monitor labelling this an incident or hazard?
The FakeYou app uses AI to generate synthetic voices, making it an AI system. The privacy authority's investigation responds to potential risks of misuse of personal data (voice), which could lead to violations of privacy rights. However, the article does not report any actual harm or incident, only the potential for harm and the regulatory scrutiny. It therefore qualifies as an AI Hazard: the AI system's use could plausibly lead to harm, but none has been confirmed or reported at this stage.

The privacy Garante opens an inquiry into FakeYou, the app that simulates celebrities' voices

2022-10-12
Gazzetta di Mantova
Why's our monitor labelling this an incident or hazard?
FakeYou is an AI system that generates synthetic voices of celebrities, which involves processing personal data (voice). The privacy authority's investigation is triggered by concerns about potential misuse of these AI-generated voices, which could lead to violations of privacy rights and misuse of personal data. Since no actual harm has been reported yet, but the potential for harm (privacy violations, misuse of personal data) is credible and plausible, this event qualifies as an AI Hazard rather than an AI Incident. The focus is on the plausible future risk from the AI system's use, not on realized harm.

Deepfakes: the Privacy Garante opens an inquiry into the app that fakes celebrities' voices

2022-10-12
Gazzetta di Mantova
Why's our monitor labelling this an incident or hazard?
The FakeYou app uses AI to generate synthetic voices of celebrities, which involves processing personal data (voice). The privacy authority's investigation is a governance response to potential privacy violations and risks of misuse of AI-generated voice data. There is no indication that harm has yet occurred; only potential risks are being assessed. This is therefore Complementary Information about regulatory oversight and governance related to AI risks, not an AI Incident or Hazard itself.

The Privacy Garante opens an inquiry into FakeYou: all the doubts about the app that clones Giorgia Meloni's voice

2022-10-13
Fanpage
Why's our monitor labelling this an incident or hazard?
The app uses AI to clone voices, which involves processing personal data (voice samples). The privacy authority's investigation is due to concerns about improper data collection and potential misuse, which could lead to violations of privacy rights (a human rights violation). However, the article does not report any realized harm or incident, only the plausible risk and regulatory scrutiny. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm but no harm has yet been confirmed or reported.

FakeYou, the app that fakes voices: privacy Garante sounds the alarm

2022-10-13
Il Mattino
Why's our monitor labelling this an incident or hazard?
The app FakeYou uses AI to generate realistic fake voices, which can be used to spread misinformation or impersonate individuals, posing risks to privacy and potentially causing harm to individuals and communities. The privacy authority's investigation highlights concerns about these potential harms. Since the article emphasizes potential risks and an ongoing inquiry without confirming actual harm yet, this qualifies as an AI Hazard rather than an AI Incident. The AI system's use could plausibly lead to harms such as violations of personal rights and misinformation dissemination, but no direct harm is reported as having occurred so far.

The privacy Garante has opened an inquiry into FakeYou, the app that simulates politicians' voices

2022-10-13
Giornalettismo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (FakeYou) that generates deepfake audio and video content. The misuse of this AI system has already led to the dissemination of fake audio clips impersonating politicians, which constitutes a violation of personal rights and identity theft, a form of harm to individuals. However, the article mainly reports the regulatory authority's investigation and concerns rather than detailing a new or specific incident of harm. Therefore, this event is best classified as Complementary Information, as it provides important context and governance response to existing AI-related harms rather than reporting a new AI Incident or a plausible future hazard.

Privacy: FakeYou's "counterfeit voices" in the Garante's sights

2022-10-13
Cor.Com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (FakeYou app) that generates synthetic voices, which is an AI application. The authority's investigation is triggered by concerns about potential risks of misuse of personal data (voice), which could plausibly lead to harm such as privacy violations. However, the article does not report any realized harm or incident; it focuses on the potential risks and regulatory scrutiny. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving privacy violations, but no incident has yet occurred.