AI Photo Editing Apps Raise Digital Identity Theft Risks, Experts Warn

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybersecurity expert Alexander Forasko warns that AI-powered photo editing apps like Lensa may expose users to digital identity theft. Uploaded personal images can be misused to create deepfakes, fake accounts, and scams, potentially leading to financial fraud, although no specific incidents have been reported yet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes plausible future harm stemming from the use of AI-powered photo editing apps that process personal images and could be exploited for identity theft and deepfake scams. Since no actual harm or incident is reported, but the risk is credible and linked to AI system use, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Transparency & explainability; Accountability; Safety; Respect of human rights

Industries
Consumer services; Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property; Reputational; Human or fundamental rights; Psychological

Severity
AI hazard

AI system task
Content generation; Recognition/object detection


Articles about this incident or hazard

Beware: personal photo editing apps expose you to digital identity theft - Twasul Electronic Newspaper

2022-12-03
صحيفة تواصل الاخبارية www.twasul.info
Why's our monitor labelling this an incident or hazard?
The article describes plausible future harm stemming from the use of AI-powered photo editing apps that process personal images and could be exploited for identity theft and deepfake scams. Since no actual harm or incident is reported, but the risk is credible and linked to AI system use, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Expert warns against certain smartphone and smart device apps - Awqat Al-Sham News Agency

2022-12-05
وكالة أوقات الشام الإخبارية
Why's our monitor labelling this an incident or hazard?
The article highlights plausible future harms stemming from the use of AI-based photo editing apps, such as identity theft and deepfake scams, which could lead to violations of rights and harm to individuals. Since no actual harm or incident is reported, and the focus is on warning about potential risks, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Beware: apps on smart devices can lead to digital identity hijacking

2022-12-04
صحيفة عكاظ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-based photo editing and deepfake generation) and discusses the plausible future harm of digital identity theft and fraud resulting from misuse of these AI technologies. Since no actual harm is reported but a credible risk is described, this qualifies as an AI Hazard rather than an Incident. The article does not describe a realized harm but warns about potential misuse and its consequences.
Saraya News Agency: Expert warns against personal photo editing apps

2022-12-04
وكالة أنباء سرايا (حرية سقفها السماء)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based photo editing apps like Lensa and warns about the potential misuse of personal images for identity theft and deepfake scams. While these risks are credible and plausible, the article does not describe any realized harm or incident resulting from these AI systems. Therefore, the event qualifies as an AI Hazard because it highlights plausible future harm stemming from the use of AI systems but does not report an actual AI Incident. It is not Complementary Information since the main focus is on the warning about potential misuse rather than updates or responses to a past incident.
Beware: personal photo editing apps expose you to digital identity theft - Al-Manatiq Saudi Newspaper

2022-12-03
صحيفة المناطق السعودية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-based photo editing apps) whose use can plausibly lead to harm, specifically digital identity theft and fraud via deepfake technology. Although no specific incident of harm is reported as having occurred, the expert's warning highlights a credible risk of future harm stemming from the use of these AI systems. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Personal photo editing apps may cause digital identity theft - Safari Net

2022-12-03
سفاري نت
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based photo editing applications and the potential misuse of AI-generated deepfakes for identity theft and fraud, which are plausible harms linked to AI system use. However, it does not describe a specific event where harm has already occurred due to these AI systems. Instead, it warns about possible risks and advises caution. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm but no concrete incident is reported.
Made your avatar on Lensa? Careful: it puts your personal data at risk

2022-12-01
uol.com.br
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Lensa's AI for avatar generation) and discusses risks related to its use, including data privacy concerns, potential misuse or unauthorized commercial use of personal data, and biased outputs that could perpetuate discrimination. Although no direct harm has been reported, plausible future harms include privacy violations, data breaches, and discriminatory impacts on marginalized groups. These risks stem from the AI system's development and use, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. The article does not describe a realized harm event, nor is it primarily about responses or updates, and it clearly involves AI and its risks, so it is neither Complementary Information nor unrelated.
Is making an avatar on Lensa safe? Understand how your data is used and how to protect yourself

2022-12-02
TechTudo
Why's our monitor labelling this an incident or hazard?
The article centers on the privacy and data use implications of an AI-powered avatar creation app, discussing potential risks and user protections but without describing any realized harm or incident. It does not report an AI Incident (no harm occurred), nor does it describe a specific AI Hazard event (no imminent or plausible harm event is described). Instead, it provides contextual information, expert opinions, and guidance on data privacy and user rights, which fits the definition of Complementary Information as it enhances understanding of AI system impacts and governance without reporting a new incident or hazard.
Is Lensa safe? Automatic image app collects data from your phone

2022-12-01
Terra
Why's our monitor labelling this an incident or hazard?
The Lensa app uses AI to generate avatars, so an AI system is involved. The article focuses on data collection and privacy issues, which relate to potential violations of user rights if data is misused. However, the article does not report any realized harm or legal violations, only the potential risks inherent in the app's data practices. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (privacy violations, misuse of facial data), but no direct or indirect harm has been reported so far.
Cyber experts train lens on Lensa-like app

2022-12-17
Economic Times
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Lensa AI and similar apps) that use AI to generate images from user photos. However, it does not describe any realized harm or incident caused by these AI systems. Instead, it presents expert warnings and potential risks related to privacy and data security, which could plausibly lead to harm if policies are violated or security is breached. Therefore, this qualifies as an AI Hazard, as it concerns plausible future harm from the development and use of AI systems but no actual incident has been reported.
Lensa AI Can Generate Nudes! So, Maker Prisma Lab Is Working On Preventing Abuse Of The App

2022-12-14
Mashable India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lensa AI) that generates images, including non-consensual nude images, which is a violation of personal rights and can cause harm to individuals. The misuse of the AI system has directly led to the creation and dissemination of harmful content. The company's response to build filters and place responsibility on users does not negate the fact that harm has occurred. The involvement of Stable Diffusion as the underlying model trained on unfiltered data further supports the AI system's role in enabling this harm. Hence, this event meets the criteria for an AI Incident due to realized harm from the AI system's use and misuse.
Are Lensa's magic avatars based on stolen art?

2022-12-15
Euronews English
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lensa AI using Stable Diffusion) whose development and use rely on training data that includes copyrighted artworks without explicit consent from the artists. This implicates potential violations of intellectual property rights, which is a recognized harm category. However, the article mainly presents the controversy, ethical concerns, and the artists' reactions without describing any concrete incident of harm having occurred or legal findings against the AI system or its operators. Therefore, the situation represents a plausible risk of harm (intellectual property rights violations) but not a confirmed incident. It is best classified as Complementary Information because it provides context, ongoing societal and ethical discussions, and responses related to AI-generated art and its impact on artists, rather than reporting a specific AI Incident or Hazard.
What Is Lensa AI App -- And Is it Dangerous for Your Privacy?

2022-12-14
Entrepreneur
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Lensa AI) that processes user images to generate portraits. While it raises significant concerns about privacy and the sexualization of women, it does not document any direct or indirect harm that has occurred due to the AI's development, use, or malfunction. The privacy concerns and ethical issues are potential risks and societal implications rather than realized harms. Additionally, the mention of legal actions against related apps and the company's privacy policies serve as contextual information rather than evidence of an AI Incident or Hazard. Thus, the article fits the definition of Complementary Information, enhancing understanding of AI's impact and responses without reporting a new incident or hazard.
AI-generated portraits are taking over social media. Is that good or bad?

2022-12-16
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Lensa app using stable diffusion) and discusses its use and societal impacts. However, it does not report a specific AI Incident where harm has directly or indirectly occurred, nor does it describe a specific AI Hazard event where harm is plausible but not realized. Instead, it presents ongoing debates about potential harms (copyright issues, bias, and offensive content) and the company's responses to these concerns. This fits the definition of Complementary Information, as it provides supporting data and context about AI system impacts and responses without describing a new incident or hazard.
No, the Lensa AI app technically isn't stealing artists' work - but it will majorly shake up the art world

2022-12-14
The Conversation
Why's our monitor labelling this an incident or hazard?
The article focuses on the conceptual and legal discussion around AI-generated art and its impact on artists' intellectual property rights. It does not describe any specific event where the AI system caused harm or a plausible future harm event. The concerns raised are about potential disruption and challenges for artists, but no direct or indirect harm has been reported or demonstrated. Therefore, this is best classified as Complementary Information, providing context and analysis about AI's impact on the art ecosystem rather than reporting an AI Incident or AI Hazard.
Lensa AI app mixes up data, privacy and representation | TechTarget

2022-12-15
TechTarget
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Lensa app using Stable Diffusion) and discusses privacy risks and representation issues, which are relevant concerns. However, there is no indication that any direct or indirect harm has occurred due to the AI system's development, use, or malfunction. The concerns are more about potential privacy risks and societal implications rather than a specific AI Incident or Hazard. Therefore, this article is best classified as Complementary Information, as it provides context and discussion around AI use and its societal implications without reporting a concrete incident or hazard.
Lensa AI app causes a stir with sexy "Magic Avatar" images no one wanted

2022-12-13
Ars Technica
Why's our monitor labelling this an incident or hazard?
Lensa AI is an AI system using generative AI (Stable Diffusion) to create avatars. The sexualized outputs represent a harm to users, particularly women, by producing unwanted and potentially harmful images, which can be considered a violation of rights and harm to communities. The harm is realized as users have experienced and reported these outputs. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs and the underlying biased training data leading to sexualization.
This popular photo app can exploit your selfies however it wants -- how to stop it

2022-12-14
LaptopMag
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Lensa AI using Stable Diffusion) and discusses its use and data policies. However, it does not report any direct or indirect harm caused by the AI system's development, use, or malfunction. The concerns raised relate to potential misuse of user data and intellectual property rights, but no actual violation or harm has been documented in the article. The article also highlights contradictory statements in the app's policies and expert opinions on privacy risks, which are important for understanding the broader AI ecosystem and governance issues. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Lensa AI's Terms Allow It To Use Images 'Without Compensation'

2022-12-13
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Lensa AI) that processes user images with AI-generated avatars. The terms grant the company broad rights to use user content, which could lead to violations of intellectual property rights or privacy if misused. However, the article does not report any realized harm or incidents resulting from this practice, only warnings and potential risks. Therefore, this situation fits the definition of an AI Hazard, as the development and use of the AI system's terms could plausibly lead to harm, but no direct or indirect harm has been reported yet.
AI firms want to trust but not everyone's willing to put in the work to get it | Biometric Update

2022-12-14
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article centers on privacy and ethical concerns regarding the use of biometric data by an AI system but does not describe any realized harm or incident. There is no direct or indirect harm reported, nor a specific event indicating plausible future harm. The discussion is about the broader context of trust and data management practices, which fits the definition of Complementary Information as it provides context and societal concerns about AI use without reporting a concrete AI Incident or AI Hazard.
'It's just a vehicle for profit': Australian artists speak out against AI art

2022-12-15
Crikey
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Lensa and Stable Diffusion) that generate art by learning from a large dataset of images including copyrighted artworks without artists' consent. This use has directly led to harm: violation of intellectual property rights and harm to the artistic community's economic and moral interests. The article documents that artists have identified their work in AI outputs and feel violated, indicating realized harm. The AI's role is pivotal as it is the mechanism by which the unauthorized use and transformation of artworks occur. Hence, this is an AI Incident under the framework, specifically a violation of intellectual property rights (harm category c).
The Lensa AI App Is Not Technically Stealing Artists' Work, But It Will Significantly Change The Art World, nevertheless. - Tech Gadget Central

2022-12-15
Tech Gadget Central
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lensa app using Stable Diffusion) and discusses its use and impact on artists' intellectual property and the art community. However, there is no indication that any direct harm, violation of rights, or legal breach has occurred yet. The concerns are about potential future impacts and changes in the art world, which could plausibly lead to harm or legal challenges. Therefore, this is best classified as Complementary Information, as it provides context and discussion about AI's societal and legal implications without reporting a specific AI Incident or AI Hazard.
Is AI really stealing artwork? - Softonic

2022-12-15
Softonic
Why's our monitor labelling this an incident or hazard?
The AI system (Lensa) uses machine learning to generate images by appropriating styles and even remnants of original artists' signatures without consent, which is a breach of intellectual property rights. This has caused harm to artists by flooding platforms with AI-generated imitations, undermining their professional efforts and rights. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework's category of violations of intellectual property rights and harm to communities.