AI-Generated Virtual Models Spark Debate and Ethical Concerns on OnlyFans

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems such as Midjourney and Stable Diffusion are being used to create hyperrealistic virtual models for OnlyFans, raising concerns about potential consumer deception, the economic impact on human creators, and broader ethical issues. While no direct harm has been reported, the trend has sparked debate over transparency and the future of adult content platforms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems generating hyperrealistic images used commercially on adult content platforms. The AI's role is central to the event. However, the article does not report any realized harm such as fraud complaints, legal violations, or health impacts. Instead, it highlights a debate and potential for future harm due to possible user deception and lack of clear disclosure. Therefore, this situation fits the definition of an AI Hazard, as the AI-generated content could plausibly lead to harm (e.g., deception, consumer rights issues) if not regulated or disclosed properly. It is not Complementary Information because the main focus is not on responses or governance but on the emerging situation itself. It is not an AI Incident because no direct or indirect harm has been reported as having occurred yet.[AI generated]
AI principles
Transparency & explainability; Fairness; Accountability; Privacy & data governance; Respect of human rights; Human wellbeing

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Consumer services

Affected stakeholders
Consumers; Workers

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI hazard

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

OnlyFans: Models created by Artificial Intelligence dominate the platform

2023-04-13
MARCA
Why's our monitor labelling this an incident or hazard?
The presence of AI-generated models on OnlyFans is clearly an AI system use case involving generative AI creating virtual content and interactions. However, the article does not describe any actual harm or legal violations resulting from this use. It highlights potential future impacts on human content creators but does not document any incident or credible risk of harm that has materialized or is imminent. Thus, it fits the definition of Complementary Information, as it informs about AI's influence on the platform and possible societal changes without constituting an AI Incident or AI Hazard.

OnlyFans models created with Artificial Intelligence look like real women and earn fortunes

2023-04-11
Crónica
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated models on OnlyFans that are indistinguishable from real humans and are actively generating income and interacting with users. This involves AI systems generating content and influencing economic outcomes. The harm is indirect but real, as it affects the economic opportunities of real human models, which can be considered harm to communities and property (economic livelihood). Therefore, this qualifies as an AI Incident due to realized harm caused by the use of AI systems.

OnlyFans models created with artificial intelligence have already started earning money

2023-04-10
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Stable Diffusion-based generative models) creating content and interacting with users, which fits the definition of AI system involvement. However, the article does not report any direct or indirect harm resulting from this use, nor does it highlight a plausible risk of harm that would qualify as an AI Hazard. The content focuses on the emergence and market impact of AI-generated models, which is informative and contextual but does not describe realized or imminent harm. Thus, it is best classified as Complementary Information, providing context on AI's evolving role in digital content creation and its societal implications without reporting an incident or hazard.

Models created with artificial intelligence dominate OnlyFans

2023-04-12
Milenio.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating images that are sold and shared on OnlyFans, which is a direct use of AI. However, the article does not report any realized harm such as injury, rights violations, or disruption of critical infrastructure. The main issue is the competition between AI-generated and human models and the platform's adaptation to this new content type. Since no direct or indirect harm is described, and the platform's response is a mitigation measure, this event is best classified as Complementary Information, providing context and updates on AI's impact on content monetization and platform policies.

'Naughty' AI: OnlyFans accounts created with virtual models

2023-04-11
El Financiero
Why's our monitor labelling this an incident or hazard?
The article involves AI systems generating hyperrealistic images used commercially on adult content platforms. The AI's role is central to the event. However, the article does not report any realized harm such as fraud complaints, legal violations, or health impacts. Instead, it highlights a debate and potential for future harm due to possible user deception and lack of clear disclosure. Therefore, this situation fits the definition of an AI Hazard, as the AI-generated content could plausibly lead to harm (e.g., deception, consumer rights issues) if not regulated or disclosed properly. It is not Complementary Information because the main focus is not on responses or governance but on the emerging situation itself. It is not an AI Incident because no direct or indirect harm has been reported as having occurred yet.

Models created with Artificial Intelligence arrive on OnlyFans

2023-04-13
tiempodigital.mx
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic virtual models, which is an AI system use. However, the article only speculates about potential future risks to human content creators without describing any realized harm or incidents. Therefore, it represents a plausible future risk (hazard) rather than an incident. Since no harm has yet occurred, and the article focuses on the potential implications, this fits the definition of an AI Hazard.

These gorgeous OnlyFans models earn fortunes from their nudes, but...

2023-04-14
La Voz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated virtual models created with Stable Diffusion on OnlyFans, indicating AI system involvement. While the AI-generated content is realistic and generating income, no direct or indirect harm is reported. The concerns are about potential future implications and ethical debates, which fits the definition of an AI Hazard (plausible future harm). There is no indication of a realized AI Incident or a complementary information update. Hence, the classification is AI Hazard.

AI sparks debate over virtual models 'cashing in' on OnlyFans

2023-04-12
Medio Tiempo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating hyperrealistic images used commercially on OnlyFans, which can mislead consumers and potentially cause harm through deception. This fits the definition of an AI Hazard because the AI system's use could plausibly lead to harm such as consumer deception and economic harm to real content creators. There is no clear evidence of direct or indirect realized harm yet, nor legal violations confirmed, so it is not an AI Incident. The article focuses on the emerging trend and debate rather than on a concrete incident or a governance response, so it is not Complementary Information. Therefore, the classification is AI Hazard.