Vogue Faces Backlash Over Use of AI-Generated Models in August 2025 Issue


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Vogue's August 2025 issue featured AI-generated models, prompting widespread criticism, subscription cancellations, and protests from industry professionals. The use of AI models has caused economic harm to human creatives and sparked debate about authenticity, artistry, and the future of employment in fashion.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI-generated models were used in place of human models, leading to subscription cancellations and criticism. This indicates realized economic and social harm to individuals and communities. The AI system's use in generating these images is central to the harm described. It therefore qualifies as an AI Incident: the system's use in the fashion industry caused direct harm, affecting employment and community trust.[AI generated]
AI principles
Fairness, Human wellbeing, Transparency & explainability, Accountability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers, Business

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Vogue has started using AI models: what does it mean for beauty standards?

2025-07-25
Metro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI-generated models) used in advertising, which could plausibly lead to harm related to unrealistic beauty standards and societal impacts on body image and employment in modeling. However, the article does not describe any actual harm or incident caused by the AI system; it mainly discusses concerns and potential consequences. Therefore, this qualifies as an AI Hazard, as the use of AI models in fashion advertising could plausibly lead to harms such as psychological harm to individuals or disruption of the modeling industry, but no direct harm has been reported yet.

Did You Notice Anything Strange in the Upcoming Vogue Issue?

2025-07-24
Townhall
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI systems to create digital models for fashion publishing and marketing, which involves AI-generated content. While it raises questions about the future role of human models and societal impacts related to unrealistic expectations, it does not report any actual harm such as injury, rights violations, or disruption. The concerns are speculative and relate to broader cultural effects rather than specific incidents or hazards. Therefore, this is best classified as Complementary Information providing context on AI adoption and societal implications in fashion.

"Cheap, chintzy, lazy": Readers are canceling their Vogue subscriptions after AI-generated models appear in August issue

2025-07-25
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated models were used in place of human models, leading to subscription cancellations and criticism. This indicates realized economic and social harm to individuals and communities. The AI system's use in generating these images is central to the harm described. It therefore qualifies as an AI Incident: the system's use in the fashion industry caused direct harm, affecting employment and community trust.

Vogue AI Models Trigger Industry Fury and Subscription Exodus

2025-07-25
Bangla news
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated models replaced human talent, leading to subscriber exodus and protests from industry professionals whose livelihoods are threatened. This constitutes harm to people (job losses and economic harm) and harm to communities (mental health impacts and cultural harm). The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The harm is realized, not merely potential, and involves direct consequences from the AI system's deployment in a high-profile publication.

What Guess's AI model in Vogue means for beauty standards

2025-07-27
BBC
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a model image for advertising, which satisfies the AI-system involvement criterion. The controversy and questions raised relate to societal and cultural impacts on beauty standards and diversity, but the article does not report any realized harm such as discrimination, violation of rights, or harm to health, nor does it identify a credible risk of harm that could plausibly lead to an AI Incident. Therefore, this is best classified as Complementary Information, as it provides context and discussion of AI's societal implications without describing an AI Incident or Hazard.

Her features are flawless. But this blonde, blue-eyed Vogue model isn't real

2025-07-29
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system to generate model images, fulfilling the AI system involvement criterion. The concerns raised relate to potential harms such as reinforcing harmful beauty standards and psychological impacts, which could plausibly lead to harm, but no actual harm or incident is reported. The event focuses on societal reactions, expert opinions, and the implications of AI use in fashion imagery, which aligns with the definition of Complementary Information. There is no direct or indirect evidence of harm having occurred, nor a specific AI Hazard event described. Thus, the classification as Complementary Information is appropriate.

The Vogue AI model backlash isn't dying down anytime soon

2025-07-28
Fast Company
Why's our monitor labelling this an incident or hazard?
The article mentions AI-generated models used in advertising, which involves AI systems generating content. However, no harm or violation is reported or implied beyond public dissatisfaction. There is no evidence of injury, rights violations, or other harms as defined. The event is primarily about societal reaction and controversy, not about harm caused by AI. Hence, it is best classified as Complementary Information, providing context on societal responses to AI use in media.

AI model in Guess ad sparks backlash after appearing in Vogue for first time

2025-07-28
Malay Mail
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate the model image, which is central to the controversy. The harm involves indirect psychological and societal effects, including undermining diversity and promoting unrealistic beauty standards, which can harm communities and individuals' mental health. These harms have already manifested in public backlash and expert concern, meeting the criteria for an AI Incident. The event is not merely a product launch or general AI news, but a specific use of AI that has led to clearly articulated harm.

AI model in Vogue magazine raises concerns

2025-07-29
Democratic Underground
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a synthetic model for advertising, which satisfies the AI-system involvement criterion. The concerns raised relate to potential harm to mental health and societal harm from unrealistic beauty standards, which falls under harm to communities or groups. Since no actual harm or incident is reported but plausible future harm is discussed, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses on potential implications and societal concerns rather than a realized harmful event.

Fashion world divided as Vogue debuts AI-generated model

2025-07-28
Daily Express Sabah
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic model image for commercial advertising, which has directly led to concerns about harm to individuals' mental health and societal harm regarding diversity and inclusivity. The harms described include damage to self-esteem and increased risk of eating disorders, which are injuries to health and harm to communities. The AI system's role is pivotal as it generated the model image causing these concerns. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Vogue sparks fury with AI model ad that enforces 'impossible beauty standards'

2025-07-30
Mirror
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate photorealistic models for a commercial advertisement. The public reaction indicates concern about harm to mental health and societal harm due to unrealistic beauty standards, which can be considered harm to communities. Although the harm is indirect and non-physical, it is clearly articulated and linked to the AI system's use. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by the AI system's use.

Diversity in fashion vastly improved in the last 15 years. AI models will set it back

2025-07-29
Fast Company
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to generate synthetic fashion models, which is an AI system involvement. The concern is about the potential negative societal impact on diversity and beauty standards, which is a form of harm to communities. Since no actual harm or incident is reported, but the plausible future harm is credible and consistent with AI capabilities, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI involvement and potential harm are central to the narrative.

AI-Generated Models Now Appear in 'Vogue' Magazine

2025-07-29
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article involves AI systems generating images of models, which qualifies as AI system involvement. However, the event does not describe any realized harm or a credible risk of harm directly caused by the AI system's use. The controversy and public backlash are societal reactions rather than harms caused by the AI system itself. There is no indication of injury, rights violations, or other harms as defined. Therefore, this is best classified as Complementary Information, providing context and societal response to AI's impact in fashion and media.

Vogue's AI Model Sparks Outrage: How AI-Generated Models Are Distorting Women's Beauty Standards

2025-07-30
https://www.boldsky.com/
Why's our monitor labelling this an incident or hazard?
The AI-generated model is an AI system used in a real-world application (fashion advertising). Its use has directly led to societal and psychological harms, including distortion of beauty standards, emotional distress, and potential job displacement, which fall under harm to communities and violations of labor rights. These harms are materialized and reported as occurring, not merely potential. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly and indirectly caused significant harm.

Vogue's AI model ad sparks backlash

2025-07-30
Euro Weekly News Spain
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (AI-generated models) used in a commercial context, which raises concerns about societal and labor impacts. However, no actual harm or violation has been reported as having occurred yet; the backlash is about potential negative consequences and ethical considerations. Therefore, this event is best classified as Complementary Information because it provides context and societal response to the use of AI in fashion advertising, without describing a specific AI Incident or AI Hazard.

Scandal in the fashion world: Vogue launches a campaign featuring an AI model

2025-07-29
Evenimentul Zilei
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the virtual model used in the campaign. The event stems from the use of AI in advertising. While there is no realized harm reported, the article discusses credible concerns about the future impact on human models' livelihoods and diversity representation, which could plausibly lead to harm in terms of labor rights and community harm. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as the harm is potential and not yet realized.

The beauty who doesn't exist: Vogue's AI model sparks revolt in the fashion industry

2025-07-28
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system used to generate a model image for commercial advertising. The AI's use has directly led to harms including labor rights violations (displacement of real models), harm to communities (reinforcing toxic beauty standards), and harm to mental health. The controversy and criticism confirm that these harms are occurring, not just potential. The AI system's role is pivotal as it enables the creation and use of these artificial models, which are replacing real people and influencing societal perceptions. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

The AI model published in the pages of Vogue, created by a Romanian woman's company, raises questions about beauty standards

2025-07-28
Ziare.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate models for a fashion advertisement, which has been published and is actively influencing societal standards and perceptions. This has caused controversy and concerns about harm to mental health and diversity in the modeling industry, which are forms of harm to communities and individuals' health. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm as described in the article.

Will models face a wave of unemployment? Guess's use of a sultry AI female model sparks controversy

2025-08-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article involves the use of an AI system to generate images of models, which is an AI system use case. The concerns raised about job losses and psychological impacts are potential harms that could plausibly arise from this use of AI, but no direct or indirect harm has been reported as having occurred. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as unemployment and psychological harm, but these harms are not yet realized or confirmed.

Vogue runs its first AI model ad, sparking controversy and fears it will take jobs from real models

2025-08-01
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate model images for advertising, which is a clear AI system involvement. The controversy and public concern relate to the potential future harm to human models' employment opportunities, which is a plausible future harm stemming from the AI system's use. Since no actual harm has been reported as having occurred, but there is credible concern about possible future harm, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential impact and controversy caused by the AI-generated models, not just an update or response to a prior incident.

AI supermodel makes the headlines of fashion magazine Vogue

2025-08-04
煎蛋
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating virtual human models used in high-profile media and advertising. The use of AI-generated models has directly caused harm by infringing on models' rights to control their image and potentially causing job losses, which are violations of labor and intellectual property rights. The protests and petitions by model associations highlight the real and ongoing harm. The AI system's development and use have directly led to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Vogue runs its first AI model ad, sparking controversy: industry innovation or a threat to jobs?

2025-08-01
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated models (an AI system) in Vogue magazine advertisements, which is a clear example of AI system use. However, the harms discussed—such as job displacement fears, aesthetic and authenticity concerns—are potential or societal-level impacts rather than direct or realized harms caused by the AI system at this time. There is no report of actual injury, rights violations, or other harms directly caused by the AI-generated images. Instead, the article focuses on the controversy, public reaction, and calls for regulation and ethical considerations. This aligns with the definition of Complementary Information, which covers societal and governance responses, debates, and contextual updates related to AI developments and their impacts, without describing a concrete AI Incident or Hazard.

Vogue runs its first AI model ad, sparking controversy: industry innovation or a threat to jobs?

2025-08-01
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated models were used in Vogue advertisements, marking the first such instance. This use of AI has caused public backlash, loss of trust, and fears of job losses among human models and associated workers, which constitute realized harms. The AI system's involvement in generating these images is central to the controversy and the harms described. Hence, this qualifies as an AI Incident because the AI system's use has directly led to significant harms including labor rights concerns and social impacts on communities.

The controversy over Vogue's AI-generated ad goes beyond the fashion world

2025-08-03
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating virtual models used in advertising, which has directly led to harm including economic displacement of human models, ethical concerns about consent and rights, and cultural appropriation issues. These harms fall under violations of labor rights and harm to communities. The controversy and ongoing impact on the fashion industry and models' livelihoods confirm that the AI system's use has caused realized harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Vogue's AI ad causes an uproar that is about more than fashion

2025-08-04
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI creating digital models) and discusses their use in advertising. However, it does not describe any actual harm or violation caused by these AI systems. The concerns raised are about potential impacts on employment, diversity, and authenticity, but these are presented as debates and reactions rather than documented incidents or credible imminent risks. The main focus is on societal and industry responses to AI adoption in fashion, fitting the definition of Complementary Information rather than an Incident or Hazard.

AI model on the cover of Vogue: Outrage at Guess and the magazine

2025-08-02
Pagenews.gr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (AI-generated model images) used in a commercial context. The public reaction is significant, but the event does not describe any direct or indirect harm caused by the AI system's development, use, or malfunction. The controversy is about cultural and artistic concerns and consumer dissatisfaction, which do not meet the harm criteria defined for AI Incidents or plausible future harm for AI Hazards. The article mainly provides context and societal response to AI use in fashion and media, fitting the definition of Complementary Information.

Models that don't exist: How artificial intelligence is changing the modeling landscape

2025-08-01
TVXS - TV Χωρίς Σύνορα
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate digital fashion models, fulfilling the AI System criterion. However, no actual harm (physical, legal, or social) is documented as having occurred; the harms discussed are potential or societal concerns rather than realized incidents. The public backlash and ethical debates represent governance and societal responses to AI use in fashion, fitting the definition of Complementary Information. There is no indication of an AI Incident (harm realized) or AI Hazard (plausible future harm) as the article focuses on reactions and implications rather than a specific harmful event or credible risk event. Thus, Complementary Information is the appropriate classification.

Models made with artificial intelligence: Progress, or a problem for the industry? Guess ignites a storm over artificial beauty

2025-08-01
TheCaller.Gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating digital models used in a high-profile advertising campaign. The AI's use led to public deception and backlash, which constitutes harm to communities and potential violation of labor rights, as activists warn about job losses and marginalization. The lack of disclosure about AI-generated images further exacerbates the harm by misleading consumers. These factors meet the criteria for an AI Incident, as the AI system's use directly led to social and labor-related harms. The event is not merely a potential risk or complementary information but a realized incident with significant societal impact.