Adobe Employees Warn AI Tools Threaten Designer Jobs


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Adobe's integration of generative AI tools like Firefly and Photoshop's text-to-image features has sparked internal concern, with employees warning these advancements could lead to significant job losses among graphic designers. While no direct harm has occurred yet, the risk of economic disruption and industry upheaval is widely debated.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (Adobe Firefly) used for image generation and modification. Employees express concern that the AI's integration could lead to job reductions among graphic designers, which constitutes a plausible future harm to workers and the design community. However, there is no indication that such harm has already occurred. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but has not yet directly or indirectly caused it.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Transparency & explainability

Industries
Arts, entertainment, and recreation; Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property, Psychological

Severity
AI hazard

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard


Employees Oppose Adobe AI Tools, Saying They Will Drive Out Designers

2023-07-27
中关村在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Adobe Firefly) used for image generation and modification. Employees express concern that the AI's integration could lead to job reductions among graphic designers, which constitutes a plausible future harm to workers and the design community. However, there is no indication that such harm has already occurred. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but has not yet directly or indirectly caused it.

Adobe AI Products Raise Internal Employee Fears of Replacing Human Jobs

2023-07-27
中关村在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Adobe Firefly) integrated into Adobe's products, affecting employees' work and raising fears of job displacement. However, no direct or indirect harm, such as layoffs, rights violations, or other damages, has been reported as having occurred. The concerns are anticipatory and reflect employee sentiment rather than documented incidents. The article also discusses company and market responses, making it a broader societal and governance context piece rather than a report of an AI Incident or Hazard. Hence, it fits the definition of Complementary Information.

Adobe Goes All In on Artificial Intelligence! Employees: We're Digging Our Own Grave

2023-07-26
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Adobe Firefly) integrated into Adobe's products that can generate images from text prompts. The concerns raised by employees about job losses for graphic designers due to AI automation indicate a plausible future harm (economic harm and labor rights impact). Since no actual job losses or harm have been reported yet, but the risk is credible and directly linked to AI use, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or governance but on the potential negative impact of AI integration on employment.

Photoshop's New "Generative Expand" Feature: Conversational Interaction and One-Click Expansion Let Imagination "Extend Without Limits"

2023-07-28
华尔街见闻
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Adobe's Generative Expand) and its use, but there is no indication that its development or use has directly or indirectly caused harm or violation of rights. The concerns about job displacement are speculative and not an incident or hazard by themselves. The article focuses on describing the AI feature, its potential, and industry reactions, which fits the definition of Complementary Information rather than an Incident or Hazard.

Adobe Goes All In on Generative AI but Worries It May Be Digging Its Own Grave - Adobe - cnBeta.COM

2023-07-25
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article centers on the potential future harms of AI integration in Adobe's products, particularly the plausible risk of job losses and disruption in the graphic design industry due to generative AI capabilities. There is no report of actual harm or incidents caused by AI systems at this time, only concerns and debates about possible consequences. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm (employment reduction, ethical issues) but no direct or indirect harm has yet materialized. It is not Complementary Information because the main focus is not on responses or updates to a past incident, nor is it unrelated as it clearly involves AI systems and their societal impact.

Photoshop's New AI Generative Feature Can Help You Fill In Cropped Images - Adobe - cnBeta.COM

2023-07-27
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI integrated into Photoshop) used for image generation and editing. However, there is no indication of any harm, violation of rights, or malfunction caused by this AI system. The article mainly discusses the feature's functionality, content moderation measures, and rollout plans, which provide context and updates about AI development and deployment. Therefore, this is Complementary Information rather than an AI Incident or AI Hazard.

channelnews : Adobe Staff Fear its Firefly AI will Cause Massive Job Losses

2023-08-01
ChannelNews
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Adobe Firefly generative AI) whose deployment has already indirectly led to job losses in a billboard and advertising business, which constitutes harm to employment and livelihoods. The fears expressed by Adobe staff about further job losses and business impact reflect realized harm and potential ongoing harm. Since the AI system's use has directly contributed to employment harm, this qualifies as an AI Incident under the framework, as it involves harm to people (job loss) caused by the AI system's use.

Adobe staffers fear its AI tools could lead to job losses: report

2023-08-01
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Adobe's generative AI system Firefly and its rapid adoption, which involves AI system use. The concerns raised by employees about potential job losses represent a plausible future harm linked to the AI system's deployment. However, there is no evidence or report of actual job losses caused by the AI tool at this time. The fears and predictions about displacement are credible and align with recognized risks of AI automation in knowledge-based industries. Since no direct or indirect harm has yet materialized, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the potential harm, not on responses or ecosystem updates, and it is not Unrelated because AI involvement and plausible harm are central to the report.

Adobe Staff Worry Their AI Could Kill the Jobs of Their Own Customers

2023-07-31
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the AI system (Firefly) is being used by companies to reduce graphic design staff, indicating direct harm to employment and labor rights. The harm is realized, not just potential, as one business has already downsized its team due to the AI's effectiveness. This fits the definition of an AI Incident involving harm to groups of people through labor rights violations. The concerns about future impacts on Adobe's own business do not negate the existing harm caused to customers' employees.