AI-driven automation slashes entry-level tech jobs by over 50%

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Research from SignalFire and LinkedIn reveals that AI coding tools like Google’s Jules, ChatGPT and Anthropic’s models have automated tasks once done by entry-level developers. Consequently, major tech companies – including Apple, Amazon, Google, Meta, Nvidia, Microsoft and Tesla – have halved graduate and junior hiring since 2022.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (AI coding tools like Google's Jules, ChatGPT, and Anthropic's models) that automate coding tasks, which were previously performed by entry-level human coders. This use of AI has directly led to a reduction in entry-level tech job opportunities, constituting harm to individuals' employment prospects, a form of economic and social harm. Since the harm is realized and directly linked to AI use, this qualifies as an AI Incident under the framework, specifically harm to people (employment harm).[AI generated]
AI principles
Human wellbeing, Accountability, Fairness

Industries
IT infrastructure and hosting

Affected stakeholders
Workers

Harm types
Economic/Property, Psychological

Severity
AI incident

Business function
Research and development, Monitoring and quality control

AI system task
Content generation, Interaction support/chatbots, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Why landing your first tech job is way harder than you expected | TechCrunch

2025-05-27
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article mentions AI's role in reducing entry-level hiring but does not describe a particular AI system malfunctioning or causing harm. The discussion is about economic and employment trends influenced by AI, which is a broader societal impact but not a direct AI Incident or Hazard. It is an informative piece providing context on AI's influence on the job market, thus fitting the category of Complementary Information.

Learn to code, they said: AI is already erasing some entry-level coding jobs

2025-05-28
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI coding tools like Google's Jules, ChatGPT, and Anthropic's models) that automate coding tasks, which were previously performed by entry-level human coders. This use of AI has directly led to a reduction in entry-level tech job opportunities, constituting harm to individuals' employment prospects, a form of economic and social harm. Since the harm is realized and directly linked to AI use, this qualifies as an AI Incident under the framework, specifically harm to people (employment harm).

Alarming trend as AI eats into jobs: Tech companies' hiring of new grads has plummeted over 50% since 2019

2025-05-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The article describes a broad economic and labor market trend influenced by AI adoption, specifically the reduction of entry-level hiring in tech due to AI automation and changing skill demands. However, it does not report a particular AI system malfunction, misuse, or event causing direct or indirect harm to individuals or communities. Nor does it describe a specific plausible future harm event. Instead, it offers an analysis of AI's impact on employment patterns, which fits the definition of Complementary Information as it enhances understanding of AI's societal implications without detailing a discrete incident or hazard.

AI may already be shrinking entry-level jobs in tech, new research suggests

2025-05-27
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems automating routine entry-level tasks, leading to fewer hires of new graduates in tech and finance. This is a direct use of AI impacting employment opportunities, which can be considered a form of harm to labor rights or economic opportunity. However, the harm is described as emerging and inferred from hiring trends rather than a specific incident causing direct injury or violation. Since the impact is ongoing and plausible but not a discrete incident with direct harm reported, this fits best as an AI Hazard reflecting plausible future or ongoing harm to employment due to AI automation.

AI is coming for your first job: Hiring of college grads by Big Tech drops 50% since 2022

2025-05-26
The Indian Express
Why's our monitor labelling this an incident or hazard?
The report explicitly links the collapse in entry-level hiring to AI advancements that automate routine tasks, leading companies to prioritize roles requiring higher technical output and reducing opportunities for new graduates. This constitutes indirect harm to a group of people (young workers) through economic displacement, fitting the definition of an AI Incident under harm to groups of people. The AI system's role is pivotal as it changes hiring practices and job availability. Although the harm is economic and social rather than physical, it aligns with the framework's inclusion of harm to groups of people. Hence, the event is classified as an AI Incident.

Is AI Replacing Freshers? Study Reveals 50% Drop In Tech Company Hiring

2025-05-28
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly links the decline in hiring of new graduates to the impact of generative AI tools that can perform routine tasks traditionally done by entry-level employees. This constitutes a harm related to labor rights and employment opportunities, as AI is contributing to job displacement. Although the harm is economic and social rather than physical, it fits within the framework's category of violations of labor rights or significant harms to communities. Therefore, this event qualifies as an AI Incident because the use of AI systems has directly or indirectly led to harm in the form of reduced employment opportunities for new graduates.

AI may already be shrinking entry-level jobs in tech, new research suggests

2025-05-27
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems automating routine entry-level tasks, leading to a reduction in hiring of recent graduates in tech. This constitutes indirect harm to labor rights and economic opportunities for a group of people, fulfilling the criteria for an AI Incident under violations of labor rights. The harm is realized (not just potential), as hiring data shows fewer entry-level jobs being offered due to AI automation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

AI Replaces Entry-Level Tech Roles: Fresher Hiring Drops By Over 50 Pct

2025-05-28
TimesNow
Why's our monitor labelling this an incident or hazard?
The article explicitly links the drop in entry-level hiring to AI replacing roles, indicating AI's use in automating tasks that previously required human labor. This is a clear example of AI's use leading to harm in the form of job displacement and reduced employment opportunities for fresh graduates. Although the harm is economic and social rather than physical, it fits within the framework's scope of harm to communities or significant articulated harms where AI's role is pivotal. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI use.

Is AI replacing entry-level tech jobs? Here's what reports suggest

2025-05-28
Digit
Why's our monitor labelling this an incident or hazard?
The article describes AI's role in replacing certain job functions and reducing entry-level hiring, which is an economic and social impact but does not constitute a direct or indirect harm such as injury, rights violations, or property/community/environmental harm. There is no specific incident of harm caused by AI malfunction or misuse. The content is more about the evolving labor market and AI's influence on employment patterns, which is informative and contextual rather than reporting a discrete AI Incident or AI Hazard. Therefore, it fits best as Complementary Information, providing context on AI's societal impact and labor market changes without reporting a new harm or credible future harm event.

Study: AI Is Already Shrinking Entry-Level Tech Jobs

2025-05-28
Tech.co
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in automating tasks that affect employment, specifically entry-level tech jobs. The research shows a realized impact on employment trends, which constitutes harm to individuals' economic rights and livelihoods, a form of harm to communities. Since the AI's role in reducing entry-level hiring is evidenced and ongoing, this qualifies as an AI Incident due to indirect harm caused by AI-driven automation in the labor market.

Amid AI Boom, Big Tech Slashed College Graduate Hiring By 50% Since 2022, Shows Research

2025-05-27
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The article reports on research showing decreased hiring of recent graduates in big tech companies, linked to the rise of AI. While AI's influence on employment is noted, there is no direct or indirect harm to individuals, communities, or rights described, nor is there a plausible risk of harm from AI systems malfunctioning or being misused. The event is about societal and economic impact rather than an AI incident or hazard. Hence, it fits the definition of Complementary Information, as it provides context on AI's broader effects without reporting a specific AI-related harm or risk.

AI replacing human jobs? Report reveals fresher hiring has dropped by 50% in tech companies

2025-05-28
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly links the reduction in fresher hiring and layoffs at companies like IBM to the use of AI systems that perform tasks formerly done by humans, including coding and HR functions. This constitutes harm to labor rights and economic opportunities for individuals, fulfilling the criteria for an AI Incident. The AI systems' use has directly led to these harms, not just a potential future risk, so it is not merely a hazard. The article does not focus on responses or broader ecosystem context, so it is not complementary information. Hence, the classification is AI Incident.