
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Research from SignalFire and LinkedIn indicates that AI coding tools such as Google's Jules, ChatGPT, and Anthropic's models have automated tasks once performed by entry-level developers. Consequently, major tech companies, including Apple, Amazon, Google, Meta, Nvidia, Microsoft and Tesla, have halved graduate and junior hiring since 2022.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the use of AI systems (coding tools such as Google's Jules, ChatGPT, and Anthropic's models) to automate coding tasks previously performed by entry-level developers. This use of AI has directly contributed to a reduction in entry-level tech job opportunities, harming individuals' employment prospects, a form of economic and social harm. Because the harm has been realised and is directly linked to the use of AI, the event qualifies as an AI Incident under the framework, specifically as harm to people (employment harm).[AI generated]