Over a Million London Jobs at Risk from AI Automation, Mayor Warns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A report commissioned by London Mayor Sadiq Khan warns that over a million Londoners are at high or significant risk of job disruption due to generative AI, with administrative roles most exposed. Nearly half of the city's workforce could see tasks automated, raising concerns about economic disruption and inequality.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems in the context of their potential to automate tasks and impact jobs, which is a plausible future harm scenario. However, it does not report any actual harm or incident caused by AI systems at this time. The focus is on risk assessment and policy discussion rather than a concrete event of harm or malfunction. Therefore, this qualifies as an AI Hazard, reflecting credible potential for harm to labor markets and workers if AI adoption proceeds without adequate safeguards.[AI generated]
AI principles
Human wellbeing

Industries
Business processes and support services

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

AI puts one fifth of London jobs at risk - City Hall report

2026-04-28
BBC
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of their potential to automate tasks and impact jobs, which is a plausible future harm scenario. However, it does not report any actual harm or incident caused by AI systems at this time. The focus is on risk assessment and policy discussion rather than a concrete event of harm or malfunction. Therefore, this qualifies as an AI Hazard, reflecting credible potential for harm to labor markets and workers if AI adoption proceeds without adequate safeguards.
Nearly half of London jobs at risk of AI disruption, report

2026-04-28
Euronews English
Why's our monitor labelling this an incident or hazard?
The article covers AI's potential and ongoing impact on employment, including some realized workforce reductions attributed to AI use. It does not, however, describe a specific AI system malfunction or a particular event in which AI use directly led to injury, rights violations, or other harms; instead, it analyses and forecasts AI's disruptive potential in the labor market and the societal responses to it. It is therefore best classified as Complementary Information, providing context and updates on AI's societal impact and governance responses rather than reporting a discrete AI Incident or Hazard.
AI places at least 'two million London jobs' at risk | LBC

2026-04-28
LBC
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential future impact of AI on jobs, describing a credible risk of significant labor market disruption due to AI automation. However, it does not report any actual harm or incident caused by AI systems yet. Therefore, it fits the definition of an AI Hazard, as it plausibly could lead to harm (job losses or labor market disruption) but no direct or indirect harm has occurred yet according to the article.
Women more likely to work in jobs impacted by AI, report warns - AOL

2026-04-28
AOL.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of their potential impact on employment and labor markets, a plausible future harm scenario, but it describes no direct or indirect harm that has already occurred from AI use or malfunction. Its focus is risk assessment, demographic exposure, and policy responses. Because it mainly provides data and warnings about potential future impacts and announces a governance response (the AI and Jobs Taskforce), it fits best as Complementary Information rather than an AI Hazard, which would require a more immediate or imminent risk scenario. No specific AI Incident is described.
AI Alert: Jobs of a million Londoners at risk from artificial intelligence, Sadiq Khan warns

2026-04-27
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically generative AI, and their potential to automate tasks and roles in London's workforce. The harms described (job losses, economic disruption, wealth inequality) are plausible future harms from AI's use in automating work. Because these harms are credible and significant but not yet realized, and the event centers on warnings and preparations for these risks, it fits the definition of an AI Hazard. No actual harm is indicated, so it is not an AI Incident; and since the article goes beyond general AI news or policy announcements, it is not Complementary Information or Unrelated.
Women more likely to work in jobs impacted by AI, report warns

2026-04-28
Yahoo
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential and anticipated effects of AI on jobs, particularly the risk of job content changes and possible job losses in the future. It does not describe any actual harm or incident caused by AI systems but rather highlights a credible risk and societal concern. Therefore, it fits the definition of an AI Hazard, as it plausibly leads to future harm related to employment disruption due to AI adoption.
Sadiq Khan warns one million Londoners could lose their jobs to AI

2026-04-27
Mail Online
Why's our monitor labelling this an incident or hazard?
The article discusses the plausible future harm AI could cause to the labor market in London, specifically job losses due to automation. It involves AI systems capable of automating tasks in various occupations. Since the harm is potential and not yet realized, and the article centers on warnings and preventive measures, this fits the definition of an AI Hazard rather than an Incident. The formation of a taskforce is a governance response but does not change the primary classification.
Almost half of London jobs could see tasks automated by AI

2026-04-28
Engineering and Technology Magazine
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (generative AI) and their potential use in automating job tasks, which could plausibly lead to economic disruption and labor market harms in the future. However, no actual harm or incident has occurred yet; the report and taskforce represent anticipatory analysis and response. Therefore, this qualifies as an AI Hazard due to the plausible future harm from AI-driven job automation and economic disruption, rather than an AI Incident or Complementary Information.
Anthropic Study Reveals Which Jobs Are Most Exposed to Real-World AI Risks

2026-04-30
Investopedia
Why's our monitor labelling this an incident or hazard?
The article describes no direct or indirect harm caused by AI systems, and reports no event in which AI malfunction or misuse led to injury, rights violations, or other harms. It focuses on empirical research findings about AI's current and potential impact on jobs. With no AI incident or hazard occurring or imminent, the article fits the definition of Complementary Information: it provides important context and data about AI's role in the labor market without reporting a specific incident or hazard.
AI Set to Reshape Up to 55% of Jobs in the US, BCG Report Finds

2026-05-01
Morocco World News
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems but rather presents a forecast and analysis of potential future changes and risks related to AI adoption in the labor market. This aligns with the definition of an AI Hazard, as it plausibly leads to significant societal and economic impacts (harm to communities through job displacement and skill shifts) but does not report an actual incident or harm that has already occurred. Therefore, the event is best classified as an AI Hazard.