Microsoft's AI-Driven Automation Leads to 15,000 Job Cuts

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Microsoft reported saving over $500 million through AI tools like Copilot, particularly in call centers, and enforced internal AI adoption. These automation gains directly contributed to the layoff of 15,000 employees in 2025, raising concerns about the human cost of large-scale AI-driven workforce reductions.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (e.g., AI-generated code, AI tools like GitHub Copilot) in Microsoft's operations that have directly contributed to cost savings and increased productivity. These AI-driven efficiencies are linked to the company's decision to reduce its workforce by thousands of jobs. The layoffs represent a harm to labor rights and employment, which falls under violations of labor rights as defined in the framework. Therefore, this qualifies as an AI Incident because the AI system's use has indirectly led to significant harm (job losses).[AI generated]
AI principles
Accountability, Human wellbeing, Respect of human rights

Industries
Business processes and support services, Consumer services

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots


Articles about this incident or hazard

Layoffs and AI investments: Microsoft has saved around 500 million US dollars in call centers with AI

2025-07-10
ComputerBase
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI tools in call centers, AI-generated code) and their use in business. However, it does not describe any direct or indirect harm caused by AI systems, nor does it describe a plausible future harm scenario. The layoffs are related to AI adoption but are not described as harms caused by AI malfunction or misuse. The mention of Klarna's AI customer service issues is a contextual example but does not report a specific AI Incident. Therefore, the event is best classified as Complementary Information, providing context on AI deployment and its organizational impact without constituting an AI Incident or Hazard.
Microsoft saves over 500 million US dollars through AI - job cuts continue

2025-07-09
de.marketscreener.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems used by Microsoft to improve productivity and reduce costs, which led to workforce reductions. However, the layoffs are a business decision influenced by AI efficiency gains, not a direct or indirect harm caused by AI malfunction or misuse. There is no mention of injury, rights violations, or other harms caused by AI. The article focuses on the economic impact and strategic use of AI, which is informative but does not describe an incident or hazard. Thus, it is best classified as Complementary Information, providing context on AI's role in corporate restructuring and investment.
Microsoft stock amid the AI revolution: millions saved and jobs cut

2025-07-10
finanzen.at
Why's our monitor labelling this an incident or hazard?
The article focuses on the economic and business impacts of AI adoption at Microsoft, including cost savings and workforce reductions. While AI systems are clearly involved and their use leads to job cuts, this is a strategic corporate decision rather than an AI Incident as defined by direct or indirect harm caused by AI malfunction or misuse. There is no indication of injury, rights violations, or other harms directly linked to AI system failures or misuse. The mention of potential risks with Copilot relates to user experience and market expectations, not to realized harm. Therefore, the event is best classified as Complementary Information, providing context on AI's impact on business and workforce but not describing an AI Incident or Hazard.
Microsoft touts AI as a cost-saving measure while jobs are being cut.

2025-07-09
Quartz auf Deutsch
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., AI-generated code, AI tools like GitHub Copilot) in Microsoft's operations that have directly contributed to cost savings and increased productivity. These AI-driven efficiencies are linked to the company's decision to reduce its workforce by thousands of jobs. The layoffs represent a harm to labor rights and employment, which falls under violations of labor rights as defined in the framework. Therefore, this qualifies as an AI Incident because the AI system's use has indirectly led to significant harm (job losses).
Microsoft saved half a billion dollars with AI in call centers

2025-07-11
WinFuture.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in call centers and other business areas to replace human labor, resulting in large-scale layoffs. The AI system's use has directly led to harm (job losses), which affects workers' livelihoods and communities. This fits the definition of an AI Incident because the AI system's use has directly caused harm to groups of people (economic and social harm). Although the article also discusses investments and productivity gains, the primary harm is the job losses linked to AI deployment. Therefore, this event qualifies as an AI Incident.
Microsoft racks up over $500 million in AI savings while slashing jobs, Bloomberg News reports

2025-07-09
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Microsoft to enhance productivity and reduce operational costs, which indirectly leads to job reductions. However, job cuts as a result of AI-driven efficiency improvements do not meet the criteria for AI Incident (no direct or indirect harm such as injury, rights violations, or community harm) or AI Hazard (no plausible future harm from AI use). The article is best classified as Complementary Information because it provides context on AI's economic impact and corporate responses without reporting a specific AI-related harm or risk.
Microsoft uses AI to save $500 million, lays off workers

2025-07-10
ETCIO.com
Why's our monitor labelling this an incident or hazard?
The article describes Microsoft's use of AI tools to automate and enhance operations, resulting in substantial cost savings but also leading to layoffs affecting a large number of employees. The AI system's use is directly linked to harm in the form of job losses, which falls under harm to groups of people and labor rights violations. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI deployment in the workplace.
Microsoft layoffs leave 9,000 jobless - then Satya Nadella-led company boasts about saving $500 million

2025-07-10
Economic Times
Why's our monitor labelling this an incident or hazard?
The layoffs are directly linked to the deployment and use of AI systems that have replaced or reduced the need for human labor, causing harm to workers through job loss. This constitutes a violation of labor rights and harm to people, fitting the definition of an AI Incident. The AI system's use in cost-saving measures leading to layoffs is a clear example of AI's role in causing harm through its use, not just potential future harm or complementary information.
Microsoft hails AI as a way to save money while cutting jobs.

2025-07-09
Quartz en Français
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., GitHub Copilot, AI tools in call centers) being used to automate work and generate cost savings, which has directly resulted in Microsoft laying off up to 9,000 employees. This constitutes harm to people (loss of employment), which fits the definition of an AI Incident where the use of AI systems has directly led to harm. The layoffs are a direct consequence of AI-driven productivity gains and cost reductions, making this an AI Incident rather than a hazard or complementary information.
Microsoft saves $500 million with AI, but with 15,000 job cuts

2025-07-10
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems improving productivity and leading to cost savings, which in turn have coincided with large-scale job cuts. However, the job cuts themselves are a business decision influenced by AI adoption rather than an AI system malfunction or misuse causing harm. There is no evidence of injury, rights violations, or other harms directly caused by AI. The event describes the broader societal and economic impact of AI adoption, which fits the definition of Complementary Information as it informs about AI's role in workforce changes and corporate responses without reporting a specific AI Incident or Hazard.
Microsoft saves over $500 million in call centers using AI

2025-07-09
Investing.com
Why's our monitor labelling this an incident or hazard?
The article discusses Microsoft's deployment of AI to improve productivity and reduce costs, including workforce reductions. While layoffs can be considered a social harm, the article does not explicitly link the AI system's use to violations of rights or other harms as defined in the framework. The event is a report on AI's business impact without detailing harms or risks that meet the criteria for an AI Incident or AI Hazard. Therefore, it is best classified as Complementary Information, providing context on AI's economic effects and corporate strategy.
Microsoft racks up over $500 million in AI savings while slashing jobs, Bloomberg News reports

2025-07-09
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to improve productivity and reduce costs, which is explicitly mentioned. However, there is no evidence or claim that the AI systems caused any harm or that there is a plausible risk of harm resulting from their use. The layoffs, while related to AI adoption, are a corporate decision and do not themselves constitute harm caused by AI. Therefore, the article does not describe an AI Incident or AI Hazard. Instead, it provides contextual information about AI's economic impact and corporate responses, fitting the definition of Complementary Information.
Microsoft Touts $500 Million in AI Savings While Slashing Jobs

2025-07-09
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
While AI is clearly involved and is transforming work processes, the article does not describe any realized harm or a plausible immediate risk of harm directly caused by AI systems. The layoffs are a consequence of automation and efficiency gains but are not presented as an AI Incident (harm caused by AI) or AI Hazard (plausible future harm). The article mainly provides information about AI's impact on productivity and employment, which fits the definition of Complementary Information as it informs about societal and economic responses to AI adoption without describing a specific incident or hazard.
Microsoft racks up over $500 million in AI savings while slashing jobs, Bloomberg News reports

2025-07-09
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Microsoft uses AI tools in call centers, sales, customer service, and software engineering, including AI-generated code. The use of AI has directly led to job cuts, which constitutes harm to workers' employment—a labor rights issue. Therefore, this qualifies as an AI Incident due to the realized harm (job losses) caused by AI deployment.
Days after laying off thousands of employees, Microsoft says AI helped save over $500 million

2025-07-10
MoneyControl
Why's our monitor labelling this an incident or hazard?
Microsoft's AI tools have directly contributed to cost savings that coincide with large-scale layoffs, indicating AI's role in workforce reduction. The layoffs represent harm to groups of people (loss of employment), which is a recognized harm under the AI Incident definition. Although the article does not explicitly say AI caused the layoffs, the cost savings attributed to AI and the timing suggest AI's indirect role in causing harm. This meets the criteria for an AI Incident because the AI system's use has indirectly led to harm to people (job loss).
Microsoft racks up over $500 million in AI savings while slashing jobs

2025-07-10
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools have been used to automate tasks and improve productivity, which has directly resulted in layoffs affecting thousands of workers. The harm here is the loss of employment, which is a significant and clearly articulated harm caused by the use of AI systems. Therefore, this qualifies as an AI Incident due to the direct link between AI use and harm to people (job losses).
Microsoft says AI saved Rs 4,285 crore just days after laying off 9,000 workers

2025-07-10
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Copilot AI assistant, AI writing code) being used to automate work and improve productivity, which has led to cost savings. The layoffs of 9,000 employees are directly connected to these AI-driven efficiency gains, representing a violation of labor rights and harm to people. The AI system's use is a contributing factor to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, so it is not an AI Hazard. The event is not merely complementary information or unrelated news, as it involves direct harm linked to AI use.
Days After Axing 9000 Employees Microsoft Brags AI And Layoffs Helped Save $500 Million

2025-07-10
Mashable India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to automate tasks in call centers and software engineering, leading to the layoff of thousands of employees. This is a direct consequence of AI use causing harm to workers through job loss, which falls under violations of labor rights and harm to groups of people. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI deployment in the workplace.
Microsoft Reveals $500m in AI Savings, Following Layoffs, Fueling Debate Over Automation's Human Cost

2025-07-10
Tekedia
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as Microsoft's AI tools like Copilot and AI-powered call center automation are central to the cost savings and workforce reductions. The use of AI has directly led to harm in the form of layoffs affecting thousands of employees, which constitutes harm to people (a). The layoffs and workforce restructuring are a direct consequence of AI use, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete case where AI deployment has caused significant human harm through job displacement.
Microsoft Reveals $500 Million AI Savings, Days After 9,000 Layoffs

2025-07-10
english
Why's our monitor labelling this an incident or hazard?
The article focuses on Microsoft's financial savings from AI and the timing of layoffs, but does not establish a direct or indirect causal link between AI system use and harm to people or rights. The layoffs are not explicitly caused by AI malfunction or misuse, and no harm from AI is reported. The event provides context on AI's role in corporate efficiency and workforce impact, fitting the definition of Complementary Information rather than an Incident or Hazard.
AI helped Microsoft save $500 million, but at the cost of thousands of jobs

2025-07-11
Windows Report
Why's our monitor labelling this an incident or hazard?
An AI system is involved as Microsoft uses AI in call center operations to save costs. The layoffs represent harm to workers (economic harm), which is a significant harm to groups of people. Although the company has not explicitly stated that AI replaced jobs, the cost savings from AI deployment indirectly contributed to the layoffs. This fits the definition of an AI Incident because the AI system's use indirectly led to harm (job losses).
A week after layoffs linked to AI cost, Microsoft pledges $4B to AI education

2025-07-10
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The article does not describe any AI Incident or AI Hazard. There is no direct or indirect harm caused by AI systems reported, nor is there a plausible future harm event described. Instead, the article centers on Microsoft's initiative to educate and upskill workers for AI-related roles, which is a governance and societal response to AI's impact on labor markets. Therefore, it fits the definition of Complementary Information, as it provides context and updates on AI-related developments and responses without reporting a specific incident or hazard.
Microsoft Touts Rs 4,285 Crore AI Savings Days After Cutting 9,000 Jobs

2025-07-10
The Hans India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., AI-powered tools in call centers, GitHub Copilot for code generation) being used to increase efficiency and reduce workforce size. The layoffs of 9,000 employees are a direct harm to labor rights and employment, caused indirectly by the deployment and use of AI systems that replace human roles. This fits the definition of an AI Incident, as the AI system's use has indirectly led to a violation of labor rights and harm to people through job losses.
Insider Claims Microsoft Saved Half a Billion Dollars by Automating Low-Level Jobs

2025-07-10
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI automation has replaced thousands of human jobs at Microsoft, resulting in layoffs of around 15,000 workers in 2025. The AI systems are actively used in call centers and other roles, directly causing economic harm to employees through job displacement. This fits the definition of an AI Incident as the AI system's use has directly led to harm to groups of people (job losses).
Microsoft Lays Off Staff as Savings From AI Top $500 Million

2025-07-09
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as enhancing productivity and automating tasks, leading to layoffs of thousands of employees. The harm is indirect but real, as AI-driven automation displaces workers, causing economic and labor rights harm. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm to groups of people (workers losing jobs). The article does not merely discuss potential future harm or general AI developments but reports actual layoffs linked to AI use, confirming realized harm.
A week after layoffs linked to AI cost, Microsoft pledges $4B to AI education

2025-07-10
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of workforce changes and education but does not describe any realized harm or plausible future harm caused by AI systems. The layoffs are linked to AI-driven efficiency but are not presented as an AI Incident. The $4 billion investment is a societal and governance response to AI's impact on jobs, aiming to upskill workers. Therefore, this is Complementary Information, providing context and response to AI's evolving role in the workforce without describing an AI Incident or AI Hazard.
Microsoft Saves USD 500 Million Implementing AI in Call Centres Amid Cutting 6,000 Jobs This Year

2025-07-10
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to improve productivity and reduce jobs, indicating AI system involvement in workforce changes. However, the harms described relate to job cuts and economic restructuring, which, while significant, do not constitute direct or indirect injury, rights violations, or other harms as defined for AI Incidents. There is no indication of malfunction, misuse, or plausible future harm beyond the reported layoffs. The article primarily reports on the economic impact and AI adoption strategy, which fits the definition of Complementary Information as it provides context and updates on AI's role in business restructuring and workforce changes without describing a specific AI Incident or Hazard.
Microsoft Touts $500 Million in AI Savings While Slashing Jobs

2025-07-09
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The article highlights AI's role in increasing productivity and cost savings at Microsoft, including workforce reductions. However, it does not describe any direct or indirect harm caused by AI systems, such as injury, rights violations, or other harms. The job cuts are a business decision influenced by AI efficiency gains but do not meet the threshold for an AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information about AI's impact on the workplace and economy.
Microsoft touts US$500 million AI savings while slashing jobs

2025-07-09
The Business Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for productivity improvements and automation, but there is no indication that these AI systems have directly or indirectly caused harm such as injury, rights violations, or disruption. The layoffs are mentioned but are not attributed primarily to AI use, and no harm from AI malfunction or misuse is described. Therefore, this is not an AI Incident or AI Hazard. The article provides contextual information about AI's role in workplace transformation and labor market effects, which fits the definition of Complementary Information as it enhances understanding of AI's societal impact without reporting a specific harm or credible hazard.
AI saved Microsoft $500M while 15,000 lost jobs since January

2025-07-10
NewsBytes
Why's our monitor labelling this an incident or hazard?
The AI system (Microsoft's Copilot AI assistant and AI-generated code) is explicitly mentioned as contributing to job displacement through automation, which is a recognized harm under the framework (harm to people via loss of employment). The layoffs are directly linked to AI's role in increasing productivity and reducing the need for human roles, thus constituting an AI Incident due to indirect harm caused by AI use.
Microsoft racks up over $500 million in AI savings while slashing jobs

2025-07-10
The Manila Times
Why's our monitor labelling this an incident or hazard?
While AI is clearly involved in Microsoft's operations and has led to workforce reductions, the event does not describe any direct or indirect harm caused by AI systems as per the OECD definitions. The layoffs are a business decision influenced by AI-driven efficiency gains but do not meet the criteria for AI Incident or AI Hazard. The article is best classified as Complementary Information because it provides context on AI's impact on the workforce and corporate strategy, enhancing understanding of AI's societal effects without reporting a specific harm or risk event.
Microsoft saves $500 million using AI weeks after laying off 9,000 workers, reports $26 billion Q1 profit

2025-07-10
The American Bazaar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to improve efficiency and reduce costs, including in call centers and software engineering. However, the layoffs are not definitively linked as caused by AI replacing workers but rather as part of broader restructuring. There is no report of harm such as injury, rights violations, or other significant harms directly or indirectly caused by AI. The focus on AI education initiatives and investments further supports this as complementary information about AI's role in society and economy. Hence, the event fits the definition of Complementary Information rather than an Incident or Hazard.
0

2025-07-10
developpez.net
Why's our monitor labelling this an incident or hazard?
The article clearly states that AI systems are being used to automate tasks previously done by humans, leading to the elimination of 15,000 jobs at Microsoft. This constitutes harm to labor rights and workers, which falls under violations of human rights and labor rights as defined in the framework. The AI system's use is a direct contributing factor to this harm. Therefore, this event qualifies as an AI Incident due to realized harm caused by the use of AI systems in workforce reduction.
2

2025-07-10
developpez.net
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Microsoft is using AI agents to replace human jobs, causing mass layoffs affecting thousands of employees. The AI system's deployment and use are a direct factor in the harm (job loss) experienced by workers. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people (loss of employment). The harm is realized, not just potential, and the AI system's role is pivotal in the layoffs. Hence, this is classified as an AI Incident.
Microsoft internally announces 500 million dollars in savings achieved thanks to AI, just days after cutting 9,000 jobs, while mandating internal use of its AI tool Copilot

2025-07-10
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Microsoft Copilot and other AI tools) to automate tasks previously done by humans, resulting in large-scale layoffs (9,000 jobs recently, 15,000 in total this year). This is a direct link between AI use and harm to employees through job loss, which is a recognized form of harm to people. The forced adoption of AI tools and linking AI use to performance reviews further indicates the AI system's role in workplace dynamics and pressures. Although Microsoft denies AI as the main cause, the evidence and employee reports suggest AI-driven automation is a significant factor. Hence, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly or indirectly led to harm to groups of people (employees losing jobs).
Microsoft announces 500 million dollars in savings achieved thanks to AI, a decoy masking mass layoffs. 15,000 positions have been cut in three months

2025-07-10
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly links the use of AI systems for automation and productivity gains to the direct consequence of large-scale job cuts at Microsoft. The layoffs represent harm to people (loss of employment and associated social harms), which fits the definition of an AI Incident where the AI system's use has directly led to harm. Although the article also discusses strategic and economic considerations, the core event is the realized harm caused by AI-driven automation leading to mass layoffs, qualifying it as an AI Incident rather than a hazard or complementary information.
Microsoft President Brad Smith on AI investments, job cuts, and the uncertain future of work

2025-07-11
GeekWire
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI system causing harm or malfunction leading to injury, rights violations, or other harms. It also does not describe a plausible future harm directly linked to AI system use or malfunction. Instead, it provides context on Microsoft's business decisions, workforce changes, and AI investment plans, which fall under broader societal and economic impacts rather than an AI Incident or Hazard. Therefore, it is best classified as Complementary Information, as it provides supporting context about AI's impact on work and corporate responses without reporting a new AI Incident or Hazard.
Microsoft Reaps $500M Saving Through AI Amid Significant Layoffs

2025-07-12
Tekedia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Copilot, ChatGPT) being used extensively in customer support and software development, generating a significant portion of new code and handling interactions that previously required human labor. The layoffs following these AI-driven savings imply that AI use has indirectly led to harm in the form of job losses and economic insecurity for thousands of employees, which constitutes harm to labor rights and communities. Although the company denies AI was the predominant factor, the correlation and financial savings attributed to AI strongly indicate AI's pivotal role in these harms. Therefore, this event qualifies as an AI Incident due to indirect harm caused by AI-driven automation leading to workforce reductions and associated social and economic impacts.
Has the AI Apocalypse Arrived? Tens of Thousands Being Laid Off in Big Tech - AI has to Either Replace Them or AI Spending Must Stop

2025-07-12
SGT Report
Why's our monitor labelling this an incident or hazard?
The article does not report any direct or indirect harm caused by AI systems, nor does it describe a plausible future harm from AI malfunction or misuse. The layoffs are linked to AI spending and strategic shifts but not to AI systems causing harm or risk of harm. The focus is on economic and labor market impacts and corporate strategy, which fits the definition of Complementary Information about societal and governance responses to AI developments rather than an AI Incident or Hazard.
Microsoft: One of its executives sparks controversy after advising the 9,000 employees laid off because of AI to turn to... AI

2025-07-13
Tribunal Du Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in programming (20-30% of programming done with AI) leading to layoffs, which is a use of AI causing economic harm (job loss). However, the layoffs are a broader economic and organizational decision rather than a direct or indirect AI Incident as defined (no injury, no violation of rights in a legal sense, no malfunction, no direct harm caused by AI outputs). The controversial message is a social reaction, not an AI harm. Therefore, this is not an AI Incident or AI Hazard. It is Complementary Information about AI's societal impact and corporate adoption, including public reaction and strategic shifts.