Microsoft's AI Editor Causes Job Losses and Misidentification Incident at MSN News

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Microsoft replaced dozens of MSN news staffers with AI editors, eliminating their jobs. The AI system subsequently confused two mixed-race singers from Little Mix and published the wrong image, causing public offense and reputational harm and highlighting the risks AI-driven editorial errors pose to individuals and communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI software replacing human editors and a specific error where the AI misidentified a mixed-race individual, leading to public offense. This constitutes harm caused by the AI system's malfunction in content generation and editorial decisions. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and malfunction.[AI generated]
AI principles
Accountability, Fairness, Transparency & explainability, Robustness & digital security, Respect of human rights, Human wellbeing, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers, Other, General public

Harm types
Economic/Property, Reputational, Psychological

Severity
AI incident

Business function:
Other

AI system task:
Recognition/object detection, Organisation/recommenders, Content generation


Articles about this incident or hazard

Microsoft cuts MSN news staffers in move toward AI editors

2020-07-15
Daily Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software replacing human editors and a specific error where the AI misidentified a mixed-race individual, leading to public offense. This constitutes harm caused by the AI system's malfunction in content generation and editorial decisions. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and malfunction.

Microsoft cuts MSN news staffers in move toward AI editors

2020-07-15
St. Augustine Record
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Microsoft is replacing human editors with AI editors. The AI system's malfunction caused the misidentification of two mixed-race singers, leading to public offense and reputational harm. This harm falls under violations of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through erroneous content publication.

Microsoft cuts MSN news staffers in move toward AI editors

2020-07-15
WVVA TV
Why's our monitor labelling this an incident or hazard?
The article describes a corporate decision to replace human editors with AI editors, which involves the use of AI systems for news editing. However, there is no mention of any harm caused by this transition, such as misinformation, bias, or rights violations. The event is about organizational change and AI adoption, not about an AI incident or hazard. Therefore, it fits best as Complementary Information, providing context on AI's evolving role in media but not reporting an incident or hazard.

Microsoft jettisons dozens of full-time MSN jobs in favor of artificial intelligence

2020-07-15
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to curate news stories replacing human editors, and an AI-related error occurred where the system confused two mixed-race singers, leading to offense and public complaint. This constitutes harm to individuals and communities through misrepresentation and racial insensitivity, which falls under harm to communities and violations of rights. The AI system's malfunction directly led to this harm, qualifying the event as an AI Incident.

Microsoft cuts MSN news staffers in move toward AI editors

2020-07-15
St. Louis Post-Dispatch
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Microsoft is moving towards AI editors for MSN news. The AI's malfunction caused a misidentification of a person in a published news story, which led to offense and reputational harm. This constitutes harm to individuals and communities through misinformation and misrepresentation, fitting the definition of an AI Incident. The harm is realized, not just potential, as the misidentification and public complaint have occurred.

AI are replacing more journalists and editors at MSN

2020-07-14
Windows Central
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI replacing human editorial staff and causing a specific error (confusing two multiracial singers), which is a direct outcome of AI use leading to misinformation. This misinformation can harm the individuals misrepresented and the community's trust in news, fitting the definition of harm to communities. The AI system's use and malfunction (confusion/error) directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Microsoft cuts MSN news staffers in move toward AI editors

2020-07-15
nwi.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in editorial decisions and content management, replacing human editors. The AI's malfunction (misidentification of individuals in images) directly led to harm by offending a person due to incorrect representation. This constitutes an AI Incident because the AI system's use caused realized harm related to misrepresentation and offense, which falls under harm to communities and potentially a violation of rights. Therefore, the event qualifies as an AI Incident.

Microsoft jettisons dozens of full-time MSN jobs in favor of artificial intelligence

2020-07-15
Napa Valley Register
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replace human editors in curating MSN news content. The AI system's error in confusing two mixed-race singers and using the wrong image caused offense and reputational harm, which is a form of harm to communities and a violation of rights. This harm has already occurred, making it an AI Incident. The event involves the use and malfunction of an AI system leading directly to harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Microsoft cuts MSN news staffers in move toward AI editors

2020-07-15
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software replacing human editors and an AI error that led to the wrong image being used in a news story, which offended a person involved. This constitutes a direct harm caused by the AI system's malfunction (misidentification of individuals), fitting the definition of an AI Incident due to harm to communities or individuals. The staff cuts themselves are not harm but the AI error is a realized harm linked to the AI system's use.

MSN News Replaces Dozens Of Human Staffers With AI Programs - Walid Shoebat

2020-07-16
Walid Shoebat
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI programs are replacing human staffers, resulting in layoffs and job insecurity. This is a direct use of AI leading to harm to people through loss of employment and economic hardship, which fits the definition of an AI Incident under harm to people or communities. The harm is realized, not just potential, as layoffs have already occurred. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Microsoft plans to lay off under 1,000 employees this week, possibly affecting MSN.com and Azure cloud divisions

2020-07-17
chinaz.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by MSN news to replace human editors, which has directly led to layoffs of contract and full-time employees, including leadership. This is a clear example of harm to people through job loss caused by the use of an AI system. Therefore, this qualifies as an AI Incident due to realized harm resulting from the use of AI in the workplace.

AI takes over jobs: Microsoft makes sweeping cuts to MSN website editorial team - cnBeta.COM (mobile edition)

2020-07-14
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to automate editorial decisions, which is a clear involvement of AI systems. The layoffs of human editors due to AI replacing their roles constitute a harm related to employment and labor rights, which falls under violations of labor rights. Since the harm (job loss) has already occurred due to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.

AI takes jobs: Microsoft reportedly expected to accelerate staff cuts at its MSN news site | Anue鉅亨 - US stocks

2020-07-14
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as replacing human editors, causing layoffs and errors in news content. The layoffs represent harm to employment, a significant social harm, and the AI's misidentification error indicates malfunction or misuse leading to harm. Therefore, this qualifies as an AI Incident due to realized harm from AI use in the workplace and content management.

[Tech News] Microsoft lays off MSN portal staff; replacing editors with AI sparks controversy

2020-07-14
mitbbs.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software to replace human editors at MSN, resulting in layoffs (harm to labor rights) and errors in content management (potential harm to communities through misinformation or degraded news quality). The AI system's malfunction is a direct cause of harm, and the job losses are a direct consequence of AI use. Hence, the event meets the criteria for an AI Incident due to realized harms linked to AI development and use.

AI replaces human editors at Microsoft's MSN site; dozens of employees laid off

2020-07-14
chinaz.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as replacing human editors in the news content curation and editing process at MSN. This use of AI has directly led to layoffs of employees, which constitutes harm to labor rights and employment. Therefore, this event qualifies as an AI Incident due to violations of labor rights caused by the use of AI systems in the workplace.

AI may replace traditional labor: Microsoft makes major cuts to MSN website editorial team

2020-07-14
金融界网
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Microsoft used AI software to replace human editors, resulting in layoffs of dozens of contract workers and some full-time employees. This is a direct consequence of AI system use leading to harm in the form of job loss and labor rights impact. The AI system's role is pivotal in causing this harm, fulfilling the criteria for an AI Incident under violations of labor rights. The event is not merely a potential risk or a complementary update but a realized harm caused by AI deployment.

Microsoft to lay off full-time MSN employees, replacing editors with AI

2020-07-15
Baidu.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replace human editors, a direct use of AI. The layoffs and errors caused by the AI system indicate harm to employees (job loss) and potential harm to the business (harm to property or community). Since the harm (job loss and operational errors) has already occurred due to AI use, this qualifies as an AI Incident.

Microsoft streamlines MSN team again as artificial intelligence largely replaces human editors

2020-07-14
Sina
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI software replaced human editors, causing layoffs and errors in news content. The AI system's malfunction (misidentification) and its role in replacing human jobs constitute direct harm. The layoffs represent harm to people (employment harm), and the content errors represent harm to communities relying on accurate information. Hence, this event meets the criteria for an AI Incident.