Duolingo lays off 10% of contractors amid AI push


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Duolingo cut around 10% of its US-based contractors, replacing them with AI-driven content generation. Humans will still review AI outputs for accuracy, though former contractors report errors in lessons since the shift. Duolingo aims to optimize costs, reflecting a broader trend of AI displacing human labor.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI chatbots (an AI system) have been used to replace human contract workers, leading to job cuts. This is a direct use of AI causing harm to workers' employment and labor rights, fitting the definition of an AI Incident under violations of labor rights. The harm is realized, not just potential, as workers were fired due to AI substitution.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Robustness & digital security, Transparency & explainability

Industries
Education and training, Consumer services

Affected stakeholders
Workers, Consumers

Harm types
Economic/Property, Reputational, Psychological

Severity
AI incident

Business function
Research and development, Monitoring and quality control

AI system task
Content generation


Articles about this incident or hazard


Some Duolingo workers were fired in favor of AI

2024-01-10
Washington Post
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI chatbots (an AI system) have been used to replace human contract workers, leading to job cuts. This is a direct use of AI causing harm to workers' employment and labor rights, fitting the definition of an AI Incident under violations of labor rights. The harm is realized, not just potential, as workers were fired due to AI substitution.

Duolingo Lays Off 10% Of Contract Workers

2024-01-10
http://www.radiojamaicanewsonline.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo laid off 10% of its contract workers to rely more on AI for content generation. This is a clear example of AI use leading to layoffs, which constitutes harm to labor rights. Although the layoffs are indirect consequences of AI adoption rather than a malfunction or misuse, the AI system's role in reducing staff is pivotal. Therefore, this qualifies as an AI Incident under the definition of harm to labor rights caused by AI use.

Duolingo cuts contractors as it further embraces AI

2024-01-09
semafor.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replace human labor, which directly impacts employment, a form of harm related to labor rights and economic well-being. Although the company downplays AI as the sole cause, the AI system's use is a contributing factor to the workforce reduction. This constitutes an AI Incident because it involves realized harm (job loss) linked to AI use.

Duolingo lays off staff as language learning app shifts toward AI

2024-01-09
CNN
Why's our monitor labelling this an incident or hazard?
The article details a company's strategic shift to integrate AI into its platform, leading to layoffs of contract workers. While AI is involved in content creation and review, there is no direct or indirect harm to individuals, communities, or rights reported. The layoffs are a business decision related to AI adoption, not an AI Incident or Hazard. The article also references broader industry trends and responses to AI, which provide context but do not describe specific harms or risks. Therefore, this is Complementary Information about AI's impact on employment and industry practices, not an incident or hazard involving AI harm.

The AI 'effect': Duolingo cuts 10% of contractual jobs

2024-01-09
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI) in content creation, which has led to a reduction in contractual jobs. This is an indirect harm related to labor rights and employment caused by AI adoption. Although the company states employees are not being directly replaced by AI, the reduction in jobs due to AI use constitutes a harm under the framework. Therefore, this qualifies as an AI Incident due to indirect harm to labor rights and employment.

Duolingo replaces 10pc of translators with AI

2024-01-09
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI has replaced human translators, resulting in layoffs of contractors. This is a direct consequence of AI use leading to harm in the form of job loss, which is a violation of labor rights and employment security. The AI system is clearly involved in the use phase, replacing human labor. The harm is realized, not just potential, so this is an AI Incident rather than a hazard or complementary information. The event is not unrelated as it directly involves AI systems causing harm.

Duolingo fires 10% of translation contractors in favour of AI

2024-01-09
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI translation system replacing human contractors, which is an AI system use. The harm is primarily economic (job loss) and user dissatisfaction, which is a social impact but not clearly a violation of rights or other defined harms. There is no direct or indirect harm such as injury, rights violation, or critical infrastructure disruption. The article also references similar past events to provide context. Hence, it fits the definition of Complementary Information, describing societal and economic responses to AI adoption rather than a new AI Incident or Hazard.

As Duolingo Taps AI for Translation, Human Contractors Lose Their Jobs

2024-01-08
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems for translation and content generation, replacing human contractors. While this involves AI system use and leads to job displacement, the framework does not classify job loss alone as an AI Incident unless it involves violations of labor rights or other harms. There is no indication of legal violations, health harm, or other direct harms. The event mainly informs about AI adoption consequences and societal debate, fitting the definition of Complementary Information rather than an Incident or Hazard.

Another Major Industry Just Got Burned To The Ground. Watch Out College Students!

2024-01-08
The Daily Caller
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems are being used to replace human contractors at Duolingo, leading to layoffs. This is a direct harm to workers' employment and labor rights, which is a recognized category of AI harm under the framework. The AI system's use in scaling content and providing feedback is the cause of the reduced need for human labor. Therefore, this event qualifies as an AI Incident due to realized harm (job loss) caused by AI use.

Duolingo sheds some human workers as AI threatens to upend the $65 billion translation industry

2024-01-12
Fast Company
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for translation that has directly led to the reduction of human contractor roles, which constitutes a labor rights impact. The displacement of workers due to AI automation is a recognized harm under the framework, specifically a violation or breach of labor rights. Although the company states that only a small minority were offboarded and that contracts expired, the AI's role in replacing human translation work is clear and directly linked to the harm of job loss or reduced employment opportunities. Therefore, this qualifies as an AI Incident due to realized harm to labor rights caused by AI use.

"We No Longer Need As Many People": Duolingo Fires 10% Of Contractors, Will Replace Them With AI

2024-01-09
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to replace human labor, resulting in the firing of 10% of contractors at Duolingo. This is a direct harm to workers' employment, which falls under harm to people. The AI system's deployment is the causal factor for the layoffs, fulfilling the criteria for an AI Incident. The article also references broader labor market impacts and social risks, but the primary event is the realized harm of job loss due to AI use, not just a potential future risk or complementary information.

Duolingo employees lose their jobs because of AI, company says they don't need that many people now

2024-01-10
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo is using generative AI for text, speech, and image creation, which has reduced the need for human contractors, resulting in layoffs. This is a direct consequence of AI use leading to harm in the form of job loss, a violation of labor rights and economic harm to affected workers. The involvement of AI in the development and use phases is clear, and the harm is realized, not just potential. Hence, the event meets the criteria for an AI Incident.

Duolingo turns to AI to generate content, cuts 10 percent of its contractors

2024-01-09
Mashable
Why's our monitor labelling this an incident or hazard?
The article details the use of AI to generate content, which has led to a reduction in contractor roles. However, there is no direct or indirect harm reported such as injury, rights violations, or other significant negative impacts. The event is about AI adoption and its impact on employment, but this is a common economic and social effect rather than an AI Incident as defined. It also does not present a plausible future harm scenario or a hazard. Therefore, it is best classified as Complementary Information, providing context on AI's impact on the workforce and company operations rather than an incident or hazard.

Duolingo lays off 10% of its contract workers as the company puts more faith in AI

2024-01-09
TechSpot
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is a contributing factor to the layoffs, indicating the use of generative AI to automate content creation and other tasks. The layoffs represent realized harm to the affected contractors, fulfilling the criteria for an AI Incident under harm to groups of people (employment harm). Although the company downplays AI as a "straight replacement," the causal link to AI use and job cuts is clear. Hence, this is an AI Incident rather than a hazard or complementary information.

Duolingo cuts 10% of contractors as it uses more AI to create app content

2024-01-09
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo uses generative AI to produce content faster, resulting in a reduction of contractor roles. This is a direct consequence of AI use impacting employment, which falls under harm category (c) - violations of labor rights or significant labor-market disruption. The AI system's use in content generation is central to the event, and the job cuts are a direct outcome of this AI deployment. Therefore, this qualifies as an AI Incident due to realized harm linked to AI use in the workforce.

Duolingo Fires Translators in Favor of AI

2024-01-09
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for translation and content generation) leading to workforce reductions and changes in job roles. Although this reflects significant labor market impact and potential quality concerns, there is no direct evidence of realized harm such as injury, rights violations, or other significant harms as defined. The potential for future harm exists but is not concretely demonstrated. Therefore, this event is best classified as Complementary Information, as it provides context on AI's impact on labor and service quality without reporting a specific AI Incident or AI Hazard.

Duolingo Cuts 10% of Contractors While Expanding Use of AI

2024-01-08
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to replace contractor work, which is a direct use of AI. The harm is economic displacement of contractors, but there is no indication of unlawful or harmful practices such as rights violations or injury. The event is a real-world example of AI's impact on labor markets, but it does not meet the threshold for an AI Incident because it does not describe a breach of rights or direct harm. It also is not an AI Hazard since the harm is already occurring, not just potential. The article also discusses broader societal concerns and responses, which aligns with the definition of Complementary Information.

Duolingo Layoffs: US-Based Language Learning Company Lays Off 10% of Its Contract Translators and Adopts Generative AI To Develop Its Content, Says Report

2024-01-09
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI in content creation and its role in reducing contract translator jobs. However, the event does not report any direct or indirect harm to health, rights, infrastructure, property, or communities as defined for AI Incidents. The layoffs are a consequence of AI adoption but do not constitute a violation or harm under the framework. There is no indication of plausible future harm beyond the current economic impact, which is not classified as an AI Incident or Hazard under the definitions. The event is primarily an update on AI adoption and its workforce impact, fitting the category of Complementary Information.

Some Duolingo Workers Were Fired in Favor of AI

2024-01-10
ITPro Today
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo used AI tools to replace human contractors, resulting in layoffs and economic harm to those workers. This is a direct consequence of AI use. Additionally, the reported decline in lesson quality due to AI-generated content implies harm to the community of learners relying on the app. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as workers have been fired and quality issues are reported. Hence, the event is best classified as an AI Incident.

Duolingo Sounds AI Layoffs Alarm as Human Translators Replaced

2024-01-09
Tech.co
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI replacing human translators, indicating AI system involvement in translation tasks. The layoffs are a consequence of this AI use. However, the article does not report any direct harm such as injury, rights violations, or other significant harms caused by the AI system. The layoffs and social backlash are real but do not meet the threshold for an AI Incident or AI Hazard under the definitions provided. The focus is on societal response and workforce impact, making it Complementary Information.

Duolingo Utilizes AI to Streamline Content Creation, Resulting in Contractor Reductions

2024-01-09
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo has reduced its contractor workforce by about 10% due to the adoption of generative AI for content creation. This is a direct consequence of AI use leading to job displacement, which constitutes a harm to labor rights and job security. Although full-time employees were not affected, the contractors' job losses are a clear negative impact caused by AI deployment. This fits the definition of an AI Incident because the AI system's use has directly led to harm (economic and labor-related) to a group of people. The event is not merely a potential risk or a general update; it describes an actual outcome of AI use causing harm.

Duolingo Embraces AI Revolution, Lays Off Contract Workers

2024-01-10
The Tech Report
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI (GPT-4) in Duolingo's operations leading to layoffs of contract workers, which is a direct consequence of AI use. This constitutes a violation or breach of labor rights as workers are displaced due to AI adoption. The harm is realized and not merely potential, fulfilling the criteria for an AI Incident. The event is not merely a product launch or general AI news but involves concrete workforce impact linked to AI use, distinguishing it from Complementary Information or AI Hazard.

Duolingo lays off staff as language learning app shifts toward AI

2024-01-09
WSIL
Why's our monitor labelling this an incident or hazard?
The article details a company's strategic shift to integrate AI into its operations, leading to layoffs of contract workers. While AI is involved in the development and use phases, there is no direct or indirect harm to individuals, communities, or infrastructure reported. The layoffs are a business decision related to AI adoption, not an AI Incident or Hazard. The article also provides context on broader industry trends regarding AI and employment. Therefore, this is Complementary Information about AI's impact on the workforce and industry, not an incident or hazard involving harm.

Duolingo's latest move translates to concerns about the future of work

2024-01-10
The Daily Courier
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Duolingo uses generative AI for translation tasks, replacing human contractors. The event stems from the use of AI in the workplace, leading directly to harm in the form of job losses for contractors, which is a form of economic and labor harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (job displacement) affecting workers' rights and employment conditions. The article also discusses broader labor concerns and regulatory responses, but the core event is the realized harm from AI replacing human labor.

Duolingo's latest move translates to concerns about the future of work

2024-01-09
East Oregonian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replace human contractors performing translation tasks, resulting in layoffs of about 10% of contractors. This is a direct consequence of AI use leading to harm in the form of job loss and labor rights concerns. The involvement of AI is clear and the harm is realized, not just potential. The event fits the definition of an AI Incident because it involves the use of AI leading to a breach of labor rights and economic harm to workers. Although full-time employees are not affected, the contractors' displacement is significant and directly linked to AI deployment.

Duolingo Embraces AI, Trims Contractors Amidst Efficiency Drive

2024-01-10
Tekedia
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (generative AI for content creation and feedback) and their use leading to workforce reductions. However, no actual harm or violation is reported; the workforce reduction is a business decision linked to AI efficiency, not an incident causing harm as defined. The concerns about labor market disruption are prospective and general, not specific harms caused by this event. The article also discusses broader societal responses and industry trends, which are complementary information. Therefore, the event is best classified as Complementary Information, as it provides context on AI adoption and its labor market implications without describing a specific AI Incident or AI Hazard.

Duolingo's AI-driven job cuts are a no brainer - here's why

2024-01-10
ITPro
Why's our monitor labelling this an incident or hazard?
The article clearly states that Duolingo's AI system (large language models) is used to automate translation tasks, directly leading to layoffs and job losses. This is a direct harm to workers' livelihoods, fitting the definition of harm to people. The AI system's use is the pivotal factor in causing this harm. Although the article frames this as a business decision and a natural progression of automation, the resulting job cuts are a realized harm caused by AI deployment. Hence, this is an AI Incident rather than a hazard or complementary information.

Language-learning app Duolingo has been using artificial intelligence to replace human workers, resulting in layoffs for contract writers and translators.

2024-01-10
Bollyinside
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is being used to replace human workers, causing layoffs, which is a violation of labor rights and thus a harm under the AI Incident definition. Additionally, the errors in lessons caused by AI-generated content can be seen as harm to the quality of educational content, impacting users (harm to communities). Therefore, this qualifies as an AI Incident due to realized harms linked to AI use.

Duolingo cut 10% of its contractor workforce as the company embraces AI | TechCrunch

2024-01-09
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (GPT-4 and proprietary AI) used in place of human contractors for translation and content creation tasks. The reduction in workforce is directly attributed to AI adoption, causing harm to workers through job loss and economic insecurity. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a group of people (contractors). Although the company disputes calling it layoffs, the effect is a reduction in jobs due to AI, which is a labor rights-related harm. Hence, the classification is AI Incident.

Duolingo Layoffs 2024: What to Know About the Latest DUOL Job Cuts

2024-01-09
InvestorPlace
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI) in the company's operations, which has indirectly led to harm in the form of job losses (layoffs of contractors). This constitutes harm to people (workers losing jobs), even though the layoffs are not a direct one-to-one replacement and are only partly attributed to AI deployment. The event therefore meets the criteria for an AI Incident because the AI system's use has indirectly led to harm (job loss). The article does not describe a potential future harm or a hazard scenario, nor is it merely complementary information or unrelated news. Hence, the classification is AI Incident.

Duolingo lays off workers to replace them with AI

2024-01-09
ReadWrite
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems being used to replace human contractors, leading to layoffs or non-renewal of contracts, which is a direct labor market harm. The harm is realized, not just potential, as workers have lost contracts. The AI system's use in generating content and feedback is cited as a reason for reduced human labor needs. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a group of people (contract workers) through job displacement, a labor rights violation. Although no full-time staff were affected, the contractors' loss of work is a significant harm. The social backlash and cancellation plans further indicate community impact. Hence, the classification is AI Incident.

"An AI took my job": Duolingo lays off 10% of contractors; artificial intelligence will do their work

2024-01-09
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems are being used to perform tasks previously done by human contractors, resulting in job losses. This constitutes harm to individuals' employment, which is a significant harm under the framework (harm to people). The AI system's use is directly causing this harm, making this an AI Incident. Although the company mentions other factors, the AI's role in replacing human work is clear and direct.

Duolingo lays off 10% of its contractors. Now an AI will create content for the app in their place

2024-01-09
eju.tv
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo has reduced contractor positions by about 10%, replacing their work with AI-generated content. This is a direct consequence of AI system use in the company's operations, leading to job losses for contractors. The harm is a labor rights violation, which fits the AI Incident definition. Although full-time employees were not affected, the contractors' dismissal due to AI use is a clear realized harm. Hence, this is not merely a potential risk or complementary information but an AI Incident involving labor rights harm.

Duolingo feels the tech crisis and lays off 10% of its staff

2024-01-09
Expansión
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (GPT-4) being used to generate content and interactive tools, which led to a reduction in contractor roles. However, the reduction is a business decision due to increased efficiency, not a harm caused by AI malfunction or misuse. There is no evidence of injury, rights violations, or other harms directly or indirectly caused by the AI system. The event informs about AI's role in operational changes and product enhancements, fitting the definition of Complementary Information rather than an Incident or Hazard.

Duolingo lays off 10% of its contractors. Now an AI will create content for the app in their place

2024-01-09
Genbeta
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo has reduced contractor positions by 10%, replacing their work with generative AI content creation. This is a direct use of AI leading to job loss, which is a form of harm to labor rights and employment. The AI system's development and use have directly led to this harm. Although the company claims no full-time employees were affected, the contractors' layoffs are a clear negative impact caused by AI deployment. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to AI use in the workplace.

Your language teacher on Duolingo could now be an AI

2024-01-09
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems in Duolingo's content creation and voice generation, which led to a reduction in contractor roles. While this reflects a significant impact of AI on employment, the event does not describe any direct or indirect harm such as labor rights violations, health injury, or other harms defined under AI Incident. The company's statement clarifies that no full-time employees were affected and that AI is used as a tool rather than a direct replacement. There is also no indication of plausible future harm beyond the current workforce adjustment. Thus, the event is an update on AI adoption and its operational impact, fitting the definition of Complementary Information rather than an Incident or Hazard.

Duolingo begins replacing its staff with artificial intelligence

2024-01-11
MVS Noticias
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in content generation and workforce changes, but the layoffs are officially attributed to contract completions, not AI replacement. There is no explicit or implicit direct or indirect harm caused by AI malfunction or misuse. The event describes a broader societal and economic impact of AI adoption, with no specific incident of harm or plausible imminent harm caused by AI. Thus, it fits the definition of Complementary Information, providing context on AI's role in workforce changes and corporate strategy, rather than reporting an AI Incident or Hazard.

Duolingo's workers used to do translation. Now, some of those who remain will review an AI's translations

2024-01-09
Xataka
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems replacing human translators and changing job roles, indicating AI system involvement in use. However, it does not describe any direct or indirect harm caused by the AI outputs, such as injury, rights violations, or other harms defined under AI Incident. The job cuts are a consequence of AI adoption but do not constitute a direct AI Incident as per the framework. There is no indication of plausible future harm beyond the reported job displacement, which is already occurring. The article also provides broader context on AI's impact on employment, making it primarily complementary information about AI's societal effects rather than a new incident or hazard.

Duolingo parts ways with 10% of its contractors: AI replaced them

2024-01-08
RPP noticias
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI, GPT-4) to produce content that was previously created by human contractors. This has directly led to the termination of contracts for 10% of these workers, which constitutes harm to employment and labor rights. Since the AI system's use has directly caused this harm, this qualifies as an AI Incident under the framework's definition of harm to labor rights.

Is artificial intelligence taking over? Duolingo laid off 10% of its translators

2024-01-09
Noticias RCN
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is being used to generate translations, leading to the dismissal of human translators. This is a direct consequence of AI use causing harm to workers' employment, which is a violation of labor rights. The harm is realized, not just potential, as employees have been laid off. The AI system's development and use are central to this harm. Hence, the event meets the criteria for an AI Incident involving violation of labor rights.

Duolingo lays off 10% of its contractors because of AI. Its decision reignites the debate over labor risks

2024-01-09
3D Juegos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI system integration (GPT-4 and chatbots) in Duolingo's platform and links this to the decision to reduce contractor staff by 10%. This is a direct consequence of AI use affecting employment, which falls under violations of labor rights and harm to people. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (job loss) for workers.

AI impacts employment

2024-01-12
El Financiero
Why's our monitor labelling this an incident or hazard?
The article primarily addresses the potential and ongoing impacts of AI on employment, including warnings and preparatory measures by companies and labor groups. However, it does not report any realized harm or incidents caused by AI systems, such as job losses directly attributable to AI or violations of rights. The concerns and responses indicate plausible future impacts but no confirmed incidents. Therefore, this qualifies as Complementary Information, as it provides context and updates on societal and governance responses to AI's labor market effects without describing a specific AI Incident or Hazard. The unrelated sections about the airline and business meeting do not affect this classification.

Duolingo drops contractors and now uses AI to create content in the app

2024-01-09
Portafolio.co
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems to create app content, which leads to the reduction of contractor roles. This shows AI system use and its impact on labor. However, the event does not describe any direct or indirect harm such as injury, rights violations, or other significant harms caused by the AI system. The labor displacement is a societal effect but not framed as a violation or harm under the definitions. There is no indication of plausible future harm beyond the current situation. The article also discusses broader industry and societal responses to AI's impact on employment, which fits the definition of Complementary Information. Hence, the event is not an AI Incident or AI Hazard but Complementary Information.

Duolingo replaces part of its translator staff with artificial intelligence

2024-01-09
Diario Primicia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (GPT-4) replacing human translators, which is an AI system involvement in workforce changes. However, the event does not describe any direct or indirect harm such as injury, rights violations, or other significant harms caused by the AI system. The layoffs are a business decision related to AI adoption but do not constitute a breach of labor rights or other harms under the definitions. There is no indication of plausible future harm beyond the current workforce impact. The company's response and clarification also suggest no ongoing incident or hazard. Thus, the event is best categorized as Complementary Information, providing context on AI's role in changing employment but not describing an AI Incident or Hazard.

Duolingo is replacing employees with AI translators

2024-01-08
Androidphoria
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI translation systems replacing human translators, leading to layoffs and changes in employment conditions. This constitutes a violation of labor rights, a form of harm under the AI Incident definition. The harm is realized, not just potential, as employees have been dismissed and their roles altered due to AI deployment. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The lack of official confirmation does not negate the reported harm, as the evidence from affected workers and user reports is sufficient to classify this as an AI Incident.

Duolingo replaces part of its translator workforce with AI

2024-01-08
Últimas Noticias
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (GPT-4) replacing human translators, leading to layoffs affecting workers' employment, which is a violation of labor rights under the framework. The harm is realized (workers losing jobs), and the AI system's use is the direct cause. Although the company disputes the extent of layoffs, the reported impact on workers is sufficient to classify this as an AI Incident. The event is not merely a product announcement or general AI news, but describes a concrete harm caused by AI deployment in the workforce.

There is a risk that AI will cause human extinction, scientists say

2024-01-11
Inovação Tecnológica
Why's our monitor labelling this an incident or hazard?
The article centers on expert predictions and concerns about possible future harms from AI, including existential risks and societal impacts like misinformation and manipulation. No specific AI system is described as having caused actual harm or malfunctioned leading to harm. The harms discussed are potential and forecasted, not realized incidents. Therefore, this qualifies as an AI Hazard, since it plausibly could lead to AI incidents in the future, but no incident has yet occurred. It is not Complementary Information because it is not updating or adding to a known incident or hazard, but rather presenting new survey data about risk perceptions. It is not Unrelated because it clearly involves AI systems and their potential impacts.

US state seeks to protect artists from AI with new legislation

2024-01-12
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of voice replication and deepfake technologies, which are recognized as potential sources of harm to artists' rights and intellectual property. Since the legislation is proposed to prevent such harms and no actual incident of harm has been reported, this constitutes a plausible future risk rather than a realized harm. The article mainly reports on legislative and governance responses to AI-related risks, which fits the definition of Complementary Information. There is no direct or indirect AI Incident described, nor is there an immediate AI Hazard event causing or leading to harm. Therefore, the classification is Complementary Information.

Duolingo replaces employees with AI to create lessons in the app

2024-01-10
Estadão
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo used AI to replace human contractors in creating lesson content, resulting in layoffs (economic harm to workers) and a decline in lesson quality (harm to users and communities). The AI system's use is directly linked to these harms. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's use in production.

Mass unemployment in the era of artificial intelligence: João Mendes Miranda explains and points to solutions

2024-01-11
Jornal Floripa - Notícias de Florianópolis - Santa Catarina Brasil
Why's our monitor labelling this an incident or hazard?
The article primarily addresses the potential future societal impacts of AI, especially regarding employment, and the importance of education and adaptation. It does not report a concrete incident where an AI system caused harm or a specific hazard event with imminent risk. The discussion of risks and calls for regulation, as well as educational responses, fit the definition of Complementary Information, as they provide context, societal responses, and updates related to AI's broader impact without describing a new AI Incident or AI Hazard.

Duolingo lays off employees amid AI-related changes

2024-01-10
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models like GPT-4) in the development and operation of Duolingo's platform, leading to workforce reductions. This is a case of AI use indirectly causing harm in the form of job losses, which is a significant and clearly articulated harm related to employment rights and labor conditions. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm (job displacement).

There is a 5% chance that AI will cause human extinction, experts say

2024-01-09
Tempo.pt | Meteored
Why's our monitor labelling this an incident or hazard?
The article centers on expert predictions and risk assessments regarding the future development of AI and its potential to cause catastrophic harm, including human extinction. This fits the definition of an AI Hazard, as it describes circumstances where AI development could plausibly lead to significant harm in the future. There is no description of realized harm or an incident caused by AI, so it is not an AI Incident. It is not Complementary Information because it is not updating or providing follow-up on a specific past AI Incident or Hazard, nor is it unrelated since it clearly involves AI and its risks. Therefore, the correct classification is AI Hazard.

Duolingo lays off contract employees, replaced by AI technology

2024-01-12
suara.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technology is replacing the work of contract employees, leading to layoffs affecting 10% of the workforce. This is a direct use of AI causing harm to workers' employment status, which falls under violations of labor rights or significant harm to groups of people. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI use in the workplace.

Duolingo lays off 10% of workers because of AI

2024-01-10
detik Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (GPT-4 powered features) being used to automate content creation and improve efficiency, leading to layoffs of contract workers. However, the layoffs are a business consequence of AI adoption rather than a direct or indirect harm caused by AI malfunction, misuse, or violation of rights. There is no evidence of injury, rights violations, or other harms caused by AI. The event does not describe a plausible future harm either. It mainly informs about AI's role in changing workforce dynamics, which fits the definition of Complementary Information rather than an Incident or Hazard.

Duolingo lays off contract employees because of AI

2024-01-10
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models like GPT-4) in Duolingo's platform to automate content creation and other tasks, which directly led to layoffs of contract employees. The layoffs represent harm to labor rights, a category of harm under AI Incidents. The harm is realized (not just potential), as workers have already been laid off due to AI adoption. The event clearly links AI use to a labor rights violation through job displacement, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Duolingo becomes the latest casualty: is there a way to escape the AI 'invasion'?

2024-01-10
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Duolingo and other companies have laid off employees due to AI-driven efficiency improvements. This is a direct consequence of AI use impacting workers' employment, which falls under harm category (c) - violations of labor rights. The layoffs are a direct result of AI system use in automating tasks previously done by humans. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (job loss) to a group of people. The article also discusses broader economic impacts and responses, but the primary focus is on realized harm from AI use in workforce reduction.

In the wake of AI, 10% of Duolingo's contractors are now being cut

2024-01-10
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) in Duolingo's operations, which has indirectly led to layoffs of contractors, constituting harm to groups of people (economic harm and job loss). Although the company denies direct causation, the layoffs are at least partly attributed to AI reducing the need for human labor. This fits the definition of an AI Incident because the development and use of AI systems have directly or indirectly led to harm (economic/job harm) to contractors. The event is not merely a product announcement or general AI news, but a concrete case of harm linked to AI use.

Switching to AI, Duolingo lays off 10 percent of its contract employees

2024-01-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (generative AI with GPT-4) in Duolingo's operations, leading to layoffs of contract employees. This is a direct harm to labor rights and employment, fitting the definition of an AI Incident. The layoffs are not speculative or potential but have already occurred, and the AI system's role is pivotal in causing this harm. Hence, it is not merely a hazard or complementary information but an incident.

Duolingo dismisses 10 percent of its workers; artificial intelligence is working in their place

2024-01-10
hvg.hu
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (GPT-4 powered chatbot) in a real-world application (Duolingo app) that leads to layoffs of human employees. This is a direct consequence of AI system use impacting employment, which is a significant social and labor-related harm (violation of labor rights or economic harm to workers). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (job losses).

The most popular language-learning app is being drastically transformed

2024-01-10
Index.hu
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (GPT-4) in a popular app, but the article focuses on the company's strategic shift and workforce impact rather than any harm caused by the AI system. There is no mention or implication of injury, rights violations, or other harms resulting from the AI's use. Therefore, this is not an AI Incident or AI Hazard. It is also not primarily about responses to AI harms or governance. Hence, it fits best as Complementary Information, providing context on AI adoption and its organizational effects without describing harm or plausible harm.

It has begun: Duolingo is laying off workers because of artificial intelligence

2024-01-10
Noizz.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI system use (ChatGPT-4 integration) leading to employee layoffs, which is a direct consequence of AI adoption. However, the layoffs themselves, while significant socio-economic events, do not meet the criteria for an AI Incident, since no direct or indirect harm as defined (injury, rights violations, critical infrastructure disruption, etc.) is described. The event is not a hazard either, since the harm (job loss) has already occurred, but that harm is not among the types specified for AI Incident classification. The article also provides context on AI's impact on employment trends, making it Complementary Information rather than an Incident or Hazard.

Duolingo fired 10 percent of its workers because of artificial intelligence

2024-01-10
eduline.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to automate tasks previously done by humans, leading to layoffs. However, there is no report of harm such as injury, rights violations, or other direct or indirect harms caused by the AI system. The layoffs are a consequence of AI adoption but do not constitute an AI Incident under the definitions provided. There is also no indication of plausible future harm or risk from the AI system's use beyond workforce changes. The event provides context on AI's societal impact and corporate responses, fitting the definition of Complementary Information rather than Incident or Hazard.

Sg.hu - Duolingo is downsizing and will produce content with AI instead

2024-01-10
Sg.hu
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative AI models like GPT-4 and Duolingo's Birdbrain) being used to replace human labor in content creation and review. The layoffs of contract workers are a direct harm caused by the use of AI, as the company explicitly states that AI-generated content is replacing human-generated content, leading to job losses. This fits the definition of an AI Incident because the development and use of AI systems have directly led to harm to a group of people (workers losing their jobs).

Yet another round of layoffs caused by artificial intelligence

2024-01-09
Teknolojioku
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Duolingo uses AI tools to perform tasks formerly done by human contractors. The layoffs are a direct consequence of AI use, causing harm to the affected workers through job loss. This fits the definition of an AI Incident because the AI system's use has directly led to harm (job loss) to a group of people. The event is not merely a potential risk or a general update but a realized harm caused by AI deployment.

Researchers reveal the probability of artificial intelligence destroying humanity

2024-01-06
CHIP Online
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI system currently causing harm or malfunctioning, nor does it report an event where AI has directly or indirectly led to harm. Instead, it presents expert survey results about plausible future risks and benefits of AI development. Therefore, it describes a credible potential risk scenario but no realized harm. This fits the definition of an AI Hazard, as it concerns plausible future harm from AI development rather than an incident or complementary information about responses or updates.

Study: What are the odds that artificial intelligence will destroy humanity?

2024-01-07
T24
Why's our monitor labelling this an incident or hazard?
The article focuses on expert opinions and forecasts about possible future harms from AI, including existential risks and capability milestones. No current or past AI system use or malfunction causing harm is described. Therefore, it does not qualify as an AI Incident. Instead, it highlights plausible future risks and timelines, fitting the definition of an AI Hazard, as it discusses credible potential harms that could arise from AI development and deployment in the future.

Striking study: AI will take over most jobs by 2116 - Sözcü Gazetesi

2024-01-06
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of their future capabilities and societal impact, specifically regarding job displacement. However, it does not describe any actual harm or incident caused by AI, nor does it report on a specific event where AI has directly or indirectly caused harm. Instead, it presents expert forecasts about potential future scenarios, which fits the definition of an AI Hazard, as it plausibly could lead to significant harm (job loss) in the future. There is no indication of complementary information or unrelated content, so the classification as AI Hazard is appropriate.

Duolingo is cutting jobs; AI will replace people

2024-01-10
Aljazeera
Why's our monitor labelling this an incident or hazard?
The event describes the use of generative AI to replace some contract workers in content creation, which is a direct use of AI systems. However, there is no indication of any harm caused by this change, such as injury, rights violations, or other significant harms. The layoffs are a business decision related to AI adoption but do not describe an AI Incident or a plausible AI Hazard. The article mainly reports on the company's operational changes and AI integration, which fits the category of Complementary Information as it provides context on AI's impact on employment and company practices without describing harm or risk of harm.

In which language should we say goodbye? The "green owl" has found a replacement for humans

2024-01-10
B92
Why's our monitor labelling this an incident or hazard?
The article mentions the use of generative AI to assist in content creation, but there is no indication that this has caused any harm or incident. The AI is used as a tool to improve efficiency, and human employees continue to supervise the AI's output. There is no mention of any injury, rights violation, or other harm resulting from this shift. Therefore, this is a general update about AI adoption and workforce changes, which fits the category of Complementary Information rather than an Incident or Hazard.

THE COMPANY WHOSE APP SERBS LOVE HAS STARTED LAYING OFF WORKERS! In the future they want to rely on...

2024-01-10
espreso.co.rs
Why's our monitor labelling this an incident or hazard?
The article discusses Duolingo's use of generative AI to create content more efficiently, leading to layoffs of contract workers. However, this is a business decision related to AI adoption rather than an incident in which AI caused harm, or a hazard in which AI could plausibly cause harm. The layoffs are a consequence of increased AI use but do not constitute an AI Incident, as no harm caused by AI malfunction or misuse is described. Nor is this a hazard: the harm (job loss) is already realized, and it is a standard business impact of automation rather than an AI system malfunction or misuse. The main focus is the company's strategic shift and its implications, which fits best as Complementary Information about AI's societal impact and adoption.