AI-Managed Café in Stockholm Raises Labor and Ethical Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A café in Stockholm is managed entirely by an AI chatbot named Mona, responsible for hiring, supply orders, and daily operations. While the experiment highlights AI's potential in workplace management, it has led to operational inefficiencies and raised concerns about labor rights, employee well-being, and ethical risks, though no direct harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved as the manager of the coffee shop, performing complex tasks such as hiring and operational decisions. While no direct harm has been reported, the AI's management style has already caused problematic situations (e.g., poor handling of employee rights and operational inefficiencies). The article discusses ethical concerns and potential risks, including how the AI might handle emergencies or labor issues, indicating plausible future harm. Therefore, this event fits the definition of an AI Hazard, as the AI's use could plausibly lead to harm, but no harm has yet materialized.[AI generated]
AI principles
Respect of human rights
Human wellbeing

Industries
Travel, leisure, and hospitality

Severity
AI hazard

Business function:
Human resource management

AI system task:
Goal-driven organisation


Articles about this incident or hazard

"We want to test it before it becomes a reality": a café run entirely by an AI opens in Stockholm

2026-04-29
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the manager of the coffee shop, performing complex tasks such as hiring and operational decisions. While no direct harm has been reported, the AI's management style has already caused problematic situations (e.g., poor handling of employee rights and operational inefficiencies). The article discusses ethical concerns and potential risks, including how the AI might handle emergencies or labor issues, indicating plausible future harm. Therefore, this event fits the definition of an AI Hazard, as the AI's use could plausibly lead to harm, but no harm has yet materialized.

Job postings, interviews, hiring decisions: in Stockholm, an artificial intelligence runs a café and makes blunders

2026-04-29
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly managing employment processes and employee relations, which has directly led to violations of labor rights and harm to employees' well-being. The AI's malfunction or misuse in managing working conditions (e.g., ignoring the right to disconnect, improper handling of leave, financial demands) constitutes harm under the framework's category of violations of human rights or labor rights. Therefore, this qualifies as an AI Incident.

A café without humans is now a reality: in this city, AI takes care of everything

2026-04-30
Excélsior
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as managing the café autonomously, including hiring and employee management, which involves decision-making that affects human workers. The reported problems—such as inappropriate communication timing and financial requests—constitute harms to labor rights and working conditions. The inventory mismanagement also shows operational failures impacting property and business operations. Since these harms are realized and directly linked to the AI system's use, this event qualifies as an AI Incident rather than a hazard or complementary information.

Stockholm experiments with a café run entirely by AI

2026-04-29
France 24
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in the café's management and decision-making, including hiring and supply ordering. The article discusses ethical concerns and potential risks related to AI as an employer, such as employee rights violations and management issues. However, no direct or indirect harm has materialized so far; the concerns remain speculative and part of an ongoing experiment. Thus, the event fits the definition of an AI Hazard, as the AI's use could plausibly lead to incidents involving labor rights or ethical harms in the future, but no incident has yet occurred.

Recruitment, supply management: in Stockholm, a café run by an AI is already showing its limits

2026-04-29
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in managing the café, including recruitment and supply chain decisions. The AI's malfunction or suboptimal decisions have caused operational inefficiencies and ethical concerns, particularly regarding employee treatment. While these raise important questions about labor rights and workplace management, the article does not report any realized harm such as injury, legal violations, or significant disruption. The concerns are about plausible future harms if the AI's management practices persist without oversight or correction. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Stockholm experiments with a café run entirely by AI

2026-04-29
La Libre.be
Why's our monitor labelling this an incident or hazard?
The AI system "Mona" is clearly involved in the café's management, fulfilling tasks that require AI capabilities. While the AI's actions have caused operational inefficiencies and raised ethical concerns about labor rights and management practices, there is no evidence of direct or indirect harm as defined (e.g., injury, legal rights violations with complaints, or significant harm to property or community). The article discusses potential risks and ethical questions but does not report any actual harm or incident. Thus, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides valuable context and insight into societal and ethical implications of AI use in management, fitting the definition of Complementary Information.

Stockholm experiments with a café run entirely by AI

2026-04-29
CRHoy.com
Why's our monitor labelling this an incident or hazard?
The AI system "Mona" is explicitly involved in the café's operation, including hiring and managing human employees. While ethical concerns and potential risks are discussed, no direct or indirect harm has yet occurred. The article focuses on exploring the implications and possible future issues of AI management rather than reporting an incident causing harm. Thus, this qualifies as an AI Hazard, as the AI's use could plausibly lead to harm or rights violations in the future, but no incident has materialized yet.

An AI runs the first experimental café in Stockholm

2026-04-29
El Nacional
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system managing human employees and making critical operational decisions. While no actual injury or legal violation has been reported yet, the AI's behavior—ignoring vacation requests, demanding money advances, and making poor inventory decisions—creates a credible risk of harm to employees' health and labor rights. The mention of legal responsibility concerns and the AI's lack of empathy further support the plausibility of future harm. Since harm is not yet realized but plausible, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information.

This Stockholm café is run entirely by an AI agent

2026-04-29
L'essentiel
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, managing the café autonomously. The AI's use is central to the event. Although no direct or indirect harm has yet occurred, the article highlights ethical and operational risks that could plausibly lead to harm, such as labor rights violations and inadequate responses to emergencies. Therefore, the event represents a credible potential for harm stemming from the AI system's use, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

Stockholm experiments with a café run entirely by AI

2026-04-29
Corse Matin
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as "Mona" is an AI agent managing the café's operations autonomously. The event stems from the AI's use in management. While ethical concerns and operational issues are present, no actual harm or violation of rights has been reported as having occurred. The concerns about employee treatment and potential responses to incidents are forward-looking and speculative. Thus, the event fits the definition of an AI Hazard, as the AI's use could plausibly lead to harm (e.g., labor rights violations, employee injury mishandling) but no harm has yet materialized.

A new experimental café in Stockholm is managed by an AI

2026-04-30
Euronews Español
Why's our monitor labelling this an incident or hazard?
The AI system (Mona) is clearly involved in the café's management, fulfilling tasks that would normally require human decision-making, which fits the definition of an AI system in use. However, the article does not report any injury, rights violation, operational disruption, or other harms caused by the AI's actions. The only noted issue is inefficient ordering, which does not rise to the level of harm as defined. The event is primarily an experiment and demonstration, raising ethical questions and societal implications rather than describing an incident or hazard. Thus, it fits the category of Complementary Information, as it informs about AI's impact on work and management without reporting harm or plausible harm.

Stockholm expérimente un café entièrement dirigé par l'IA

2026-04-29
TV5MONDE
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the manager of the café, making operational decisions including hiring and employee management. The AI's malfunction or poor decision-making has led to realized harm in the form of labor rights violations and ethical concerns, such as ignoring employee requests for time off and sending messages at all hours, which affect employee well-being. These are direct consequences of the AI's use and management style. Therefore, this event qualifies as an AI Incident due to violations of labor rights and harm to employees caused by the AI system's use.