AI Accent Masking in Canadian Call Centres Sparks Transparency and Labor Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Telus Digital has deployed AI technology from Tomato.ai to mask the accents of offshore call centre workers, raising concerns among Canadian unions and leaders about customer deception, lack of transparency, and potential job losses. The AI alters speech in real time, potentially misleading customers about agent locations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system is explicitly described as being used to alter accents in real time, which clearly constitutes AI system involvement. The system has already been used internally and possibly in customer interactions, raising concerns about deception and lack of transparency that can be considered a violation of rights or obligations owed to consumers. Although no physical harm or direct legal violation is reported, altering accents to mask agent identity can be seen as a breach of consumer rights and trust, constituting harm to communities or individuals. This event therefore meets the criteria for an AI Incident due to indirect harm caused by the AI system's use.[AI generated]
AI principles
Transparency & explainability, Accountability

Industries
Consumer services

Affected stakeholders
Workers, Consumers

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Content generation


Articles about this incident or hazard

AI 'accent masking' at overseas call centres sparks union backlash in Canada | Globalnews.ca

2026-05-06
Global News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to alter accents in call centres, indicating AI system involvement. The concerns focus on potential harms including misleading customers and job losses, which are plausible harms related to labor rights and community impact. Since no actual harm or incident is reported, and the concerns are about possible future effects, the event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks of the AI system's use, not on responses or updates to past incidents. It is not unrelated because AI involvement and potential harm are central to the discussion.
Telus using AI to alter the accents of customer-service agents

2026-05-05
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as being used to alter accents in real time, which clearly constitutes AI system involvement. The system has already been used internally and possibly in customer interactions, raising concerns about deception and lack of transparency that can be considered a violation of rights or obligations owed to consumers. Although no physical harm or direct legal violation is reported, altering accents to mask agent identity can be seen as a breach of consumer rights and trust, constituting harm to communities or individuals. This event therefore meets the criteria for an AI Incident due to indirect harm caused by the AI system's use.
Telus Digital using AI to mask accents of offshore workers

2026-05-06
MobileSyrup
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned and is used in live customer service calls to mask accents, a direct use of AI. The harm is indirect but significant: it involves deception of customers (a violation of rights) and labor-related harms from outsourcing and layoffs facilitated by this technology. The AI's role is pivotal in enabling the masking and thus the potential harm. The event does not merely describe potential future harm but an ongoing use with reported consequences, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Accent-smoothing AI tools conceal customer-service outsourcing, unions allege - The Logic

2026-05-06
The Logic
Why's our monitor labelling this an incident or hazard?
The AI system (accent-smoothing speech enhancement) is explicitly mentioned and used in the context of call centres. The unions allege that this AI use conceals outsourcing, which relates to labor rights and transparency issues. However, the article does not document a realized harm such as a legal violation or injury, only an allegation and potential indirect harm. Therefore, this event is best classified as Complementary Information, as it provides context and updates on AI use and its societal implications without confirming an AI Incident or AI Hazard.
Telus Faces Scrutiny Over Artificial Intelligence Tool That Changes Call Centre Accents

2026-05-06
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The article describes an AI system actively used by Telus to alter accents in call centre interactions, which qualifies as AI system involvement. However, there is no report of actual harm such as injury, rights violations, or operational disruption. The concerns raised are about transparency and customer trust, which are important but not confirmed harms. The article focuses on the debate and scrutiny around the AI tool's use, including calls for informing customers and regulatory considerations. This fits the definition of Complementary Information, as it provides context and societal/governance responses to AI use rather than reporting a specific AI Incident or AI Hazard.
AI 'Accent Masking' at Overseas Call Centres Sparks Union Backlash in Canada

2026-05-06
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (accent masking technology) in real-world call centre operations, which directly affects customers and workers. The AI's role in altering speech can mislead customers about who they are speaking to, which can be considered a violation of transparency and potentially a breach of consumer rights. The technology's use is also linked to concerns about job losses in Canada, indicating harm to communities and labor rights. Since the AI system's use has already occurred and is causing these harms, this qualifies as an AI Incident rather than a hazard or complementary information.
Could AI replace the person helping you on the phone? | CBC News

2026-05-08
CBC News
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (an AI co-pilot assisting call centre agents) and discusses its use and impact on workers, including monitoring and potential job displacement. However, it does not report any realized harm, such as job losses directly caused by AI or violations of rights. The concerns expressed are about potential future impacts and workplace anxiety, which are important but do not constitute a direct or indirect AI Incident or a clear AI Hazard. The article also provides governance and legal context on AI surveillance and worker protections. It therefore fits the definition of Complementary Information, offering supporting context and societal response to AI deployment in the workplace.
As AI creeps into telecoms, call centre agents worry they'll be replaced

2026-05-08
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI co-pilots) used in call centres to assist agents, indicating AI system involvement. The concerns raised by employees and unions about job displacement and AI monitoring relate to the use of these systems. However, the article does not report actual job losses or legal violations directly caused by AI use; it reports fears and potential risks. The event therefore does not meet the threshold for an AI Incident, which requires realized harm. Instead, it represents a credible risk of future harm (job loss, labor rights violations, privacy issues) from AI deployment, fitting the definition of an AI Hazard. The article also includes company statements and union reactions, but these contextualize the hazard rather than report a resolved incident or governance response, so it is not Complementary Information. Hence, the classification is AI Hazard.