Section 1 - Risk identification and evaluation
TELUS’ approach to AI risk management is focused on data governance, responsible AI practices, trust-building, and ethical considerations. TELUS has established an AI Policy for responsible governance and use of AI across the organization. This Policy requires that initiatives involving AI Systems use our Data Enablement Plan process to identify and review risk. TELUS views AI risks as multifaceted issues that require oversight at the executive level, policy development, and the establishment of best practices for responsible and ethical use of AI technologies.
TELUS has defined a Data Risk Management Policy and Framework used across the lifecycle of advanced AI systems to align stakeholders and support risk classification. The policy defines how TELUS identifies, assesses, treats and monitors risks relating to the use and management of data. The assessment and classification of risk under this Policy and Framework determine the level at which decisions around risk, including acceptance, are made, in alignment with the requirements for use, development and deployment of AI in our AI Policy.
TELUS is committed to using AI in a way that will extend our capabilities to give back to the broader community and contribute to a friendlier future. We believe that human passion to innovate, coupled with strong ethical AI design principles, can create a powerful and positive transformation of our society. TELUS uses, develops, deploys, procures and provisions AI Systems in accordance with our Data Principles (see https://www.telus.com/Trust), our legal obligations and our Code of Ethics and Conduct (see https://www.telus.com/en/about/policies-and-disclosures/code-of-ethics-and-conduct). For example, TELUS will not use AI to deploy subliminal or purposefully manipulative techniques that distort individuals’ abilities to make informed decisions, or exploit vulnerabilities of individuals or groups based on age, disability, or social/economic status in ways that could cause harm.
The AI Policy affirms TELUS’ dedication to developing, using and deploying AI technologies in a way that drives positive change, while managing risks and ensuring appropriate safeguards are in place. It requires that AI systems be used, developed, deployed, procured and provisioned in accordance with the Data Principles, legal obligations and Code of Ethics and Conduct noted above.
A key method for identification of risks across the AI lifecycle is through our Data Enablement Plan. The plan unifies our risk assessment processes for Responsible AI, Privacy Impact Assessment and Secure by Design, including cybersecurity, into a single touchpoint and improves our agility through in-business data stewardship.
Data Stewards are team members appointed by their business unit for their in-depth knowledge of their team’s data uses and the intended strategy for that data. To prepare them to support the process, they participate in a certification training program, which equips and empowers them to understand responsible use of AI through data privacy, security and governance. Across the AI lifecycle, business and technology teams are active participants in the identification of risk with the support of TELUS’ data governance processes.
AI models and systems are validated and verified before moving beyond the development stage with enhanced assessment and testing requirements identified through the enterprise Data Enablement Plan process. This provides consistency, reliability and alignment with TELUS’ commitment to Responsible AI. TELUS uses MLOps and LLMOps practices for ongoing governance of AI throughout the lifecycle.
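The pre-deployment validation described above can be pictured as a promotion gate: a model moves beyond the development stage only if every required evaluation meets the thresholds set during risk assessment. The sketch below is purely illustrative (not TELUS code); all metric names and thresholds are hypothetical assumptions.

```python
"""Illustrative promotion gate: block a model from advancing past
development unless all risk-assessment evaluations pass. Names and
thresholds are hypothetical."""

from dataclasses import dataclass


@dataclass
class EvalResult:
    metric: str
    value: float
    threshold: float
    higher_is_better: bool = True

    def passed(self) -> bool:
        # A metric passes when it is on the right side of its threshold.
        return (self.value >= self.threshold if self.higher_is_better
                else self.value <= self.threshold)


def promotion_gate(results: list[EvalResult]) -> bool:
    """Return True only if every required evaluation passes."""
    failures = [r for r in results if not r.passed()]
    for r in failures:
        print(f"BLOCKED: {r.metric}={r.value} vs threshold {r.threshold}")
    return not failures


results = [
    EvalResult("answer_accuracy", 0.93, 0.90),
    EvalResult("harmful_output_rate", 0.004, 0.01, higher_is_better=False),
]
print("promote:", promotion_gate(results))  # → promote: True
```

In an MLOps/LLMOps pipeline, a gate like this would typically run in CI so that governance checks are enforced automatically rather than relying on manual sign-off alone.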
As identified through our Data Enablement Plan, TELUS may use its cross-functional purple teaming approach for evaluation of an AI system. This is a collaborative method in which team members and experts from different teams, with varying backgrounds and levels of AI knowledge, work to identify weaknesses, vulnerabilities, and gaps in generative AI systems through adversarial testing and address them with relevant mitigations. See: https://www.fuelix.ai/post/how-to-bake-responsible-ai-into-generative-ai-deployments---go-purple
A report is compiled on the testing methodology, major findings, recommendations, and any remaining unmitigated issues or risks in order to address them with the team before implementation of the solution.
Our risk identification and evaluation methodologies include both quantitative techniques (e.g. query acceptance rate, rejection rate) and qualitative assessments to gauge the potential impacts of a given risk or vulnerability and understand the contexts in which they occur. For example, when using generative AI systems, risks and vulnerabilities are assessed using a purple team approach.
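The quantitative indicators mentioned above can be computed directly from an adversarial test log. The following sketch is hypothetical (not TELUS tooling); the log structure and field names are assumptions for illustration.

```python
"""Illustrative computation of query acceptance and rejection rates
from an adversarial test log. Record structure is hypothetical."""

# Each record notes whether the system answered the query or refused it.
test_log = [
    {"query": "benign question", "outcome": "accepted"},
    {"query": "prompt-injection attempt", "outcome": "rejected"},
    {"query": "jailbreak attempt", "outcome": "rejected"},
    {"query": "benign follow-up", "outcome": "accepted"},
]


def rate(log: list[dict], outcome: str) -> float:
    """Fraction of logged queries with the given outcome."""
    return sum(r["outcome"] == outcome for r in log) / len(log)


acceptance_rate = rate(test_log, "accepted")
rejection_rate = rate(test_log, "rejected")
print(f"acceptance={acceptance_rate:.2f} rejection={rejection_rate:.2f}")
# → acceptance=0.50 rejection=0.50
```

Rates like these flag anomalies at a glance; the qualitative side of the assessment then reviews individual transcripts to understand the context and potential impact behind each rejection or unexpected acceptance.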
The TELUS Purple Teaming approach is a collaborative method designed to identify weaknesses, vulnerabilities, and gaps in generative AI systems through adversarial testing and address them with relevant mitigations. In addition to participation from software developers, data scientists and AI engineers, it emphasizes the participation of diverse individuals with varying expertise and technical literacy to gain comprehensive insights into the system’s shortcomings and how real users may interact with the solution.
Reporting of AI incidents is required by our AI Policy, and reporting is accessible to any customer or individual through our support lines. Our AI systems are developed with feedback mechanisms for users to report incidents or vulnerabilities.
Using a risk-based approach, use cases are submitted for independent external tests and reviews. We work with third parties who have independent expertise and automated tooling to support this type of testing.
External parties can contact our support lines to report risks, incidents or vulnerabilities.
TELUS participates in forums to develop, advance and adopt shared standards for ensuring the trustworthiness of AI, including the Standards Council of Canada National AI & Data Governance Standards Collaborative, NIST AI Safety Consortium, Responsible AI Institute, IAPP AI Governance Global (Foundational Supporter), MILA - Quebec Artificial Intelligence Institute and Vector Institute.
We believe responsible use of AI requires input from a diverse set of voices. With a human-centred approach, we can build trust in our digital world and make a friendly future for all. We have engaged with academic and research institutions to help drive this research. Our annual TELUS AI Report shares the views of thousands of people on AI - their concerns, hopes and opinions about where this powerful technology should be headed. The report is made available publicly at: www.telus.com/ResponsibleAI
TELUS is proud to have been among the first (and the first telecom) in Canada to sign the Government of Canada’s voluntary code of conduct for generative AI (GenAI), which seeks transparent, equitable and responsible development of GenAI technology. https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems.


























