Auditors have a role in Canadian AI governance initiatives, but that role needs clarification

Private auditors provide third-party conformity assessments against principles, practices, standards, and legislation. Big names in this field include accounting firms like EY and Deloitte, and technical inspection firms like Bureau Veritas and Intertek. These firms already play a significant role in conformity assessment, from environmental standards to food safety, and they, along with other auditing firms, are now poised to play a similarly significant role in Canadian AI governance. This potential role merits careful scrutiny. But first, some background on Canadian and international AI governance initiatives and the roles auditors play in them.

On a national level, Canada’s AI governance initiatives began in 2017 with the launch of the first phase of its National AI Strategy. In 2019, Canada launched the first version of its Directive on Automated Decision-Making (DADM). In 2022, Canada launched the National AI Strategy’s second phase and proposed the AI and Data Act (AIDA).

On an international level, the AI governance field has seen several developments. Initial principles, like the OECD’s AI Principles, have been followed by best practices and standards, such as ISO/IEC JTC 1/SC 42’s AI standardisation efforts. Some actors, such as the Responsible AI Institute (RAII), CertAI, and D-seal, have developed conformity assessments and certifications to assess AI systems against these principles, best practices, and standards.

Now, governments at all levels are developing laws and regulations: Canada’s AIDA, the EU’s AI Act, and New York City’s (NYC) Automated Employment Decision Tools policy, to name just a few.

Private auditors will likely play a key role in conducting conformity assessments against these principles, best practices, standards, and legislation. In some cases, like in New York City, they already do. Yet few initiatives define what these auditors must assess or how their qualifications are determined, and Canada’s AI governance initiatives are no exception.

Auditors have roles in Canadian AI governance initiatives, but those roles lack precision

The Standards Council of Canada (SCC) was given $8.6 million “to advance the development and adoption of standards and a conformity assessment program related to AI” to uphold the National AI Strategy’s standards pillar. While the strategy does not mention auditors, the SCC recently announced a responsible AI pilot project between EY and ATB Financial to contribute to its AI standardisation efforts. EY will use RAII’s responsible AI conformity assessment to audit ATB Financial’s AI practices. This pilot seems geared towards testing the standards rather than auditors’ qualifications.

The DADM mentions auditors in two places: it states that “the Government of Canada retains the right to authorize external parties to review and audit these components as necessary,” and it requires AI systems classified as level 3 (high-risk decisions that often lead to difficult-to-reverse and ongoing impacts) or level 4 (very high-risk decisions that often lead to irreversible and perpetual impacts) to provide the “results of any reviews or audits.” Like the National AI Strategy and the resulting SCC pilot, the DADM does not explicitly define who these external parties are, how their reviews and audits are conducted, or how their qualifications are determined.

The AIDA mentions auditors several times. Section 7 requires those responsible for an AI system to assess whether it is a high-impact system (the criteria for high-impact systems will be established in forthcoming regulations). If section 7’s assessment identifies a high-impact system, those responsible for it must “establish measures to identify, assess and mitigate the risks of harm or biased output” under section 8 and “monitor compliance with these established measures and their effectiveness” under section 9. Under section 15, if the Minister has reasonable grounds to believe that any of sections 6 through 14 has been contravened, they may require those responsible for the AI system to either “(a) conduct an audit with respect to the possible contravention or (b) engage the services of an independent auditor to conduct the audit.” In either case, the audit must be conducted by someone who meets the qualifications prescribed by regulation under section 36(f). Section 36(f)’s regulations, like those defining high-impact systems, are forthcoming, so we do not yet know how auditors will be engaged or what qualifications will be expected of them.

Definitions in other AI governance initiatives could provide guidance for Canada’s auditors

Other jurisdictions and AI governance standards may give some insight into the role auditors will play in Canadian AI governance.

NYC’s Automated Employment Decision Tools amendment requires that all automated employment decision tools, as defined in section 20-870, be “the subject of a bias audit” and that “a summary of the results of the most recent bias audit” be publicly provided. These bias audits are “impartial evaluations by an independent auditor.” While this amendment does not define how these bias audits are conducted or how independent auditors’ qualifications are determined, it does define the auditors’ role.

ISO/IEC JTC 1/SC 42’s AI Management System standard (AIMS) offers guidance on how Canada could clarify AI auditors’ role in its AI governance initiatives. It sets requirements both for managing AI systems and for those performing audits and certifications. These requirements clarify auditors’ roles and assessment requirements and specify how their qualifications are determined. Among other things, the standard also specifies how to select auditors and how to make certification decisions, which typically follow a positive audit result. While this standard is still a draft, it offers the most comprehensive clarification of the auditors’ role, their assessment requirements, and how their qualifications are determined.

Finally, the EU’s draft AI Act requires third-party conformity assessment bodies, or auditors, to complete conformity assessments, or audits. These audits are conducted against management system standards like ISO’s AIMS to determine an AI system’s risk level. If an AI system is classified as high-risk, its providers must notify the designated authorities of its risk level before the system enters the market. Further, the Act outlines conformity assessment requirements, such as when assessments occur and what happens when an assessment classifies an AI system as high-risk. Still, it sets no requirements for how these audits are conducted or how an auditor’s qualifications are determined. Future drafts of the Act could reference ISO’s AIMS and its corresponding audit and certification body requirements to clarify both.

International standards bring consistency. Clear roles bring trust

Consistent with the competency requirements set for other auditing fields, Canada should put auditors’ qualifications at the centre of its continued efforts to develop AI governance.

Future AIDA drafts could reference ISO’s AIMS and its corresponding audit and certification body requirements to clarify the AI auditors’ role, just as future drafts of the EU AI Act may do. Referencing these standards would provide consistent requirements for conformity assessments, auditors, and organisations; international standards aim to provide market consistency, and citing them in national AI governance efforts furthers that aim. The SCC should also consider using its AI standardisation funding to develop an accreditation program or training material that qualifies Canadian auditors to perform AI conformity assessments, allowing audit firms to build competency against consistent training requirements.

There are several excellent reasons to turn to private auditors as an independent check on AI systems’ potential risks and harms. But it is critical that governments set thresholds for auditor competencies and professional conduct to ensure public trust in these future AI governance developments.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.