Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems
Advanced AI systems capable of generating content, such as ChatGPT, DALL·E 2, and Midjourney, have captured the world's attention. The general-purpose capabilities of these systems offer enormous potential for innovation across many fields, and they are already being adopted and put to use in a variety of contexts. These advanced systems can perform many different kinds of tasks, such as writing emails, answering complex questions, generating realistic images or videos, or writing software code.
While they have many benefits, advanced generative AI systems also carry a distinctly broad risk profile, owing to the scope of the data on which they are trained, the wide range of their potential uses, and the scale of their deployment. Systems made publicly available for many different uses can present risks to health and safety, can propagate bias, and carry the potential for broader societal impacts, particularly when used by malicious actors. For example, the capability to generate realistic images and video, or to impersonate the voices of real people, can enable deception at a scale that can damage important institutions, including democratic and criminal justice systems. These systems may also have important implications for individual privacy rights, as highlighted in the G7 Data Protection and Privacy Authorities' Statement on Generative AI.
Generative systems can also be adapted by organizations for specific uses – such as corporate knowledge management applications or customer service tools – which generally present a narrower range of risks. Even so, there are a number of steps that need to be taken to ensure that risks are appropriately identified and mitigated.
To address and mitigate these risks, signatories to this code commit to adopting the identified measures. The code identifies measures that should be applied, in advance of binding regulation pursuant to the Artificial Intelligence and Data Act, by all firms developing or managing the operations of a generative AI system with general-purpose capabilities, as well as additional measures that should be taken by firms developing or managing the operations of systems that are made widely available for use, and which are therefore subject to a wider range of potentially harmful or inappropriate use. Firms developing these systems and firms managing their operations have important and complementary roles, and they need to share relevant information to ensure that adverse impacts can be addressed by the appropriate firm.
While the framework outlined here is specific to advanced generative AI systems, many of the measures are broadly applicable to a range of high-impact AI systems and can be readily adapted by firms working across Canada's AI ecosystem. It is also important to note that this code does not in any way change existing legal obligations that firms may have – for example, under the Personal Information Protection and Electronic Documents Act.
In undertaking this voluntary commitment, developers and managers of advanced generative systems commit to working to achieve the following outcomes:
- Accountability – Firms understand their role with regard to the systems they develop or manage, put in place appropriate risk management systems, and share information with other firms as needed to avoid gaps.
- Safety – Systems are subject to risk assessments, and mitigations needed to ensure safe operation are put in place prior to deployment.
- Fairness and Equity – Potential impacts with regard to fairness and equity are assessed and addressed at different phases of development and deployment of the systems.
- Transparency – Sufficient information is published to allow consumers to make informed decisions and for experts to evaluate whether risks have been adequately addressed.
- Human Oversight and Monitoring – System use is monitored after deployment, and updates are implemented as needed to address any risks that materialize.
- Validity and Robustness – Systems operate as intended, are secure against cyber attacks, and their behaviour in response to the range of tasks or situations to which they are likely to be exposed is understood.
Signatories also commit to support the ongoing development of a robust, responsible AI ecosystem in Canada. This includes contributing to the development and application of standards, sharing information and best practices with other members of the AI ecosystem, collaborating with researchers working to advance responsible AI, and collaborating with other actors, including governments, to support public awareness and education on AI. Signatories also commit to develop and deploy AI systems in a manner that will drive inclusive and sustainable growth in Canada, including by prioritizing human rights, accessibility and environmental sustainability, and to harness the potential of AI to address the most pressing global challenges of our time.