CAN/DGSI 101: Ethical Design and Use of Artificial Intelligence by Small and Medium Organizations
This Standard specifies minimum requirements for incorporating ethics into the design and use of artificial intelligence by small and medium organizations, which typically have fewer than 500 employees.
This Standard is limited to artificial intelligence (AI) that uses machine learning for automated decisions, including generative AI. It covers both internally developed tools and third-party tools deployed for internal use by the organization.
This Standard provides a framework and process to help small and medium organizations align with international and Canadian norms and guidance on the governance of automated decision systems (including the OECD’s AI Principles, the Treasury Board of Canada Secretariat’s Directive on Automated Decision-Making, and the NIST AI Risk Management Framework).
This Standard applies to all organizations, including public and private companies, government entities, and not-for-profit organizations.
NOTE: Organizations with more than 500 employees may also benefit from applying this Standard for the ethical design and use of artificial intelligence in their organizations.