Generative AI Framework for HMG
We have defined ten common principles to guide the safe, responsible and effective use of generative AI in government organisations. The white paper, A pro-innovation approach to AI regulation, sets out five principles to guide and inform AI development in all sectors. This framework builds on those principles to create ten core principles for generative AI use in government and public sector organisations.
Posters on each of the ten principles are available to display in your government organisation.
- Principle 1: You know what generative AI is and what its limitations are
- Principle 2: You use generative AI lawfully, ethically and responsibly
- Principle 3: You know how to keep generative AI tools secure
- Principle 4: You have meaningful human control at the right stage
- Principle 5: You understand how to manage the full generative AI lifecycle
- Principle 6: You use the right tool for the job
- Principle 7: You are open and collaborative
- Principle 8: You work with commercial colleagues from the start
- Principle 9: You have the skills and expertise needed to build and use generative AI
- Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place