These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Model AI Governance Framework for Generative AI
IMDA's and AI Verify Foundation's Model AI Governance Framework for Generative AI (MGF-Gen AI) aims to address the challenges and risks associated with generative AI while facilitating innovation, by setting out a systematic and balanced approach. The framework builds on Singapore's previous work, including its Model AI Governance Framework, and comprises nine dimensions.
These are:
- Accountability: putting in place the right incentive structure for different players in the AI system development life cycle to be responsible to end-users.
- Data: ensuring data quality and addressing potentially contentious training data in a pragmatic way, as data is core to model development.
- Trusted development and deployment: enhancing transparency around baseline safety and hygiene measures based on industry best practices in development, evaluation and disclosure.
- Incident reporting: implementing an incident management system for timely notification, remediation and continuous improvements, as no AI system is foolproof.
- Testing and assurance: providing external validation and added trust through third-party testing, and developing common AI testing standards for consistency.
- Security: addressing new threat vectors that arise through generative AI models.
- Content provenance: providing transparency about where content comes from, as a useful signal for end-users.
- Safety and alignment R&D: accelerating R&D through global cooperation among AI Safety Institutes to improve model alignment with human intention and values.
- AI for public good: responsible AI includes harnessing AI to benefit the public by democratising access, improving public sector adoption, upskilling workers and developing AI systems sustainably.
Through these nine dimensions, the framework aims to create a trusted environment in which end-users can use generative AI safely, while preserving space for cutting-edge innovation. It also aims to facilitate conversations among stakeholders, including policymakers, industry and the research community, on an international scale.
About the tool
Tags:
- ai incidents
- collaborative governance
- data governance
- data
- genAI