Responsible AI Governance Framework for boards
Introduction
Anekanta®’s Responsible AI Governance Framework (“RAI framework”) was originally developed in 2020 as an ethical framework for AI implementation, prior to the emergence of generative AI.
This updated RAI framework (2024) provides Boards with signposts to high-level areas of responsibility and accountability through a checklist of twelve principles which could sit at the heart of an AI governance policy. It is for Boards who wish to start their AI journey, or for those who recognize that AI governance may be mission-critical since the emergence of generative AI (Gen AI) and large language models (LLMs).
Gen AI accelerated many Boards’ concerns due to the rapid adoption of these tools within their organizations without any guardrails. A high-profile example arose when well-meaning functions in a development business enlisted the help of Gen AI without fully appreciating the risks to intellectual property, to people and their sensitive data, and to the protection of trade secrets.
Some Boards have taken swift action to prohibit the use of public Gen AI and LLMs by their employees. This is a logical step, reflecting a buying community attempting to mitigate risk in the absence of regulation. However, confidence is growing with the emergence of use-case guidance produced by companies like Anekanta®, a growing community of specialized open-source and closed models which may be run in a private cloud, and the integration of Gen AI into off-the-shelf business tools.
About the framework
The twelve points contained in this RAI framework represent a selection of the high-level areas of consideration a Board may include in its AI policy. Note that organizations providing or deploying AI systems, or using their decisions, within the European Union must meet legal requirements.
This RAI framework is not an exhaustive list of actions; instead, it provides a starting point. Full implementation of Responsible AI within an organization requires engagement with all Responsible AI principles and the involvement of the entire organization developing or using AI systems, together with its stakeholders, e.g. supply chain and customers.
Responsible AI principles originate from the OECD AI Principles, which have become embedded in one form or another in every trustworthy AI guide, standard and pending regulation across Western democracies.
Due consideration should be given to Responsible AI at every stage of the AI system lifecycle because development and use may have a wide-reaching impact on fundamental rights. Under the EU AI Act, impact will be traceable through the supply chain back to its source, which could lead to the entire supply chain being held to account in court.
Additionally, new standards are emerging, e.g. ISO/IEC 42001 (AI Management System), which has been published and can be implemented now (February 2024). This is a certifiable management system for AI risk management and continual monitoring, control and governance. It will also contribute to the foundations of the harmonized standards planned to meet the EU AI Act’s high-risk requirements.
About the tool
Tags:
- ai governance
- ai compliance
- accountability