These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
RAISE Benchmarks
The initial series of RAISE Benchmarks serves three purposes:
RAISE Corporate AI Policy Benchmark: This benchmark evaluates the comprehensiveness of a company's AI policies by measuring their scope and alignment with RAI Institute's model enterprise AI policy, which is based on the NIST AI Risk Management Framework (AI RMF). Today, RAI Institute is releasing the methodology, FAQs and an initial demo of the RAISE Policy Benchmark to guide organizations in framing their AI policies effectively to include new trustworthiness and risk considerations from generative AI and large language models (LLMs).
RAISE LLM Hallucinations Benchmark: Organizations building AI-powered products and solutions often grapple with hallucinations, a common failure mode in LLMs that produces unexpected, incorrect or misleading outputs. This benchmark helps organizations using LLMs, whether commercially available, open source or proprietary, assess the risk of hallucinations and take proactive measures to minimize them.
RAISE Vendor Alignment Benchmark: This benchmark assesses whether the policies of supplier organizations align with the ethical and responsible AI policies of their purchasing counterparts. It ensures that vendors' AI practices harmonize with the values and expectations of the businesses they serve.
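The RAISE methodology itself is not detailed here, but as a rough illustration of what hallucination-risk checking can involve, the hypothetical sketch below flags sentences in a model's answer whose content words are largely absent from a trusted reference text. This word-overlap heuristic is a deliberately naive assumption for illustration, not the benchmark's actual method:

```python
import re


def flag_unsupported_sentences(answer: str, source: str, threshold: float = 0.5):
    """Naive hallucination heuristic: flag answer sentences whose content
    words are mostly missing from the reference source text."""
    source_words = set(re.findall(r"[a-z0-9]+", source.lower()))
    flagged = []
    # Split the answer into sentences on terminal punctuation.
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = re.findall(r"[a-z0-9]+", sentence.lower())
        if not words:
            continue
        # Fraction of the sentence's words that appear in the source.
        support = sum(w in source_words for w in words) / len(words)
        if support < threshold:
            flagged.append(sentence)
    return flagged


source = "The Eiffel Tower is in Paris. It was completed in 1889."
answer = "The Eiffel Tower is in Paris. It was built by Roman engineers in 1450."
print(flag_unsupported_sentences(answer, source))
# → ['It was built by Roman engineers in 1450.']
```

Production-grade approaches typically go well beyond lexical overlap (e.g. entailment models or retrieval-grounded fact checking), but the shape is the same: compare generated claims against trusted evidence and surface the unsupported ones.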
“Cultivating trust in AI is not a luxury, it's a necessity — one that requires diligence, flexibility and technical prowess. After leaving IBM Watson as general manager, I became intensely focused on creating an independent, community-driven non-profit to make responsible AI adoption achievable for enterprises at any level. I see this as my life’s mission. The RAISE Benchmarks are a testament to RAI Institute’s dedication to creating a more accountable AI ecosystem, where trust is sacrosanct,” said Manoj Saxena, founder and executive chairman of Responsible AI Institute.
The first three RAISE Benchmarks are currently available in private preview and will be generally available to RAI Institute Members in Q2 2024, after further refinement based on feedback from the community and from businesses piloting the benchmarks throughout the first half of 2024. The RAISE Benchmarks will join the RAI Institute's assessments, education modules and work on policy, regulation and standards in supporting members and the wider ecosystem.
The RAISE Benchmarks add unique value because they are vendor- and technology-agnostic, developed in a community-driven manner, applicable to both classic and generative AI, aligned with global AI standards, regionally adaptable, and continuously improving.