Credo AI Governance Platform: Reinsurance Provider Algorithmic Bias Assessment and Reporting
A global provider of reinsurance used Credo AI’s platform to produce standardised algorithmic bias reports to meet new regulatory requirements and customer requests.
The team needed a way to streamline and standardise its AI risk and compliance assessment process, with the goal of continuing to showcase responsibility and governance to customers and regulators while significantly reducing the burden of governance reporting on technical teams. With Credo AI’s Responsible AI Platform, the company found a solution that met its needs.
The insurance industry, like many regulated industries, is facing increasing scrutiny around its application of AI/ML to sensitive use cases like risk prediction and fraud detection. In particular, policymakers and customers have focused on algorithmic fairness as a critical issue for insurance and reinsurance companies to address as they apply machine learning models to these areas. These concerns are reflected in regulations like Colorado’s SB21-169, which prohibits insurers from using any “algorithm or predictive model” that discriminates against an individual based on protected attributes like race and sex.
Using Credo AI's platform allowed the reinsurance provider to systematically map, measure, and evaluate its AI models for bias against its internal risk and compliance assessment policies and applicable regulatory requirements.
Benefits of using the tool in this use case
Prior to using Credo AI, the compliance assessment process was managed in Excel, and assembling a risk and compliance report placed a heavy burden on technical development teams. By implementing Credo AI, the global reinsurance company reduced the time it takes for an ML model to move through its risk and compliance assessment process, while still producing high-quality risk and compliance reports to share with customers and regulators.
The reinsurance company worked with Credo AI to develop a set of custom Policy Packs that operationalised the company's internal risk and compliance assessment policies. This allowed the governance team to manage and track progress through the risk and compliance assessment process in one place, rather than navigating many separate Excel spreadsheets and Word documents.
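As a purely hypothetical illustration (this is not Credo AI's actual Policy Pack schema, and every field name below is invented), a policy-pack-style requirement can be thought of as a structured record that a governance team tracks per model rather than per spreadsheet:

```python
# Hypothetical sketch: an internal fairness requirement expressed as structured
# data so it can be tracked per model. Not Credo AI's actual Policy Pack format.
fairness_requirement = {
    "policy_id": "FAIR-001",              # invented identifier
    "description": "Selection-rate disparity between protected groups",
    "metric": "demographic_parity_ratio",
    "threshold": 0.80,                     # illustrative "four-fifths" style threshold
    "applies_to": ["risk_prediction", "fraud_detection"],
    "evidence_required": ["metric_value", "dataset_version", "model_version"],
    "status": "in_review",                 # tracked by the governance team
}
```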
Technical teams no longer needed to gather assessment requirements from the governance team, nor did they need to manually write code to run standard bias and performance assessments; technical evidence for governance could instead be generated without manual effort.
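To make the "standard bias and performance assessments" concrete, the sketch below shows the kind of check a technical team might otherwise script by hand. It uses the open-source fairlearn and scikit-learn libraries as stand-ins rather than Credo AI's own tooling, and the file and column names (scored_policies.csv, label, prediction, sex) are assumptions for illustration only.

```python
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_ratio

# Assumed inputs: ground-truth labels, model predictions, and a protected attribute.
df = pd.read_csv("scored_policies.csv")  # hypothetical file
y_true, y_pred, group = df["label"], df["prediction"], df["sex"]

# Accuracy and selection rate broken down by protected group.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)

# A single summary number that can be compared against an internal policy threshold.
dpr = demographic_parity_ratio(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity ratio: {dpr:.2f}")
```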
Shortcomings of using the tool in this use case
Organisations need access to protected demographic data to run bias tests effectively and, ultimately, to comply with certain anti-discrimination regulations, such as New York City's Local Law 144. However, the collection, use, and retention of demographic data can itself be in tension with privacy laws and regulations.
This provider was only able to compile the necessary demographic data on a quarterly basis, so it could only run bias tests retrospectively on data that was up to a quarter old, rather than testing for bias in real time.
Approaches that can help overcome these limitations of self-reported demographic data include using human-annotated demographic data, which relies on a human annotator's best perception of an individual's demographic attributes, or machine-inferred demographic data, which relies on algorithmically inferring those attributes. However, both of these alternatives can present additional risks, including exacerbating biases.
This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques.