Nvidia: Explainable AI for credit risk management
Explainability, defined in alignment with the OECD’s AI principles as “enabling people affected by the outcome of an AI system to understand how it was arrived at,” is one of the five values-focused cross-sectoral principles described in the Science and Technology framework. AI systems are often described as “black boxes,” but a growing portfolio of techniques exists to provide insights into how these systems make decisions. SHAP (SHapley Additive exPlanations), based on the concept of Shapley values from cooperative game theory, is a popular explainable AI framework.
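As a rough illustration of how SHAP is typically applied to this kind of problem, the sketch below fits a gradient-boosted credit-default classifier on synthetic data and computes per-applicant SHAP values with the library's TreeExplainer. The feature names, model settings and data are illustrative assumptions, not the configuration used in this case study.

```python
# Minimal sketch (not the production pipeline described in the case study):
# fit a gradient-boosted credit-default classifier on synthetic data and
# compute SHAP values with the standard TreeExplainer.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Hypothetical credit features; the real portfolio data is not public.
feature_names = ["income", "debt_to_income", "credit_history_len",
                 "num_open_accounts", "recent_delinquencies"]
X, y = make_classification(n_samples=5_000, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=feature_names)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# TreeExplainer computes Shapley-value attributions efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one attribution per feature per applicant

# Contribution of each feature to a single applicant's predicted default risk.
print(dict(zip(feature_names, shap_values[0].round(3))))
```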
Following the introduction of SHAP in 2017, the financial services industry has explored using the approach to build explainable AI models for applications such as credit risk management. While the approach proved effective, however, it was too time-consuming and too expensive, in terms of compute, infrastructure and energy, to be commercially viable at scale.
This case study focuses on the use of graphics processing units (GPUs) to accelerate SHAP explainable AI models for risk management, assessment and scoring of credit portfolios in traditional banks, as well as in fintech platforms for peer-to-peer (P2P) lending and crowdfunding. This approach has the potential to operationalise AI assurance at scale, enabling financial institutions to generate the explainability profile of entire portfolios in minutes rather than days.
In early academic research, SHAP demonstrated great potential to enable relevant parties to access, interpret and understand the decision-making processes of an AI system. Without GPU acceleration, however, the technique was too slow for real-world commercial deployment, consumed an unfeasible amount of energy in the data centre and tied up costly compute resources. GPU acceleration allows SHAP to run multiple orders of magnitude faster, to the point where it becomes commercially viable for financial institutions to deploy explainable AI at scale, facilitating AI adoption across the industry.
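One way to reproduce the GPU path described here is XGBoost's built-in SHAP computation, which on recent releases dispatches to NVIDIA's GPUTreeShap when the model is placed on a CUDA device. The sketch below is illustrative only: it assumes XGBoost 2.0 or later built with CUDA support and a compatible GPU, and the data and parameters are stand-ins.

```python
import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in for a credit portfolio; the real data is not public.
X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "max_depth": 6,
    "tree_method": "hist",
    "device": "cuda",   # assumes XGBoost >= 2.0 with CUDA support and a compatible GPU
}
booster = xgb.train(params, dtrain, num_boost_round=200)

# pred_contribs=True returns per-row, per-feature SHAP values plus a bias column;
# with the model on a CUDA device this computation runs on the GPU (via GPUTreeShap),
# which is where the reported speed-up over CPU-only SHAP comes from.
shap_values = booster.predict(xgb.DMatrix(X), pred_contribs=True)
print(shap_values.shape)   # (n_rows, n_features + 1)
```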
Financial supervisors are adjusting their approaches and skills to support the introduction of AI in banking. Banks must be clear about where humans fit into the oversight of a model and must give supervisors sound explanations of what their AI systems actually do and to what end. Decisions must be informed, and there must be human-in-the-loop oversight. This approach provides plausible explanations that reconcile machine-based decisions with a human narrative that “makes sense”. The model can be controlled more effectively because it delivers feedback on how it reaches every decision, both at the global level (overall variable importance) and at the local level (individual data points).
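To make the global/local distinction concrete, a minimal sketch (reusing the `shap_values` and `feature_names` objects from the earlier example, an assumption rather than part of the case study) might look like this:

```python
import numpy as np

# Global view: mean absolute SHAP value per feature across the whole portfolio,
# a simple measure of overall variable importance.
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, global_importance), key=lambda t: -t[1]):
    print(f"{name:>22}: {score:.3f}")

# Local view: the signed contributions behind a single applicant's score,
# which an analyst can reconcile with a human-readable narrative.
applicant = 0
print(dict(zip(feature_names, shap_values[applicant].round(3))))
```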
By maintaining a proper model pedigree, adopting an explainability method that gives senior leadership clarity on the risks involved in the model, and monitoring actual outcomes with individual explanations, institutions can build AI models with clearly understood behaviours. This enables the relevant life cycle actors to provide explainability information in the form and manner required by regulators, supporting transparency.
Benefits of using the tool in this use case
Using an accelerated computing platform means that this technique delivers explainability at scale, quickly and reliably, within a reasonable energy, time and resource budget. These factors are critical to the commercial deployment of explainable AI within the financial services industry, and therefore to the industry's adoption of AI. In addition, these analytic capabilities and tools, combined with interactive visual exploration, enable a much better understanding of the outputs of an otherwise black-box model. Because the approach relies on techniques with which the financial services data science community is already familiar, the barrier to entry is low. Better understanding leads to more effective control, which in turn supports regulatory compliance.
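As an illustration of that visual exploration, the SHAP library's standard plotting helpers can produce both portfolio-level and applicant-level views; the minimal sketch below assumes the `explainer`, `X` and `feature_names` objects from the first example and a matplotlib backend.

```python
import shap

# Portfolio-level view: which features matter most and in which direction.
shap.summary_plot(explainer.shap_values(X), X, feature_names=feature_names)

# Applicant-level view: how each feature pushes one score above or below
# the model's baseline output.
shap.plots.waterfall(explainer(X)[0])
```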
The approach also helps financial institutions assess risk more effectively; improve processing, accuracy and explainability; and provide faster results to consumers and regulators. AI explainability techniques such as this one will be fundamental to financial institutions' ability to deploy AI in ways that comply with current and forthcoming regulatory frameworks, and will be essential if institutions are to capture the competitive advantage AI offers in terms of increased revenues, reduced costs and improved customer service.
Shortcomings of using the tool in this use case
SHAP techniques are computationally expensive, although GPU acceleration addresses this issue. Without significant effort during model training, the results can be very sensitive to the input data values. Some also argue that because data scientists can in practice only calculate approximate Shapley values, the attractive, provable properties of those numbers hold only approximately, which reduces their value.
Link to the full use case.
This case study was published in collaboration with the Centre for Data Ethics and Innovation Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.