Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Mind Foundry: Using Continuous Metalearning to govern AI models used for fraud detection in insurance

Sep 11, 2023

Continuous Metalearning (CML) is an AI governance capability with three core objectives:

  1. Manage model risks in production, with the ability to visualise, interrogate and intervene to ensure the continued safe use of AI;
  2. Maintain model capabilities post-deployment, so that the model remains at least as performant as it was at deployment;
  3. Expand and augment model capabilities, optimising the model’s learning process so that it learns new patterns and trends in an interpretable and human-centred way.

We use this capability to identify, prioritise and investigate fraudulent claims within the insurance industry. Fraudulent claims drive up premiums for policy-holders and can cause large losses for insurers. Mind Foundry worked with Aioi Nissay Dowa Insurance (ANDE-UK) to understand the patterns of fraud and embed these into an AI solution that combats fraud more effectively. The model was deployed with the Continuous Metalearning capability, which allowed its risks to be governed effectively while the model continued to learn new types of fraud in production. This improved the quality of cases sent to triage and, ultimately, to investigation.

There is a lot of emphasis on ensuring that AI models are designed and developed responsibly, with many toolkits and ethical design processes targeting those specific stages of the life-cycle. There is less support for ensuring that a model continues to behave responsibly once it has been deployed. This is critical, as both AI models and the environments in which they operate change over time. CML addresses this through continual monitoring, continuous improvement, and transparency into the origins of the model. These principles are central to ensuring that an evolving, in-production model remains responsible.
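
To make the three objectives concrete, the sketch below shows, in Python, what a governance decision for a single labelled batch of claims might look like. It is a minimal illustration only: the thresholds, the govern_batch function and its inputs are hypothetical assumptions, not part of Mind Foundry's actual API.

    # Illustrative governance check for one labelled batch of claims.
    # All thresholds and names are hypothetical, not Mind Foundry's API.

    BASELINE_AUC = 0.90      # performance measured at deployment
    MAX_DEGRADATION = 0.05   # objective 1: intervene below this margin
    DRIFT_THRESHOLD = 0.30   # objective 3: new patterns worth learning

    def govern_batch(batch_auc: float, drift_score: float) -> str:
        """Decide what the governance layer should do after a batch."""
        # Objective 1: manage risk in production -- flag degradation
        # so a human can visualise, interrogate, and intervene.
        if batch_auc < BASELINE_AUC - MAX_DEGRADATION:
            return "intervene: route to human review"
        # Objective 3: expand capabilities -- retrain on new patterns,
        # with an interpretable, human-approved promotion step.
        if drift_score > DRIFT_THRESHOLD:
            return "retrain candidate and request reviewer sign-off"
        # Objective 2: maintain capabilities -- still at least as
        # performant as at deployment, so keep serving the model.
        return "keep current model"

    print(govern_batch(batch_auc=0.84, drift_score=0.10))  # intervene
    print(govern_batch(batch_auc=0.92, drift_score=0.40))  # retrain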

Benefits of using the tool in this use case

  • The user can continually monitor their models to ensure that performance remains consistent (or improves), with built-in safeguards to protect against performance degradation;
  • CML offers continuous support to models in production and automatically updates them where it recognises new trends. This is a stronger assurance than checking for bias statically, at the point of deployment only: as the model updates with new data, so must the assurance surrounding it;
  • The user is empowered to provide a full audit of the origins of the model, having access to the lineage and provenance of the data and models, as illustrated in the sketch after this list. This traceability will be key to holding companies accountable for, and keeping them aware of, how their system of models has been built and subsequently maintained.
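
As an illustration of the third point, the sketch below shows one way a lineage record for a retrained model could be captured. The schema and the lineage_record helper are hypothetical, invented for this example rather than taken from CML; they only indicate the kind of provenance a full audit relies on.

    # Hypothetical lineage record; the schema is illustrative, not CML's.
    import hashlib
    import json
    from datetime import datetime, timezone

    def lineage_record(model_version: str, parent_version: str,
                       training_data: bytes, metrics: dict) -> dict:
        """Capture where a model came from and what it was trained on."""
        return {
            "model_version": model_version,
            "parent_version": parent_version,   # chain back to deployment
            "data_sha256": hashlib.sha256(training_data).hexdigest(),
            "trained_at": datetime.now(timezone.utc).isoformat(),
            "metrics": metrics,                 # e.g. {"auc": 0.91}
        }

    # Each retraining appends a record, so an auditor can walk the chain
    # from the current model back to the original deployment.
    audit_log = [lineage_record("v2", "v1", b"batch-2 data", {"auc": 0.91})]
    print(json.dumps(audit_log, indent=2))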

Shortcomings of using the tool in this use case

CML was built for models that use batch inference (for example, at an hourly, daily, or weekly cadence). Further research is expected to extend CML beyond batch-inference models and to broaden the range of model types it covers. Care should be taken to ensure that the solution domain and modelling environment support CML as a technique.
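
For orientation, "batch inference" here means scoring claims in scheduled groups rather than one at a time as they arrive. The sketch below illustrates that pattern; the data-access stub and random scorer are stand-ins invented for the example, not part of CML.

    # Minimal batch-inference sketch. The stubs below are invented
    # stand-ins for the example, not part of CML or Mind Foundry's API.
    import random

    def fetch_new_claims():
        """Stand-in for pulling claims received since the last run."""
        return [{"claim_id": i} for i in range(5)]

    def fraud_score(claim) -> float:
        """Stand-in for the deployed fraud model's inference call."""
        return random.random()

    def run_batch():
        claims = fetch_new_claims()
        # One pass over the whole batch rather than per-event scoring;
        # this is the setting CML was built for.
        scored = [(fraud_score(c), c) for c in claims]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored  # highest-risk claims go to triage first

    print(run_batch())
    # In production this job would run on a scheduler at the chosen
    # cadence (hourly, daily, or weekly), e.g. via cron.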

Link to full use case.

This case study was published in collaboration with the Centre for Data Ethics and Innovation Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.
