Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

18717 citations of this metric

Local Interpretable Model-agnostic Explanations (LIME) is a method developed to enhance the explainability and transparency of machine learning models, particularly those that are complex and difficult to interpret. It is designed to provide clear, localized explanations for individual predictions made by any type of machine learning model, whether it's a classifier, regressor, or neural network.

The core idea of LIME is to approximate the behavior of a complex model with a simpler, more interpretable model in the context of a specific prediction. To achieve this, LIME perturbs the input data by making small adjustments to the input features and observes how these changes influence the model’s predictions. The perturbed inputs, along with their corresponding outputs, form a dataset that is used to build a surrogate model.
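To make the perturb-and-query step concrete, here is a minimal, self-contained sketch (not the lime library itself). The function name, the Gaussian noise scale, and the `black_box_predict` callable are illustrative assumptions; the latter stands in for any fitted model's prediction function.

    # Conceptual sketch of LIME's sampling step, assuming a 1-D numpy feature
    # vector `x` and a hypothetical `black_box_predict` callable.
    import numpy as np

    def make_perturbation_dataset(x, black_box_predict, n_samples=1000, scale=0.1, seed=0):
        """Draw small Gaussian perturbations around x and query the black-box model."""
        rng = np.random.default_rng(seed)
        # Perturbed copies of the instance of interest.
        Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
        # Black-box outputs for each perturbed copy (e.g. the predicted
        # probability of the class of interest).
        y = black_box_predict(Z)
        return Z, y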
 

This surrogate model, which is typically a simpler model such as linear regression or a decision tree, is trained to focus on the region around the specific instance of interest by weighting data points based on their proximity to the original input. This localized approach allows LIME to accurately capture the decision-making process of the original model in that specific context.
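A minimal sketch of that proximity-weighted fit, reusing the perturbed samples Z and outputs y from the sketch above, might look as follows. The exponential kernel is broadly similar in spirit to LIME's default; the kernel width and the ridge regularisation are illustrative assumptions rather than the library's exact settings.

    # Fit an interpretable linear surrogate on the perturbed samples,
    # weighting each sample by its closeness to the original instance x.
    import numpy as np
    from sklearn.linear_model import Ridge

    def fit_local_surrogate(x, Z, y, kernel_width=0.75):
        # Euclidean distance of each perturbed sample to the original instance.
        distances = np.linalg.norm(Z - x, axis=1)
        # Exponential kernel: nearby samples receive weights close to 1,
        # distant samples are effectively ignored.
        weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(Z, y, sample_weight=weights)
        # The coefficients approximate each feature's local influence
        # on the black-box prediction around x.
        return surrogate.coef_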
 

By analyzing the surrogate model, LIME identifies which features were most influential in the original model's prediction, providing a clear and interpretable explanation. This makes LIME a powerful tool for enhancing the transparency of machine learning models, enabling stakeholders to understand and trust the decisions made by these models, especially in critical fields like healthcare, finance, and legal decision-making.
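In practice these steps are wrapped by the reference lime Python package (pip install lime). The snippet below is a usage sketch only; the iris data and the random-forest classifier are illustrative choices, not part of LIME itself.

    # Usage sketch with the lime package on a tabular classifier.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        training_data=data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )

    # Explain one prediction: which features pushed the model toward its output?
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    print(explanation.as_list())

Here explanation.as_list() returns (feature, weight) pairs; the sign and magnitude of each weight indicate how that feature moved the prediction for this particular instance.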

About the metric


Github stars:

  • 11500

Github forks:

  • 1800


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.