Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Credo AI Lens




What is Credo AI Lens (Lens)?

Lens is an open-source Python package that provides a jack-of-all-trades tool for AI assessment. Lens focuses on the dimensions of an AI system that are necessary for governance, including how the system performs, its fairness characteristics, characteristics of the dataset, etc.

While developing Lens, we kept in mind four characteristics we believe are critical to AI governance tooling: transparency, adaptability, perspective, and connectivity. Transparency refers to the need for any assessment tool to be open to inspection by any interested party, which increases the correctness of the tool, improves trust, and helps foster community. Adaptability refers to the need for an AI assessment tool to grow in tandem with the evolving AI landscape. Perspective refers to the fact that Lens is opinionated, presenting a curated set of assessments that can help practitioners with little background in responsible AI practices. Finally, connectivity refers to the need for an assessment tool to integrate seamlessly both with the AI development environment and with other governance tools.

What can Lens do?

Lens has a number of evaluators that cover different dimensions of AI assessment. Particular attention is paid to dimensions that are components of an effective AI governance strategy. A list of the official assessments currently supported by Lens can be found below:

  1. Equity: assesses the equality of outcomes across a sensitive feature*.
  2. Fairness: assesses how a sensitive feature* relates to other features in the dataset (i.e., proxy detection) and how model performance varies based on the sensitive feature.
  3. Performance: assesses model performance according to user-specified metrics and disaggregates the results across sensitive features.
  4. Explainability: assesses feature importance for a model based on SHAP values.
  5. Data Profiling: provides descriptive statistics about a dataset.

Lens also includes several experimental features. Please check our official documentation page for a full overview of Lens capabilities.
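The disaggregation idea behind the Performance and Fairness evaluators can be sketched in plain Python. The following is a conceptual illustration only, not Lens's actual API; the function names and the toy data are hypothetical.

```python
# Conceptual sketch (not Lens's API): computing a performance metric
# separately for each group of a sensitive feature, so that gaps between
# groups become visible.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def disaggregate(y_true, y_pred, sensitive, metric=accuracy):
    """Compute `metric` separately for each group of the sensitive feature."""
    groups = sorted(set(sensitive))
    return {
        g: metric(
            [t for t, s in zip(y_true, sensitive) if s == g],
            [p for p, s in zip(y_pred, sensitive) if s == g],
        )
        for g in groups
    }

# Hypothetical toy data: binary labels and a sensitive feature with two groups.
y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred    = [1, 0, 0, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group = disaggregate(y_true, y_pred, sensitive)
print(per_group)  # → {'a': 0.75, 'b': 0.5}; the gap flags a potential fairness concern
```

In Lens, evaluators of this kind accept user-specified metrics and sensitive features, so the same disaggregation pattern applies across whatever performance measures a governance process requires.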

How to use Lens in Python for AI Assessment?

Installing Lens is easy through pip. See the setup documentation for directions. We encourage the interested reader to run the quickstart demo to get started with Lens.

The connection between Lens and AI Governance

The value of AI assessment is truly realized within a comprehensive AI governance process. While working in unison with Credo AI Platform, Lens can translate a list of governance requirements previously defined on the Platform into a set of assessments on models and data. The assessment results—or “evidence” in Credo AI parlance—can then be exported to the Platform, where they are tracked and translated into standardized reports.

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Add use case

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.