These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
pytorch-grad-cam
Documentation with advanced tutorials: https://jacobgil.github.io/pytorch-gradcam-book
This is a package with state-of-the-art methods for Explainable AI for computer vision. It can be used to diagnose model predictions, either in production or while developing models, and it also aims to serve as a benchmark of algorithms and metrics for research on new explainability methods.
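As a quick orientation, a typical workflow with the package is: pick a target layer, run a CAM method over an input tensor, and overlay the resulting heatmap on the image. The sketch below is a minimal example in that spirit; the ResNet-50 model, the class index, and the random placeholder images are illustrative assumptions rather than anything prescribed by this listing, and exact argument names can vary between package versions.

```python
# Minimal Grad-CAM sketch (assumes the pytorch-grad-cam package is installed;
# the model, class index, and placeholder images below are illustrative only).
import numpy as np
import torch
import torchvision
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
target_layers = [model.layer4[-1]]           # the last conv block is a common choice

# In practice this is a normalized image batch of shape (B, 3, H, W).
input_tensor = torch.randn(1, 3, 224, 224)
rgb_img = np.float32(np.random.rand(224, 224, 3))   # original image in [0, 1] for the overlay

cam = GradCAM(model=model, target_layers=target_layers)
targets = [ClassifierOutputTarget(281)]      # explain a specific class, e.g. ImageNet index 281

grayscale_cam = cam(input_tensor=input_tensor, targets=targets)   # (B, H, W) heatmaps
visualization = show_cam_on_image(rgb_img, grayscale_cam[0, :], use_rgb=True)
```

The other attribution methods shipped with the package follow the same constructor and call pattern, so swapping the CAM class is usually the only change needed to compare methods.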
⭐ Comprehensive collection of Pixel Attribution methods for Computer Vision.
⭐ Tested on many common CNN architectures and Vision Transformers.
⭐ Advanced use cases: Works with Classification, Object Detection, Semantic Segmentation, Embedding-similarity and more.
⭐ Includes smoothing methods to make the CAMs less noisy and easier to read.
⭐ High performance: full support for batches of images in all methods (see the sketch after this list).
⭐ Includes metrics for checking if you can trust the explanations, and tuning them for best performance.
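The smoothing and batching bullets above correspond to call-time options on the CAM object. A brief sketch, assuming the `aug_smooth` and `eigen_smooth` flag names documented in the project's tutorials (their availability may depend on the installed version):

```python
# Batched, smoothed CAMs with pytorch-grad-cam (model and layer choice are illustrative).
import torch
import torchvision
from pytorch_grad_cam import GradCAM

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])

batch = torch.randn(8, 3, 224, 224)          # every method accepts a whole batch at once

# aug_smooth averages the CAM over test-time augmentations (e.g. horizontal flips);
# eigen_smooth keeps the first principal component of the weighted activations to reduce noise.
grayscale_cams = cam(input_tensor=batch,
                     targets=None,           # None = explain each image's highest-scoring class
                     aug_smooth=True,
                     eigen_smooth=True)      # array of shape (8, 224, 224), one heatmap per image
```

The trust and tuning metrics mentioned in the last bullet are covered separately in the documentation linked above.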
About the tool
Tool type(s):
Objective(s):
Purpose(s):
Country/Territory of origin:
Lifecycle stage(s):
Type of approach:
Maturity:
Usage rights:
License:
Target users:
Programming languages:
- Python
Github stars:
- 40
Github forks:
- 10