These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
CLIPBERTSCORE is a simple weighted combination of CLIPScore (Hessel et al., 2021) and BERTScore (Zhang et al., 2020), chosen to leverage the robustness and strong factuality-detection performance of each component: CLIPScore evaluates the image-summary pair, and BERTScore evaluates the document-summary pair.
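In other words, the combined score is a convex combination of the two component scores: CLIPBERTSCORE = α · CLIPScore(image, summary) + (1 − α) · BERTScore(document, summary). The following is a minimal Python sketch of that combination, assuming the publicly available torchmetrics CLIPScore and bert-score packages; the weight α and the normalization to a common scale are illustrative assumptions, not the authors' exact configuration.

```python
import torch
from bert_score import score as bert_score
from torchmetrics.multimodal.clip_score import CLIPScore

# CLIP backbone is an assumed choice, not necessarily the paper's.
clip_metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")

def clipbertscore(image: torch.Tensor, document: str, summary: str,
                  alpha: float = 0.5) -> float:
    """Weighted combination of image-summary CLIPScore and
    document-summary BERTScore F1. `alpha` is an assumed weight."""
    # Image-summary alignment: torchmetrics returns 100 * max(cos sim, 0),
    # so divide by 100 to bring it onto roughly the same [0, 1] scale.
    s_clip = clip_metric(image, summary).item() / 100.0
    # Document-summary alignment: BERTScore F1 between summary and source.
    _, _, f1 = bert_score([summary], [document], lang="en")
    s_bert = f1.item()
    return alpha * s_clip + (1.0 - alpha) * s_bert
```

Treating α as a tunable hyperparameter lets practitioners shift emphasis between visual grounding (CLIPScore) and textual faithfulness (BERTScore) depending on the task.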
CLIPBERTSCORE can support Explainability by providing a quantitative measure of how well AI-generated outputs (such as captions or retrievals) semantically align with reference data. This helps developers and users understand whether the system's outputs are meaningful and relevant, which is one component of making AI decisions more interpretable. The metric does not itself generate explanations, however; it supports the evaluation of output quality, which can then be used as part of an explainability framework.
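As a hypothetical usage sketch (continuing the `clipbertscore` helper above, with an illustrative file path and texts), comparing scores across candidate outputs can flag those that drift from the source content:

```python
from torchvision.io import read_image

image = read_image("figure.png")  # uint8 tensor, shape (3, H, W)
document = "The report details a 12% rise in renewable energy output in 2023."
candidates = [
    "Renewable energy output rose 12% in 2023.",    # faithful
    "Fossil fuel production fell sharply in 2023.",  # unfaithful
]
for cand in candidates:
    print(cand, "->", round(clipbertscore(image, document, cand), 3))
# A markedly lower score signals an output that is less grounded
# in the source image and document.
```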
References

Hessel, J., Holtzman, A., Forbes, M., Le Bras, R., & Choi, Y. (2021). CLIPScore: A Reference-free Evaluation Metric for Image Captioning. In Proceedings of EMNLP 2021.

Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., & Artzi, Y. (2020). BERTScore: Evaluating Text Generation with BERT. In Proceedings of ICLR 2020.