Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

ETSI GR SAI 001 V1.1.1 - Securing Artificial Intelligence (SAI) - AI Threat Ontology



The purpose of this work item is to define what constitutes an AI threat and how it might differ from threats to traditional systems. The rationale for this work is that there is currently no common understanding of what constitutes an attack on AI and how such an attack might be created, hosted and propagated. The AI Threat Ontology deliverable seeks to align terminology across the different stakeholders and multiple industries. This document defines what is meant by these terms in the context of cyber and physical security, with an accompanying narrative that should be readily accessible to both experts and less informed audiences across multiple industries. Note that this threat ontology addresses AI as a system, as an adversarial attacker, and as a system defender. © Copyright 2023, ETSI

The information about this standard has been compiled by the AI Standards Hub, an initiative dedicated to knowledge sharing, capacity building, research, and international collaboration in the field of AI standards. You can find more information and interactive community features related to this standard by visiting the Hub’s AI standards database here. To access the standard directly, please visit the developing organisation’s website.

About the tool

Tags:

  • Robustness
  • Security and resilience
  • Safety

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.