Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

ETSI EG 203 341 V1.1.1 - Core Network and Interoperability Testing (INT); Approaches for Testing Adaptive Networks



The characteristics of 'adaptive networks', such as virtualization, self-organization, self-configuration, self-optimization, self-healing and self-learning, offer huge advantages in future networks. While technologies such as Network Functions Virtualization (NFV), Self-Organizing Networks (SON), Mobile Edge Computing (MEC) and the Autonomic Network Infrastructure (AFI) may not each exhibit all of these characteristics, they do have one thing in common: they are all dynamic rather than static, reacting to changing traffic conditions, applications and service demands, as well as to changes in the ecosystem environment. This work item will develop a methodology (guide) that extends current experience and testing approaches.

© Copyright 2023, ETSI
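
Because adaptive networks react to events at run time rather than following a fixed configuration, testing them shifts from checking static input/output pairs to checking observable behaviour under changing conditions. The sketch below is purely illustrative and is not taken from the standard: the AdaptiveNetwork class, its methods and the recovery budget are hypothetical. It shows the kind of behaviour-oriented test case such a methodology is concerned with, i.e. inject a fault and verify that the system restores service within an acceptable time.

    # Illustrative sketch only: a hypothetical test of a self-healing network
    # component. The AdaptiveNetwork class, its methods and the recovery budget
    # are invented for illustration and are not defined by ETSI EG 203 341.

    import time


    class AdaptiveNetwork:
        """Toy stand-in for an adaptive (self-healing) network under test."""

        def __init__(self, nodes):
            self.active_nodes = set(nodes)

        def inject_fault(self, node):
            # Simulate a node failure that the network must react to.
            self.active_nodes.discard(node)

        def self_heal(self):
            # Placeholder for the system's own healing logic; here a standby
            # node simply takes over after a short delay.
            time.sleep(0.1)
            self.active_nodes.add("standby-node")

        def is_service_available(self):
            # Service is considered available if at least two nodes are active.
            return len(self.active_nodes) >= 2


    def test_self_healing_recovers_within_budget():
        net = AdaptiveNetwork(nodes=["node-a", "node-b"])
        net.inject_fault("node-b")
        assert not net.is_service_available()  # degraded state after the fault

        start = time.monotonic()
        net.self_heal()
        recovery_time = time.monotonic() - start

        # Behaviour-oriented check: the observable outcome matters, not the
        # specific internal reconfiguration path taken by the system.
        assert net.is_service_available()
        assert recovery_time < 1.0  # hypothetical recovery budget in seconds


    if __name__ == "__main__":
        test_self_healing_recovers_within_budget()
        print("self-healing test passed")

The design point in this sketch is that the assertion targets the observable outcome (service availability within a time budget) rather than any particular internal reconfiguration path, which is what makes such tests applicable to dynamic, self-adapting systems.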

The information about this standard has been compiled by the AI Standards Hub, an initiative dedicated to knowledge sharing, capacity building, research, and international collaboration in the field of AI standards. You can find more information and interactive community features related to this standard by visiting the Hub’s AI standards database here. To access the standard directly, please visit the developing organisation’s website.

About the tool



Tool type(s):


Objective(s):


Target sector(s):


Type of approach:



Usage rights:


Geographical scope:


Tags:

  • System architecture
  • Interoperability


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.