IEEE 2830-2021 - IEEE Standard for Technical Framework and Requirements of Trusted Execution Environment based Shared Machine Learning
This standard defines a framework and architectures for machine learning in which a model is trained using encrypted data that has been aggregated from multiple sources and is processed by a third-party trusted execution environment (TEE). A distinctive feature of this technique is the essential use of a third-party TEE for computations. The standard specifies functional components, workflows, security requirements, technical requirements, and protocols. © IEEE 2022 All rights reserved.
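The workflow the standard describes can be illustrated with a minimal sketch. This is not an implementation of IEEE 2830-2021; the names (`DataOwner`, `Enclave`) are hypothetical, the XOR "encryption" is a toy stand-in for real cryptography, and the class playing the TEE role would in practice be a hardware-backed enclave. The sketch only shows the data flow: each source encrypts its data, ciphertexts are sent to the third-party TEE, and decryption and training happen only inside the TEE boundary.

```python
# Illustrative sketch of TEE-based shared machine learning (assumption-laden,
# not the IEEE 2830-2021 protocol itself).
from dataclasses import dataclass

KEY = 0x5A  # toy symmetric key provisioned to the enclave (assumption)

def toy_encrypt(values):
    # Stand-in for real encryption; XOR is NOT secure.
    return [v ^ KEY for v in values]

def toy_decrypt(values):
    return [v ^ KEY for v in values]

@dataclass
class DataOwner:
    """A data source that shares only encrypted data."""
    name: str
    data: list

    def contribute(self):
        return toy_encrypt(self.data)

class Enclave:
    """Stands in for the third-party trusted execution environment."""

    def train(self, encrypted_batches):
        # Inside the TEE, ciphertexts are decrypted and aggregated;
        # plaintext never leaves the enclave boundary.
        pooled = [v for batch in encrypted_batches for v in toy_decrypt(batch)]
        # "Training" is reduced to computing a mean as a stand-in model.
        return sum(pooled) / len(pooled)

owners = [DataOwner("A", [1, 2, 3]), DataOwner("B", [4, 5, 6])]
model = Enclave().train([o.contribute() for o in owners])
print(model)  # 3.5
```

The point of the sketch is the trust boundary: data owners never exchange plaintext with each other or with the model consumer, which is the property the standard's security requirements are built around.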
The information about this standard has been compiled by the AI Standards Hub, an initiative dedicated to knowledge sharing, capacity building, research, and international collaboration in the field of AI standards. More information and interactive community features related to this standard are available in the Hub’s AI standards database. To access the standard directly, please visit the developing organisation’s website.
About the tool
Tags:
- Privacy
- System architecture
- Data collection
- Data processing
- Security and resilience
- Data protection
- Data sharing