These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
The most general time-based metric measures the time until the adversary succeeds. Because it assumes that the adversary will eventually succeed, it is an example of a pessimistic metric. The metric relies on a definition of success and therefore varies with how success is defined in a given scenario. In a communication system, for example, success can mean that the adversary identifies n out of N of the target's possible communication partners. Success can also mean that the adversary first compromises a communication path. In an onion routing system such as Tor, a path is compromised when the adversary controls all relays on a user's onion routing path.
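The metric can be estimated by simulation. The sketch below is a deliberately simplified, hypothetical model (not the analysis from the cited use case): a user builds a fresh 3-hop circuit at a fixed interval, the adversary is assumed to control a fraction `adv_fraction` of guard and exit bandwidth, and a circuit counts as compromised when the adversary holds both its guard and its exit. The function returns the simulated time until the first compromise, i.e., the pessimistic time-until-success.

```python
import random

def time_to_first_compromise(adv_fraction, circuit_interval_min=10.0, rng=None):
    """Simulated time (minutes) until the adversary first compromises a circuit.

    Hypothetical model for illustration: a new 3-hop circuit every
    `circuit_interval_min` minutes; guard and exit are each controlled by
    the adversary independently with probability `adv_fraction`. Real Tor
    clients pin a long-lived guard, so this overstates circuit diversity.
    """
    rng = rng or random.Random()
    t = 0.0
    while True:
        t += circuit_interval_min
        guard_compromised = rng.random() < adv_fraction
        exit_compromised = rng.random() < adv_fraction
        if guard_compromised and exit_compromised:
            # Pessimistic metric: the loop only ends in adversary success.
            return t

# Monte Carlo estimate of the expected time until success
runs = [time_to_first_compromise(0.1, rng=random.Random(i)) for i in range(1000)]
avg_time = sum(runs) / len(runs)
```

With these illustrative parameters each circuit is compromised with probability 0.01, so the expected time until success is on the order of 100 circuits, i.e., roughly 1000 minutes; stronger adversaries shrink this time quadratically in their bandwidth share.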
Related use cases:
Users get routed: Traffic correlation on Tor by realistic adversaries
Uploaded on Nov 3, 2022. We present the first analysis of the popular Tor anonymity network that indicates the security of typical users against reasonably realistic adversaries in the Tor network or i...