Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

AIShield AI Security Product

AIShield’s AI Security product is an API-based solution used by organizations and system integrators to mitigate AI risks before and after deployment. It hardens AI systems, making them trustworthy and secure, and thereby improves safety and compliance with AI regulations and cybersecurity guidelines. The product acts as the last layer of defense for the AI/ML model itself, protecting it against adversarial threats such as model extraction, model evasion, data poisoning, and membership inference: threats that conventional network security measures cannot address.
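
To illustrate how such an API-based assessment is typically invoked, the sketch below submits a model artifact for an adversarial vulnerability scan in Python. The base URL, endpoint path, payload fields, and authentication scheme shown here are placeholders, not AIShield’s actual API; the product’s API documentation defines the real interface.

    import requests

    # Placeholder base URL, key, and field names -- for illustration only;
    # the real AIShield API is defined in its product documentation.
    API_BASE = "https://api.example-aishield.com/v1"
    API_KEY = "YOUR_API_KEY"

    def request_vulnerability_assessment(model_path: str, task_type: str) -> dict:
        """Submit a trained model artifact for an adversarial vulnerability scan."""
        with open(model_path, "rb") as f:
            response = requests.post(
                f"{API_BASE}/assessments",  # hypothetical endpoint
                headers={"Authorization": f"Bearer {API_KEY}"},
                data={
                    "task_type": task_type,  # e.g. "image_classification"
                    "attack_families": "extraction,evasion,poisoning,inference",
                },
                files={"model": f},
                timeout=60,
            )
        response.raise_for_status()
        return response.json()  # e.g. an assessment ID to poll for the report

    result = request_vulnerability_assessment("model.h5", "image_classification")
    print(result)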

Highlights

  • API-based AI security vulnerability assessment and defense: analysis of AI/ML models against attacks such as theft, poisoning, evasion, and inference is currently available for image classification, sentiment analysis, time series forecasting/classification, and tabular classification. Security incidents can be reported via SIEM connectors to Splunk, and threat-hunting capabilities are aided by vulnerability analysis and active monitoring. (A minimal sketch of an evasion attack follows this list.)
  • Wide coverage of AI attacks: supports 200+ attack types across 20+ model and data type variations (e.g., image classification, time series forecasting).
  • Ease of implementation: easy-to-use APIs with ready reference implementations in Jupyter Notebooks, product guides, Postman configuration files, and API documentation. Easy integration with MLOps platforms via the product API. SIEM/SOAR connectivity via containerized defense (deployed by the customer).
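
For context, "evasion" here means perturbing an input at inference time so that a model misclassifies it. Below is a minimal, generic sketch of one classic evasion technique, the Fast Gradient Sign Method (FGSM); it is illustrative only and is not AIShield code, and the toy model, input, and epsilon budget are arbitrary.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_evasion(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        """Craft an adversarial example with the Fast Gradient Sign Method."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Nudge every input feature in the direction that increases the loss,
        # bounded by an L-infinity budget of epsilon.
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    # Toy demonstration on an untrained classifier (illustration only).
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)    # a fake 28x28 grayscale "image"
    y = torch.tensor([3])           # an arbitrary label
    x_adv = fgsm_evasion(model, x, y)
    print((x_adv - x).abs().max())  # perturbation stays within epsilon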

About the tool


Developing organisation(s): AIShield

Country of origin:

Type of approach:

License:

Required skills:

Tags:

  • ai incidents
  • ai responsible
  • ai risks
  • building trust with ai
  • demonstrating trustworthy ai
  • trustworthy ai
  • validation of ai model
  • ai assessment
  • ai governance
  • ai reliability
  • ai auditing
  • fairness
  • ai risk management
  • ai compliance
  • ai vulnerabilities
  • ai security
  • ml security
  • robustness
  • explainability
  • adversarial ai


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.