AIShield AI Security Product
AIShield’s AI Security product is an API-based solution used by organizations and system integrators to mitigate AI risks both before and after deployment. It fortifies AI systems, making them trustworthy and secure, thereby enhancing safety and compliance with AI regulations and cybersecurity guidelines. The product acts as the last layer of defense for the AI/ML model itself, protecting it from adversarial threats such as model extraction, model evasion, data poisoning, and membership inference – threats that typical network security measures cannot address.
Highlights
- API-based AI security vulnerability assessment and defense: analyzes various types of AI/ML models against attacks such as theft, poisoning, evasion, and inference; image classification, sentiment analysis, time-series forecasting/classification, and tabular classification are currently supported. Security incidents can be reported via SIEM connectors to Splunk, and threat-hunting capabilities are aided by vulnerability analysis and active monitoring.
- Wide coverage of AI attacks: supports 200+ attack types across 20+ model and data type variations (e.g., image classification, time-series forecasting).
- Ease of implementation: easy-to-use APIs with ready reference implementations in Jupyter Notebooks, product guides, Postman configuration files, and API documentation. Easy integration with MLOps platforms via the product API. SIEM/SOAR connectivity via containerized defense (deployed by the customer).
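To make the API-based workflow above concrete, the sketch below shows how a client might assemble a vulnerability-assessment request for submission to such a service. This is a minimal, hypothetical illustration: the endpoint, field names, and schema are assumptions for clarity, not AIShield's actual API, which is documented in the product's API reference and Postman collections.

```python
# Hypothetical sketch: the endpoint URL, header names, and payload schema
# below are illustrative assumptions, NOT AIShield's actual API.
import json

def build_assessment_payload(task_type, model_format, attack_types):
    """Assemble a vulnerability-assessment request body (illustrative schema)."""
    return {
        "task": task_type,            # e.g. "image_classification"
        "model_format": model_format, # e.g. "onnx"
        "attacks": attack_types,      # e.g. ["extraction", "evasion", "poisoning"]
    }

payload = build_assessment_payload(
    "image_classification", "onnx", ["extraction", "evasion"]
)
print(json.dumps(payload, indent=2))

# In practice the payload would be POSTed with an API key, e.g. (hypothetical):
#   requests.post("https://api.example.com/v1/assessments",
#                 headers={"x-api-key": API_KEY}, json=payload)
```

An MLOps pipeline could call such an endpoint as a gate before promoting a model to production, failing the deployment if the returned vulnerability score exceeds a threshold.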
Tags:
- ai incidents
- ai responsible
- ai risks
- building trust with ai
- demonstrating trustworthy ai
- trustworthy ai
- validation of ai model
- ai assessment
- ai governance
- ai reliability
- ai auditing
- fairness
- ai risk management
- ai compliance
- ai vulnerabilities
- ai security
- ml security
- robustness
- explainability
- adversarial ai