Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

NB Defense

Brought to you by Protect AI, NB Defense is a JupyterLab extension and a CLI tool that encourages you to think about security at every step of your data science practice within Jupyter. It scans your notebooks for problematic items and surfaces those insights immediately in JupyterLab, letting you see exactly where each problem is and react to it before sharing problematic code or content with your peers or the world.

Specifically, it detects issues with:

  1. Secrets - API keys, private keys, authentication tokens, and other security credentials.
  2. PII - Sensitive data and personally identifiable information.
  3. CVEs - Known vulnerabilities and exposures (CVEs) in open-source ML frameworks, libraries, and packages.
  4. Third-party licenses - Non-permissive licenses in open-source ML frameworks, libraries, and packages.
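To make the first two categories concrete, the sketch below shows the kind of pattern matching a secrets and PII scanner performs on a notebook cell's source. The regexes and the `scan_source` function are illustrative assumptions for this sketch, not NB Defense's actual detection logic, which is considerably more sophisticated.

```python
import re

# Toy detection rules (assumptions for illustration only).
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]+['\"]"),
    "email address (PII)": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def scan_source(source: str) -> list[str]:
    """Return the names of all patterns found in a notebook cell's source."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(source)]
```

For example, `scan_source('api_key = "abc123"')` flags a generic API key, while a cell containing only `print('hello world')` passes cleanly. A real scanner layers many more rules, entropy checks, and context awareness on top of matching like this.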

The JupyterLab Extension and CLI can both be easily configured to scan for specific types of secrets, PII, and third-party licenses. This allows you to set the appropriate sensitivity of the scan and tailor the security review process to your specific needs.

The NB Defense CLI is designed to scan an entire Git repository or folders containing notebooks, enabling you to leverage NB Defense's full security capabilities outside of the Jupyter environment. The CLI tool can be integrated into Continuous Integration (CI) systems or run as a pre-commit hook, keeping the development process streamlined.
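The pre-commit idea can be sketched as follows. The helper names below are assumptions for illustration; in a real hook, each staged notebook would be passed to the NB Defense CLI (consult the NB Defense documentation for the actual command and flags), and a non-zero scanner exit status would abort the commit.

```python
import subprocess

def staged_notebooks(paths: list[str]) -> list[str]:
    """Filter a list of staged file paths down to Jupyter notebooks."""
    return [p for p in paths if p.endswith(".ipynb")]

def list_staged_files() -> list[str]:
    """Return the paths Git has staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

# In a real hook, each notebook from staged_notebooks(list_staged_files())
# would be handed to the scanner CLI, and any finding would make the hook
# exit non-zero, blocking the commit before problematic content is shared.
```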

About the tool

Developing organisation(s):

Tool type(s):

Impacted stakeholders:

Country/Territory of origin:

License:

Stakeholder group:

Required skills:
Tags:

  • regulation compliance
  • ai security
  • ml security


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.