Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SynthID

SynthID uses two deep learning models — one for watermarking and another for identifying — which were trained together on a diverse set of images. 

The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content.
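
To make the joint-training idea concrete, below is a minimal sketch of an encoder-decoder watermarking setup optimised on combined objectives: recovering the embedded message and keeping the watermarked image visually close to the original. The architectures, message size, and loss weights are illustrative assumptions, not SynthID's actual design.

```python
# Illustrative joint training of a watermark encoder (embeds) and decoder (identifies).
# All model shapes and hyperparameters are assumptions for the sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WatermarkEncoder(nn.Module):
    """Embeds a binary message into image pixels as a small residual."""
    def __init__(self, message_bits: int = 64):
        super().__init__()
        self.fc = nn.Linear(message_bits, 16 * 16)
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, message):
        b, _, h, w = image.shape
        # Broadcast the message to a spatial map and concatenate it with the image.
        msg_map = self.fc(message).view(b, 1, 16, 16)
        msg_map = F.interpolate(msg_map, size=(h, w), mode="nearest")
        residual = self.conv(torch.cat([image, msg_map], dim=1))
        # Small perturbation so the watermark stays visually imperceptible.
        return torch.clamp(image + 0.01 * residual, 0.0, 1.0)

class WatermarkDecoder(nn.Module):
    """Predicts the embedded message bits from a (possibly modified) image."""
    def __init__(self, message_bits: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, message_bits),
        )

    def forward(self, image):
        return self.net(image)  # logits per message bit

encoder, decoder = WatermarkEncoder(), WatermarkDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

images = torch.rand(8, 3, 64, 64)               # stand-in training batch
message = torch.randint(0, 2, (8, 64)).float()  # stand-in watermark payload

# Combined objectives: correct identification plus imperceptibility.
opt.zero_grad()
watermarked = encoder(images, message)
decode_loss = F.binary_cross_entropy_with_logits(decoder(watermarked), message)
percept_loss = F.mse_loss(watermarked, images)
loss = decode_loss + 10.0 * percept_loss        # loss weighting is an arbitrary example
loss.backward()
opt.step()
```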

Watermarking

SynthID uses an embedded watermarking technology that adds a digital watermark directly into the pixels of AI-generated images, making it imperceptible to the human eye.

We designed SynthID so that it doesn't compromise image quality and so that the watermark remains detectable even after modifications such as adding filters, changing colours and brightness, or saving with lossy compression schemes such as JPEG.
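
This robustness claim can be exercised with a simple test harness that applies the kinds of edits mentioned above and checks whether a detector still finds the watermark. The sketch below uses the standard Pillow library for the image edits; detect_watermark is a hypothetical placeholder, not a real SynthID API.

```python
# Illustrative robustness check: apply common edits (filter, colour/brightness
# changes, lossy JPEG compression) and confirm the watermark is still detected.
import io

from PIL import Image, ImageEnhance, ImageFilter


def detect_watermark(image: Image.Image) -> bool:
    """Hypothetical placeholder for a watermark detector; always reports a hit here."""
    return True


def jpeg_roundtrip(image: Image.Image, quality: int) -> Image.Image:
    """Save and reload the image with lossy JPEG compression at the given quality."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")


# Stand-in for a watermarked image; in practice this would come from the generator.
watermarked = Image.new("RGB", (256, 256), color=(120, 160, 200))

edited_versions = {
    "blur filter": watermarked.filter(ImageFilter.GaussianBlur(radius=2)),
    "colour shift": ImageEnhance.Color(watermarked).enhance(1.5),
    "brightness change": ImageEnhance.Brightness(watermarked).enhance(0.8),
    "lossy JPEG (quality 60)": jpeg_roundtrip(watermarked, quality=60),
}

for edit_name, edited in edited_versions.items():
    print(f"{edit_name}: watermark still detected = {detect_watermark(edited)}")
```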

Identification

SynthID can scan an image for its digital watermark and help users assess whether the content was generated by Imagen.

The tool provides three confidence levels for interpreting the results of watermark identification. If a digital watermark is detected, part of the image is likely generated by Imagen.
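
As an illustration of how three confidence levels might be surfaced programmatically, the snippet below maps a detection score onto three verdicts. The thresholds and wording are illustrative assumptions, not SynthID's actual bands.

```python
# Hypothetical mapping from a detection score to a three-level verdict.
def interpret_detection(score: float) -> str:
    """Map a watermark-detection score in [0, 1] onto a three-level verdict."""
    if score >= 0.9:
        return "Watermark detected: part of this image is likely generated by Imagen."
    if score >= 0.5:
        return "Watermark possibly detected: the result is inconclusive."
    return "No watermark detected."


for score in (0.97, 0.62, 0.08):
    print(f"score={score:.2f} -> {interpret_detection(score)}")
```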

About the tool


Developing organisation(s):

Type of approach:

Target groups:

Stakeholder group:

Geographical scope:

Risk management stage(s):



Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.