These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
SynthID
SynthID uses two deep learning models — one for watermarking and another for identifying — which were trained together on a diverse set of images.
The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content.
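As a rough illustration of that setup, here is a minimal sketch of a watermarking network and a detection network trained jointly on a combined objective: a detection term plus an imperceptibility term that keeps the watermarked image close to the original. The architectures, loss weights and training data below are illustrative assumptions, not SynthID's actual design.

```python
# Minimal sketch of jointly training a watermarker and a detector.
# Everything here (architectures, weights, data) is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Watermarker(nn.Module):
    """Adds a small perturbation directly to the image pixels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        # Small residual keeps the watermarked image visually close to the original.
        return torch.clamp(image + 0.01 * self.net(image), 0.0, 1.0)

class Detector(nn.Module):
    """Predicts a logit for whether an image carries the watermark."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, image):
        return self.net(image)

watermarker, detector = Watermarker(), Detector()
optimizer = torch.optim.Adam(
    list(watermarker.parameters()) + list(detector.parameters()), lr=1e-4
)

for _ in range(100):                       # toy loop on random tensors standing in for images
    clean = torch.rand(8, 3, 64, 64)
    marked = watermarker(clean)

    logits = torch.cat([detector(marked), detector(clean)])
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])

    detection_loss = F.binary_cross_entropy_with_logits(logits, labels)
    imperceptibility_loss = F.mse_loss(marked, clean)      # keep the watermark invisible
    loss = detection_loss + 10.0 * imperceptibility_loss   # weighting is an assumption

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```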
Watermarking
SynthID uses an embedded watermarking technology that adds a digital watermark directly into the pixels of AI-generated images; the watermark is imperceptible to the human eye.
We designed SynthID so that it doesn't compromise image quality and the watermark remains detectable even after modifications such as adding filters, changing colours and brightness, or saving with lossy compression schemes such as JPEG.
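The sketch below illustrates the kind of check this robustness claim implies: apply a filter, a colour and brightness change, and lossy JPEG re-encoding to a watermarked image, then query the detector on the edited result. It reuses the toy watermarker and detector from the previous sketch; the specific edits and parameters are illustrative assumptions, not SynthID's evaluation procedure.

```python
# Sketch of querying the detector after common image edits.
# Assumes the toy `watermarker` and `detector` from the previous sketch exist.
import io
import torch
from PIL import Image, ImageEnhance, ImageFilter
from torchvision.transforms.functional import to_pil_image, to_tensor

def modify(image: Image.Image) -> Image.Image:
    """Simulate typical edits: a blur filter, colour/brightness changes, JPEG re-encoding."""
    image = image.filter(ImageFilter.GaussianBlur(radius=1))
    image = ImageEnhance.Color(image).enhance(1.3)        # colour change
    image = ImageEnhance.Brightness(image).enhance(1.1)   # brightness change
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=75)         # lossy compression
    return Image.open(io.BytesIO(buffer.getvalue()))

with torch.no_grad():
    clean = torch.rand(3, 64, 64)                          # stand-in for a generated image
    marked = watermarker(clean.unsqueeze(0))[0]
    edited = to_tensor(modify(to_pil_image(marked)))
    score = torch.sigmoid(detector(edited.unsqueeze(0)))
    print(f"watermark probability after edits: {score.item():.2f}")
```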
Identification
SynthID can scan an image for its digital watermark and help users assess whether the content was generated by Imagen.
The tool provides three confidence levels for interpreting the results of watermark identification. If a digital watermark is detected, part of the image is likely generated by Imagen.
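A minimal sketch of how a detection score could be mapped to three such confidence levels is shown below; the thresholds and label wording are illustrative assumptions, not SynthID's published behaviour.

```python
# Sketch of mapping a watermark-detection probability to three confidence levels.
# Thresholds and labels are illustrative assumptions.
def confidence_level(score: float) -> str:
    """Return a human-readable verdict for a detection probability in [0, 1]."""
    if score >= 0.9:
        return "Digital watermark detected: part of the image is likely generated by Imagen."
    if score <= 0.1:
        return "Digital watermark not detected."
    return "Digital watermark possibly detected: result is inconclusive."

print(confidence_level(0.95))
```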