These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Watermarking Generative AI
Watermarking the outputs of generative AI systems such as ChatGPT and DALL-E could help protect against fraud and misinformation by making machine-generated content identifiable.
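To illustrate the idea, here is a minimal sketch of one well-known family of text-watermarking schemes (green-list token biasing, as in Kirchenbauer et al.). Everything here is a toy assumption for illustration: the vocabulary, the `tok0` start token, and sampling directly from the green list rather than softly biasing model logits as a real implementation would.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (assumption)
GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

def green_list(prev_token: str) -> set:
    """Derive the 'green' half of the vocabulary from a seed tied to the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def generate_watermarked(length: int, start: str = "tok0") -> list:
    """Generate text by sampling only green tokens (a stand-in for logit biasing)."""
    out, prev = [], start
    rng = random.Random(42)
    for _ in range(length):
        token = rng.choice(sorted(green_list(prev)))
        out.append(token)
        prev = token
    return out

def detect(tokens: list, start: str = "tok0") -> float:
    """Return a z-score: how far the green-token count exceeds chance."""
    prev, hits = start, 0
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    n = len(tokens)
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```

A detector that knows the hashing scheme can check any text without access to the model: watermarked output yields a large z-score, while unrelated text scores near zero. This statistical check is what makes such watermarks useful against fraud and misinformation at scale.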
About the tool