Privacy-enhancing technologies (PETs) foster trust in the collaborative development and sharing of AI models while safeguarding privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through the confidential and minimal use of input data. The second is the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption.
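To make the differential-privacy example concrete, the sketch below shows the classic Laplace mechanism applied to a counting query. This is a minimal illustration, not a technique described in the report itself: the dataset, the `dp_count` helper, and the chosen epsilon are all hypothetical, and a production system would use a vetted library rather than hand-rolled noise sampling.

```python
import random
import math

def laplace_noise(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    record changes the count by at most 1), so Laplace noise with
    scale 1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of survey respondents.
ages = [23, 35, 41, 52, 29, 63, 47]
# Noisy count of respondents aged 40 or older; smaller epsilon
# means stronger privacy but noisier answers.
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

The key trade-off the report alludes to is visible here: lowering epsilon strengthens the privacy guarantee while widening the noise, directly reducing the utility of the released statistic.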
PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. Combining different PETs can help compensate for their individual limitations, but balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies such as guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation.