
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Google's top scientist, Sergei Vassilvitskii, warned EU regulators that a proposal requiring Google to share search engine data with rivals like OpenAI could expose users' private information. Google fears modern AI tools could re-identify anonymized data, posing significant privacy risks if safeguards are not implemented.[AI generated]
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems through Google's AI red team and the potential for AI tools to re-identify anonymized data, posing a privacy risk. The event stems from the use and potential misuse of AI in processing shared search data. No actual harm has been reported yet, but the risk of privacy violations is credible if the EU's data-sharing proposal is enacted without stronger safeguards. Hence, it fits the definition of an AI Hazard, which describes a credible potential for AI-related harm, rather than an AI Incident, since no harm has yet materialized.[AI generated]