
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Zoom has partnered with World, Sam Altman's biometric identity company, to verify that meeting participants are human rather than AI-generated deepfakes. The move follows major financial losses caused by deepfake-enabled video-call scams targeting businesses globally, including a $25 million fraud at Arup in Hong Kong.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (World's Deep Face biometric verification technology) deployed to counteract AI-generated deepfake fraud, which has already caused substantial financial harm to companies. The system's deployment is a direct response to these harms, placing it in the use phase of preventing further incidents. The harms described (financial losses from deepfake fraud) are materialized and significant, meeting the criteria for an AI incident. While the article also raises regulatory and privacy concerns, these are secondary to the central fact that an AI system is involved in addressing an ongoing AI-related harm. The event is therefore best classified as an AI incident.[AI generated]