
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The U.S. Department of Homeland Security is developing AI-powered smart glasses for immigration enforcement agents, enabling real-time biometric identification and access to watchlist data in the field. The project, slated for deployment by 2027, raises significant concerns about privacy, civil liberties, and potential misuse of AI surveillance technologies.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes the development and intended use of AI systems (smart glasses with facial recognition and access to biometric databases) by DHS/ICE for surveillance purposes. The potential harms include violations of civil rights, privacy, and mass surveillance, which are serious human rights concerns. However, the article does not report any actual harm or incident resulting from the use of these glasses; it describes only the plans for, and concerns about, their future use. It therefore fits the definition of an AI Hazard: a situation in which the use of an AI system could plausibly lead to an AI Incident but has not yet done so.[AI generated]