WhoFi AI System Enables Covert Biometric Tracking via Wi-Fi Signal Distortion


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers at La Sapienza University of Rome developed WhoFi, an AI system that uses deep learning to identify and track individuals by analyzing how their bodies disrupt Wi-Fi signals. The technology raises significant privacy concerns, enabling covert surveillance without cameras or consent, and poses risks of unauthorized biometric tracking.[AI generated]
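The re-identification idea can be illustrated with a simplified sketch. This is not the actual WhoFi pipeline, which trains a deep transformer encoder on Wi-Fi channel state information (CSI); here a hand-crafted feature vector stands in for the learned embedding, and all names and readings are hypothetical. The core loop is the same: reduce a person's signal-distortion pattern to a signature, then match new observations against enrolled signatures.

```python
import math

def signature(csi_amplitudes):
    """Reduce a sequence of Wi-Fi CSI amplitude readings (rows = time samples,
    columns = subcarriers) to a fixed feature vector: mean, variance, min, and
    max per subcarrier. A stand-in for a learned deep embedding."""
    n = len(csi_amplitudes)
    subcarriers = len(csi_amplitudes[0])
    feats = []
    for s in range(subcarriers):
        col = [row[s] for row in csi_amplitudes]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        feats.extend([mean, var, min(col), max(col)])
    return feats

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def reidentify(query, gallery):
    """Return the enrolled identity whose signature best matches the query."""
    return max(gallery, key=lambda name: cosine(query, gallery[name]))

# Hypothetical enrolled signatures: two people, 3 subcarriers, 4 time samples.
alice = signature([[0.9, 0.4, 0.7], [1.0, 0.5, 0.6], [0.8, 0.4, 0.7], [0.9, 0.5, 0.65]])
bob = signature([[0.2, 0.8, 0.3], [0.3, 0.9, 0.2], [0.25, 0.85, 0.3], [0.2, 0.9, 0.25]])
gallery = {"alice": alice, "bob": bob}

# A new, unlabelled observation whose distortion pattern resembles Alice's.
query = signature([[0.85, 0.45, 0.7], [0.95, 0.5, 0.65], [0.9, 0.45, 0.68], [0.88, 0.48, 0.66]])
print(reidentify(query, gallery))  # matches the closest enrolled signature
```

The privacy concern follows directly from this structure: once a gallery of signatures exists, any receiver observing CSI can attempt a match, with no camera, tag, or cooperation from the person being tracked.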

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (deep neural network analyzing Wi-Fi signal alterations) developed for biometric identification. Although no harm has yet occurred, the article highlights significant ethical concerns and the plausible risk of covert surveillance and privacy violations if the technology is deployed. Since the research is not yet in practical use but could plausibly lead to harm, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability
Democracy & human autonomy

Industries
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection


Articles about this incident or hazard


Scientists develop method to identify people by how their bodies disrupt Wi-Fi

2025-07-23
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deep neural network analyzing Wi-Fi signal alterations) developed for biometric identification. Although no harm has yet occurred, the article highlights significant ethical concerns and the plausible risk of covert surveillance and privacy violations if the technology is deployed. Since the research is not yet in practical use but could plausibly lead to harm, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Wi-Fi radiation as a spy: New 'WhoFi' technology detects people without cameras

2025-07-24
Notebookcheck
Why's our monitor labelling this an incident or hazard?
WhoFi is an AI system employing deep learning to analyze Wi-Fi signal patterns for person re-identification. The article highlights the system's high accuracy and ability to identify people invisibly, which could be used for surveillance without consent. Although no direct harm is reported, the technology's capabilities pose credible risks of privacy violations and unauthorized surveillance, which are harms to human rights and communities. Therefore, this event represents an AI Hazard due to the plausible future harm from the system's use.

Your body can be fingerprinted and tracked using Wi-Fi signals

2025-07-23
PCWorld
Why's our monitor labelling this an incident or hazard?
The system involves an AI component (neural network) processing Wi-Fi signal data to identify and track people, fulfilling the AI system criterion. The event does not report actual harm occurring yet but highlights the plausible future harm of privacy violations and unauthorized surveillance, which are violations of human rights. The system's development and potential use could lead to significant harms, making it an AI Hazard rather than an Incident. It is not merely complementary information because the main focus is on the potential for harm, not on responses or ecosystem context. Hence, AI Hazard is the appropriate classification.

WhoFi: New surveillance technology can track people by how they disrupt Wi-Fi signals

2025-07-24
Tech Xplore
Why's our monitor labelling this an incident or hazard?
WhoFi is an AI system involving deep neural networks interpreting Wi-Fi signal disruptions to identify individuals, which fits the definition of an AI system. The article does not report any realized harm but highlights significant privacy concerns and the potential for misuse in surveillance, which could plausibly lead to violations of privacy rights and harm to communities. Since the system is not yet deployed and no harm has occurred, it does not qualify as an AI Incident. The main focus is on the potential risks and implications, making it an AI Hazard under the framework.

WhoFi: Wi-Fi Signal Distortion Enables 93% Accurate Biometric ID

2025-07-23
WebProNews
Why's our monitor labelling this an incident or hazard?
The event describes the development and potential use of an AI system (WhoFi) that leverages machine learning to identify individuals from Wi-Fi signal interference patterns. The system's passive and stealthy nature enables tracking without consent, a violation of privacy rights that could lead to mass-surveillance harms. Much of this framing would suggest an AI Hazard, since the system is still in testing and not widely deployed. However, because the article also reports the system's tested accuracy and potential current use cases, and treats the privacy harms as occurring or imminent rather than merely plausible, this qualifies as an AI Incident due to realized or ongoing harm related to privacy and rights violations.

Wi-Fi signals can now track you through walls without cameras or phones, here's how

2025-07-25
Digit
Why's our monitor labelling this an incident or hazard?
WhoFi is an AI system using neural networks to analyze Wi-Fi signal changes for tracking individuals. Although no harm has occurred yet, the article explicitly warns about potential misuse for secret monitoring, which could plausibly lead to violations of privacy and human rights. Since the system is still in research and not deployed, and the harm is potential rather than realized, this qualifies as an AI Hazard under the framework.

You're Already Being Tracked -- And You Don't Even Need to Carry a Device

2025-07-25
impactlab.com
Why's our monitor labelling this an incident or hazard?
WhoFi is an AI system that uses advanced AI models to analyze Wi-Fi signal distortions for biometric identification and tracking. The article highlights that this technology can identify and track people without their knowledge or consent, representing a violation of privacy and potentially human rights. Although no specific harm is reported as having occurred yet, the technology's deployment could plausibly lead to significant harms such as unauthorized surveillance, privacy violations, and breaches of fundamental rights. Therefore, this event constitutes an AI Hazard because it plausibly leads to AI incidents involving violations of rights and harms to communities through covert biometric tracking.

Category: Intellectual Property

2025-07-25
impactlab.com
Why's our monitor labelling this an incident or hazard?
WhoFi uses AI to analyze Wi-Fi signal patterns to identify and track people without their consent, which constitutes a violation of privacy and potentially human rights. Although no direct harm is reported yet, the system's use inherently breaches fundamental rights to privacy and consent, which are protected under applicable laws. Therefore, this event represents an AI Hazard because it plausibly could lead to violations of human rights and privacy harms if deployed or misused.

This New Tracking System Can Use Just Wi-Fi Signals to Identify You

2025-07-25
Gadgets 360
Why's our monitor labelling this an incident or hazard?
Who-Fi qualifies as an AI system, as it uses a transformer-based neural network to analyze Wi-Fi signal data for biometric identification and tracking. Although no actual harm has been reported yet, the technology's capability to identify and track individuals covertly, without their knowledge or consent, poses a credible risk of violations of privacy and human rights. This potential for harm aligns with the definition of an AI Hazard, as the development and possible future use of this AI system could plausibly lead to an AI Incident involving breaches of privacy and surveillance-related harms.

Who-Fi: New Technology Can Track People By How They Disrupt Wi-Fi Signals

2025-07-26
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The article describes Who-Fi, an AI system that uses a transformer-based neural network to analyze Wi-Fi signal distortions for biometric identification and tracking of individuals. Although no harm has yet occurred, as the system is still experimental, the technology's nature and intended use could plausibly lead to violations of privacy and human rights if deployed improperly. Therefore, this qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving violations of human rights or privacy.

What is Who-Fi? The AI-Powered Technology Transforming Wi-Fi Signals Into Biometric Monitoring Tools

2025-07-27
Gizbot
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Who-Fi) that uses AI to analyze Wi-Fi signal distortions to identify and track individuals biometrically. Although the technology is still experimental, it has demonstrated capabilities that could directly lead to violations of privacy and human rights if deployed, such as covert surveillance without consent. The system's evasion capabilities increase the risk of undetected monitoring, which is a clear harm to individuals' rights and privacy. Since the article focuses on the technology's potential to cause harm through surveillance and privacy violations, this qualifies as an AI Hazard, as the harm is plausible but not yet reported as realized.

What is Who-Fi? It will track your activity without a camera; here's what to know

2025-07-28
hindi.moneycontrol.com
Why's our monitor labelling this an incident or hazard?
The article describes an AI system that uses Wi-Fi signals to track and identify individuals' biometric signatures and activities without cameras, which directly implicates privacy rights and personal data protection. This constitutes a violation of human rights related to privacy and data protection, thus qualifying as an AI Incident due to realized harm to individuals' privacy through AI-enabled surveillance.

Who-Fi: A new player in the Wi-Fi world; this technology has everyone worried

2025-07-26
India TV Hindi
Why's our monitor labelling this an incident or hazard?
Who-Fi is an AI system that uses Wi-Fi signals and transformer-based neural networks to identify and track people, which directly implicates privacy rights and could lead to violations of human rights if deployed without safeguards. While no actual harm has yet occurred since it is still experimental, the technology's capabilities plausibly pose a risk of harm to individuals' privacy and rights if used in real-world surveillance. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of privacy and human rights.

Now the number of people in a room can be detected without a camera, a major attack on privacy! Find out what Who-Fi is

2025-07-28
NDTV Gadgets 360 Hindi
Why's our monitor labelling this an incident or hazard?
Who-Fi involves an AI system that analyzes Wi-Fi signal patterns to identify and track people, which directly relates to privacy and biometric data. The technology's use could lead to violations of human rights, specifically privacy rights, if deployed or misused. Since the article indicates the system is still in testing and no actual harm has yet occurred, but the plausible future harm is significant and credible, this event qualifies as an AI Hazard rather than an AI Incident. The article does not describe any realized harm but highlights the potential for serious privacy breaches.

Who-Fi causes a stir: activity will be tracked without cameras, and people can be identified even through walls

2025-07-27
Times Network Hindi
Why's our monitor labelling this an incident or hazard?
The event involves the development and potential use of an AI system (Who-Fi) that can track individuals' activities and biometric data without cameras, which could plausibly lead to violations of privacy and human rights. Although no actual harm has been reported yet, the technology's capabilities and potential deployment represent a credible risk of future harm. Therefore, this qualifies as an AI Hazard rather than an Incident, as the harm is plausible but not yet realized.

What is Who-Fi technology, and how is it used for spying? A threat looms over privacy

2025-07-27
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Who-Fi) that uses advanced AI techniques (transformer-based neural networks) to analyze WiFi signals for biometric identification and activity tracking without traditional sensors like cameras. This technology's use directly implicates privacy violations and surveillance harms, which are violations of human rights and privacy. Since the system is already capable of identifying and monitoring individuals, the harm is realized or ongoing, qualifying this as an AI Incident under the framework.

What is Who-Fi? It can identify any person without a camera; advanced technology, but a threat to privacy. Find out how

2025-07-28
Good News Today
Why's our monitor labelling this an incident or hazard?
Who-Fi is explicitly described as an AI system using advanced neural networks to analyze Wi-Fi signal disturbances for identification and tracking. Its use directly implicates privacy violations, a breach of fundamental rights, as it can track individuals covertly without their consent. Although the technology is currently in testing and not widely deployed, the article focuses on its capabilities and the privacy harm it causes or could cause. Since the article discusses the technology's ability to identify and track people, which constitutes a violation of privacy rights, and this harm is either occurring or imminent, this qualifies as an AI Incident under the framework.

How Wi-Fi can be used to track you, even through walls

2025-07-24
01net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (WhoFi) that uses AI algorithms to analyze Wi-Fi signals for biometric identification and tracking of individuals. The use of this AI system directly leads to a violation of privacy rights, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The system's capability to track people through walls and without devices is a clear harm to individuals' privacy and autonomy. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in surveillance and tracking.