Israeli AI-Powered Xaver 1000 Enables 'See-Through-Walls' Surveillance, Raising Privacy Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Israeli company Camero-Tech has developed the Xaver 1000, an AI-powered device that uses radar and advanced algorithms to detect and visualize people and objects through walls in real time. While intended for military and law enforcement, its capabilities raise significant concerns about privacy violations and potential misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI-based tracking algorithm, confirming the involvement of an AI system. The technology's intended use in tactical operations and its ability to detect detailed information behind walls imply potential for misuse or harm, such as violations of privacy or human rights. However, since no actual harm or incident is reported, and the article focuses on the technology's unveiling and capabilities, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but has not yet done so.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Accountability
Transparency & explainability
Robustness & digital security
Democracy & human autonomy

Industries
Government, security, and defence
Robots, sensors, and IT hardware
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights
Public interest
Psychological

Severity
AI hazard

Business function:
Monitoring and quality control
Compliance and justice

AI system task:
Recognition/object detection


Articles about this incident or hazard


New Israeli military technology allows operators to 'see through walls'

2022-06-26
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI-based tracking algorithm, confirming the involvement of an AI system. The technology's intended use in tactical operations and its ability to detect detailed information behind walls imply potential for misuse or harm, such as violations of privacy or human rights. However, since no actual harm or incident is reported, and the article focuses on the technology's unveiling and capabilities, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but has not yet done so.

Israel's AI-powered system that can 'SEE' through walls

2022-06-27
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The Xaver 1000 system is explicitly described as AI-powered and used for detecting and classifying targets behind walls, which involves AI algorithms for image processing and target identification. Its deployment in military and law enforcement contexts implies potential for harm, such as injury or violation of rights, if misused or malfunctioning. However, the article does not report any actual harm or incident resulting from its use, only describing the capabilities and potential applications. The Capella-2 satellite system also uses advanced radar imaging but is stated not to image inside homes, and no harm is reported. Therefore, the event represents a plausible future risk (AI Hazard) rather than a realized harm (AI Incident).

Israeli military can 'see through walls' with radar device, insider claims

2022-06-28
The Sun
Why's our monitor labelling this an incident or hazard?
The Xaver 1000 is an AI system as it uses AI-powered software to track and map objects and people through walls. However, the article does not describe any realized harm or incident caused by the device. It discusses potential uses and benefits, implying possible future impacts but without evidence of harm or misuse. Therefore, this event qualifies as an AI Hazard because the technology could plausibly lead to harms related to privacy violations, misuse in military or law enforcement contexts, or other risks, but no incident has yet occurred. It is not Complementary Information since it is not updating or responding to a prior incident, nor is it unrelated as it clearly involves AI technology with potential implications.

New AI-based military technology will allow soldiers to see through walls

2022-06-27
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for live tracking and detailed imaging behind walls, which is a clear AI system involvement. The system is intended for military and law enforcement use, where misuse or malfunction could lead to injury, violations of rights, or other harms. Although no actual harm or incident is described, the potential for such harm is credible and significant given the nature of the technology and its applications. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its implications are central to the report.

The Xaver 1000 'Sees Through Walls' and is Made for the Israeli Army

2022-06-29
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Xaver 1000) used for detecting people through walls, which involves AI algorithms for target tracking and visualization. Although no actual harm or incident is reported, the system's military application and capabilities to identify and measure targets behind walls imply a credible potential for harm, including injury or violation of human rights during operations. The event concerns the development and deployment of this AI system, which could plausibly lead to AI incidents in the future. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Israeli Tech Start-up Camero-Tech's AI-Powered Xaver 1000 Allows Operators to See Through Walls

2022-06-26
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-based tracking system integrated into the Xaver 1000, qualifying it as an AI system. The event concerns the use and deployment of this AI system but does not describe any realized harm or incident. Given the nature of the technology—advanced surveillance and detection through walls with real-time tracking—there is a credible potential for misuse or harm in the future, such as violations of privacy, human rights, or escalation in military conflicts. Since no harm has yet occurred but plausible future harm exists, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

This Startup Claims That Its New Military Tech Can See Through Walls

2022-06-28
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The device clearly involves AI systems (AI algorithms for detecting and distinguishing objects behind walls). The event concerns the use and development of this AI system. Although no direct harm is reported, the technology's capability to see through walls and detect people raises credible risks of privacy violations and surveillance abuses, which could lead to violations of human rights or harm to communities. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to significant harms in the future.

New Israeli military technology allows operators to 'see through walls'

2022-06-26
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the AI-based tracking algorithm in the Xaver 1000) used for detecting people and objects behind walls. Although no direct harm or incident is reported, the technology's nature and intended use in military and law enforcement contexts plausibly could lead to harms such as violations of human rights or harm to communities. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future.

Israeli AI-Based System XAVER 1000 Can See Through Walls

2022-06-28
DailyAlts
Why's our monitor labelling this an incident or hazard?
The system clearly involves AI in processing radar data to generate detailed images and classifications of objects behind walls. While the article does not report any harm caused by the system's use, the technology's deployment in military and law enforcement contexts implies potential for significant harm if misused or malfunctioning, such as privacy violations or misuse in conflict. However, since no actual harm or incident is reported, and the article mainly describes the system's capabilities and intended use, this constitutes an AI Hazard due to the plausible risk of harm from its use or misuse in sensitive operations.

Israel's army created a device to see through walls

2022-06-30
El Tiempo
Why's our monitor labelling this an incident or hazard?
The device described is an AI-enabled system (radar 3D imaging with real-time detection and classification of human postures and objects) used for military and police surveillance. While the article does not report any specific harm caused by the device, its capabilities and intended use imply potential risks such as privacy violations, unauthorized surveillance, or misuse leading to harm. However, since no actual harm or incident is reported, and the article focuses on the technology's features and authorized use, this event constitutes an AI Hazard due to the plausible future risks associated with such surveillance technology.

Israel's army is using an AI that can 'see' through walls

2022-06-30
MuyInteresante.es
Why's our monitor labelling this an incident or hazard?
The described AI system (Xaver 1000) is explicitly mentioned as using algorithms to detect and classify living beings through walls, which qualifies as an AI system. Its use by the military to identify targets directly impacts operational decisions and could lead to harm or protection of persons. Given the military context and the system's capability to identify and track humans, the AI's use is directly linked to potential harm or protection in conflict situations. Therefore, this event involves the use of an AI system that has directly or indirectly led to or could lead to harm, qualifying it as an AI Incident.

A new Israeli military technology allows 'seeing' through walls

2022-06-27
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using an AI-based tracking algorithm for detecting people through walls. The system's use in military and law enforcement contexts implies potential for misuse or harm, such as privacy violations or escalation in conflict scenarios. However, the article does not mention any realized harm or incident caused by the system so far. Given the nature and intended use of the AI system, it plausibly could lead to harms defined under the AI Hazard category, but no direct or indirect harm has yet occurred or been reported. Thus, the event is best classified as an AI Hazard.

Israel's AI-powered radar sees through walls

2022-07-01
Agencia de Noticias "News Front" España
Why's our monitor labelling this an incident or hazard?
The article details a new AI system (Xaver 1000) that uses AI to track and distinguish living targets through walls. However, it does not report any actual harm or incidents resulting from its use, nor does it describe any realized or imminent harm. The system's intended use in military and law enforcement contexts implies potential future risks, but no specific plausible harm or incident is described. Therefore, this event is best classified as Complementary Information, providing context on AI technology development and its potential applications without reporting an AI Incident or AI Hazard.