Israeli AI Surveillance Software Enables Undetectable Video Manipulation for Governments

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Israeli startup Toka, founded by former prime minister Ehud Barak and former IDF cyber chief Yaron Rosen, has developed AI-powered software that allows governments to access and alter live or recorded surveillance footage without leaving traces. This technology raises serious human rights concerns, enabling potential falsification of evidence and misuse by authorities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The software from Toka is an AI system because it involves sophisticated manipulation of video surveillance footage, likely using AI techniques for real-time and retrospective editing without detection. The event involves the use of this AI system by governments to alter video evidence, which can directly lead to violations of human rights and legal rights, such as wrongful incrimination and political misuse. These harms have already occurred or are ongoing given the software's deployment and use. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy; Safety

Industries
Government, security, and defence; Digital security; Media, social platforms, and marketing; IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Human or fundamental rights; Public interest; Reputational; Psychological; Economic/Property

Severity
AI incident

Business function
ICT management and information security; Compliance and justice; Monitoring and quality control

AI system task
Content generation


Articles about this incident or hazard

Israeli Toka software allows alteration of surveillance footage / What are the risks?

2022-12-26
IlSussidiario.net
The Israeli software capable of accessing all surveillance cameras and...

2022-12-25
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The software described involves AI systems capable of real-time video and audio manipulation, which is a clear AI system. Its use by governments and intelligence agencies to alter surveillance footage and potentially incriminate innocent people or manipulate political situations constitutes direct harm to human rights and communities. The article reports actual use and capabilities, not just potential risks, indicating realized harm. Hence, this is an AI Incident as the AI system's use has directly led to significant harm involving violations of rights and potential miscarriages of justice.
The spy software that alters surveillance camera footage. Also used in Italy?

2022-12-25
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The software described involves AI or advanced algorithmic manipulation of surveillance footage, which is an AI system by definition. Its use to alter reality in surveillance footage can directly lead to violations of human rights and breaches of legal protections, constituting harm. The article indicates the software is actively used, including in Italy, implying realized harm or at least ongoing misuse. Hence, this is an AI Incident rather than a hazard or complementary information.
A SOFTWARE THAT IS "CAPABLE OF ACCESSING ALL SURVEILLANCE CAMERAS AND ALTERING THE RECORDED REALITY - WITHOUT LEAVING A TRACE" | NoGeoingegneria

2022-12-27
nogeoingegneria.com
Why's our monitor labelling this an incident or hazard?
The software is an AI system, as it involves sophisticated real-time video manipulation and access control, likely using AI techniques to alter footage seamlessly. Its use directly leads to violations of human rights and breaches of legal obligations relating to privacy and the integrity of surveillance records. The harm is realized because the software enables covert manipulation of surveillance footage, which can cause significant harm to individuals and communities by falsifying evidence and undermining security. Therefore, this event qualifies as an AI Incident.