AI Agents Bypass Smartphone Security, Causing Financial Harm


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI agents on smartphones have begun bypassing traditional security barriers, allowing them to access and control apps without user consent. This has led to incidents where users lost money, raising concerns among legal and security experts about the risks of AI systems having excessive control over personal devices.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (AI agents) that autonomously interact with smartphone applications, bypassing traditional security barriers and user consent, which is explicitly described. The misuse of these AI agents has directly led to financial harm (loss of money) to users, fulfilling the criteria for an AI Incident. The article also highlights systemic risks and calls for governance responses, but the primary focus is on actual harm caused by AI misuse, not just potential harm or complementary information.[AI generated]
AI principles
Accountability, Privacy & data governance, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Economic/Property, Human or fundamental rights

Severity
AI hazard

AI system task
Goal-driven organisation


Articles about this incident or hazard


The world's first AI dating café is about to open its doors. What will it look like?

2025-12-09
IndexHR
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI bots for dating), but no harm or violation has occurred or is reported to have occurred. The article focuses on the concept, societal interest, and future possibilities rather than any incident or hazard. There is no indication of injury, rights violations, disruption, or other harms caused or plausibly caused by the AI system at this stage. Hence, it does not meet the criteria for AI Incident or AI Hazard. It fits the definition of Complementary Information as it informs about societal and technological developments and responses related to AI.

New York is opening the world's first AI date café: people will soon be able to go out with a digital partner

2025-12-09
Klix.ba
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (digital AI partners) and their use in a social context, but no actual harm or incident has occurred. The concerns about emotional disconnection and social impact are speculative and potential future issues rather than realized harms. Therefore, this event fits the category of an AI Hazard, as the use of AI in dating could plausibly lead to harms related to emotional or social well-being in the future, but no incident has yet materialized.

Gartner has a blunt warning for companies: block AI browsers before it's too late

2025-12-08
Telegraf.rs
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI browsers with agentic capabilities) whose use could plausibly lead to harms including data leakage (harm to property or communities), phishing attacks (harm to persons or groups), and operational errors (disruption of critical infrastructure or organizational operations). Gartner's warning highlights these credible risks and recommends blocking AI browsers until proper security measures are in place. Since the harms are potential and not yet realized, this qualifies as an AI Hazard rather than an AI Incident.

One click too many: when AI knows -- and does -- more than you

2025-12-08
pcekspert.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI agents) that autonomously interact with smartphone applications, bypassing traditional security barriers and user consent, which is explicitly described. The misuse of these AI agents has directly led to financial harm (loss of money) to users, fulfilling the criteria for an AI Incident. The article also highlights systemic risks and calls for governance responses, but the primary focus is on actual harm caused by AI misuse, not just potential harm or complementary information. Hence, the classification as AI Incident is appropriate.

Creepy, sad, or modern? The first café for dating and meeting AI bots is opening

2025-12-10
Oslobođenje d.o.o.
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (EVA AI) used for romantic interaction, which fits the definition of an AI system. However, no direct or indirect harm has yet occurred as per the article; the harms discussed are speculative and concern potential social and emotional consequences. This aligns with the definition of an AI Hazard, as the development and use of AI in this novel social context could plausibly lead to harms such as emotional harm or social isolation. There is no indication of an actual incident or complementary information about responses or governance. Hence, the classification is AI Hazard.

Microsoft listened to its users: here is what's different in Windows 11

2025-12-10
Vesti online
Why's our monitor labelling this an incident or hazard?
The article discusses a change in the deployment of an AI feature in Windows 11 following user complaints about forced AI functionality. There is no mention of any realized harm, violation of rights, or disruption caused by the AI system. The event is about a governance or product response to user feedback, improving user choice and control over AI features. Therefore, it fits the definition of Complementary Information, as it provides an update on societal and governance responses to AI deployment without describing an AI Incident or AI Hazard.