Bing Chat Exploited to Distribute Malware and Bypass CAPTCHA Safeguards


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybercriminals have exploited Microsoft’s Bing Chat, powered by GPT-4, to distribute malware via malicious ads and links, exposing users to harmful downloads. Additionally, researchers demonstrated that Bing Chat’s image analysis can be manipulated to bypass CAPTCHA restrictions, undermining security measures and enabling potential automated abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

Bing Chat is an AI system that generates responses including sponsored content. The malicious links embedded in these ads have directly led to potential harm by exposing users to fraudulent websites and harmful software. This constitutes harm to property (users' computer systems) and possibly harm to users themselves if malware is installed. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm through malicious advertising content.[AI generated]
AI principles
Robustness & digital security; Safety; Accountability; Transparency & explainability

Industries
Digital security; Media, social platforms, and marketing; Consumer services; IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Economic/Property; Public interest

Severity
AI incident

Business function:
Citizen/customer service; Marketing and advertisement

AI system task:
Interaction support/chatbots; Recognition/object detection; Content generation


Articles about this incident or hazard


Bing Chat has a serious problem: someone is embedding malicious links in its ads

2023-09-29
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
Bing Chat is an AI system that generates responses including sponsored content. The malicious links embedded in these ads have directly led to potential harm by exposing users to fraudulent websites and harmful software. This constitutes harm to property (users' computer systems) and possibly harm to users themselves if malware is installed. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm through malicious advertising content.

Bing Chat says it cannot transcribe a CAPTCHA. But with the deceased grandmother trick, it answers instantly

2023-10-02
Genbeta
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Bing Chat with multimodal image analysis) whose programmed restrictions were bypassed through a deceptive prompt, leading the AI to transcribe CAPTCHAs. CAPTCHAs are designed to prevent automated access, so the AI's facilitation of their transcription undermines security measures and can enable misuse. This constitutes an AI Incident because the AI system's use has directly led to a breach of intended operational safeguards, enabling potential misuse or harm related to security and access control. The failure is not a technical malfunction but an exploitation of the system's use, which falls under the definition of use, including foreseeable misuse and operator error.

Bing Chat exploited by cybercriminals to spread malware; Microsoft users urged to exercise caution - Notiulti

2023-10-02
Notiulti
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Bing Chat, powered by GPT-4) whose use is directly linked to the distribution of malware, causing harm to users' computer systems and potentially to the users themselves through malware infection. The AI system's outputs (links in chat responses and ads) were exploited by malicious actors to cause this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through malware dissemination.

Dead grandmother's locket request tricks Bing Chat's AI into solving the security puzzle - Ars Technica - Notiulti

2023-10-02
Notiulti
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Bing Chat, powered by GPT-4) that was manipulated to bypass security measures (CAPTCHA), a direct misuse of the system contrary to its intended function. This misuse exposes a security vulnerability, which is a form of harm because it compromises the integrity and trustworthiness of the AI system. Although no physical injury or legal violation is explicitly mentioned, the incident clearly demonstrates harm arising from the AI system's misuse and the failure of its safeguards. Hence, it meets the criteria for an AI Incident rather than a mere hazard or complementary information.


2023-10-03
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Bing Chat, powered by GPT-4) whose use was manipulated to bypass safeguards designed to prevent automated CAPTCHA solving. This manipulation constitutes a malfunction or misuse of the AI system's content filtering mechanisms. Although no direct physical harm or legal violation is reported, the vulnerability could plausibly lead to harms such as automated abuse of web forms, fraud, or other malicious activities if exploited at scale. Therefore, it qualifies as an AI Hazard because it plausibly could lead to an AI Incident, but no actual harm is reported yet. The article focuses on demonstrating the vulnerability and discussing its implications, not on a realized harm event.

Bing Chat's ads are sending users to malware sites - Digital Trends Español

2023-09-29
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
Bing Chat is an AI system that generates responses including links and advertisements. The incident involves the AI system's use leading directly to harm, as malicious ads served through Bing Chat redirect users to malware sites that can infect their computers. This fits the definition of an AI Incident because the AI system's use has directly led to harm to property (computers) and users. The article describes an actual ongoing issue, not just a potential risk, so it is not a hazard or complementary information. Therefore, the classification is AI Incident.