AI-Generated YouTube Videos Used to Spread Malware and Steal Data

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybercriminals are increasingly using AI-generated personas and avatars in YouTube videos to trick users into downloading malware disguised as cracked software. This tactic, which has surged 200-300% month-on-month, enables the theft of personal and financial data, directly harming individuals through AI-enabled deception.[AI generated]
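The lure in these campaigns is a download link placed in the video description, typically a URL shortener or a direct archive download advertised as "cracked" software. A minimal sketch of the kind of link heuristic a defender or platform might apply; the shortener list, regex, and function name here are illustrative assumptions, not part of any tooling reported in these articles:

```python
import re
from urllib.parse import urlparse

# Hypothetical heuristic: flag description links matching common lures seen
# in these campaigns (URL shorteners, direct archive downloads). Real
# detection pipelines rely on threat-intelligence feeds, not string matching.
SHORTENERS = {"bit.ly", "cutt.ly", "rebrand.ly", "tinyurl.com"}
ARCHIVE_RE = re.compile(r"\.(zip|rar|7z)(\?|$)", re.IGNORECASE)

def suspicious_links(description: str) -> list[str]:
    """Return links in a video description that match the lure heuristics."""
    flagged = []
    for url in re.findall(r"https?://\S+", description):
        host = urlparse(url).netloc.lower()
        if host in SHORTENERS or ARCHIVE_RE.search(url):
            flagged.append(url)
    return flagged

demo = ("Download the full cracked version here: "
        "https://bit.ly/free-crack and the patch at "
        "https://example.com/patch.zip")
print(suspicious_links(demo))
# -> ['https://bit.ly/free-crack', 'https://example.com/patch.zip']
```

Heuristics like this produce false positives (legitimate channels also use shorteners), which is one reason the campaigns described here succeed at scale.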

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating realistic video personas to deceive users into downloading malware, which directly leads to harm (theft of personal and financial information). The AI-generated content is a key factor in the success of the malware distribution, making this an AI Incident due to realized harm caused by the AI system's use in malicious activity.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Safety; Accountability; Democracy & human autonomy

Industries
Digital security; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

AI-generated YouTube videos spreading info-stealing malware, Here's how

2023-03-14
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic video personas to deceive users into downloading malware, which directly leads to harm (theft of personal and financial information). The AI-generated content is a key factor in the success of the malware distribution, making this an AI Incident due to realized harm caused by the AI system's use in malicious activity.

No Paid Software Is Free! AI-Generated YouTube Videos Spreading Malware Through Links

2023-03-14
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and synthetic personas used by threat actors to spread malware. The malware leads to theft of sensitive information, which is a clear harm to individuals' security and privacy. The AI system's use in creating convincing fake personas and videos is pivotal in enabling this harm. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs facilitating malicious activity.

YouTube is next on the AI-generated malware scam list | Digital Trends

2023-03-13
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used maliciously to trick users into downloading malware that steals private data, which constitutes harm to individuals' property and privacy (data security). The AI system's role in generating deceptive videos and facilitating the scam is pivotal to the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in malware distribution and account takeover.

YouTube videos made with AI are spreading malware

2023-03-13
TechRadar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deceptive video content to trick users into downloading malware, which results in harm to individuals through theft of sensitive data. This meets the criteria for an AI Incident because the AI system's use directly leads to realized harm (the spread of malware and data theft). The involvement of AI-generated personas and videos is central to the incident, not merely background information. Therefore, this is classified as an AI Incident.

Hackers use AI videos to steal sensitive data: Here's how to stay vigilant?

2023-03-15
mint
Why's our monitor labelling this an incident or hazard?
The use of AI-generated videos to facilitate the scam directly contributes to the harm by making the malicious content more convincing and increasing the likelihood of users downloading the malware. The malware's data-stealing activity causes realized harm to users' personal and financial data, fitting the definition of an AI Incident where the AI system's use indirectly leads to harm to persons. Therefore, this event qualifies as an AI Incident.

Look out! Those AI-generated YouTube tutorials are actually spreading dangerous malware

2023-03-13
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used maliciously to spread malware, which directly leads to harm by compromising users' data and devices. The AI system's use in creating these videos is a contributing factor to the malware campaign's effectiveness and reach. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm involving data theft and malware infection.

PSA: Beware of these AI-generated YouTube videos that spread malware

2023-03-15
Android Authority
Why's our monitor labelling this an incident or hazard?
The presence of AI-generated videos spreading malware links directly leads to harm (theft of sensitive data) when users click on those links. The AI system's use in generating these videos is a contributing factor to the incident. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to people (theft of confidential data).

AI-generated personas are pushing malware on YouTube

2023-03-14
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic personas and videos used maliciously to deceive users into downloading malware. The AI-generated content is a key factor enabling the cybercriminals to spread information-stealing malware effectively. The resulting harm includes theft of sensitive personal and financial data, which is a direct harm to individuals and communities. The AI system's role is pivotal in the development and use of these deceptive videos, meeting the criteria for an AI Incident.

How hackers are using YouTube videos to steal user information

2023-03-13
Gadget Now
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated videos to facilitate the spread of malware that steals sensitive user data. The AI system's use is integral to the attack vector, as it creates convincing fake tutorial videos that trick users into installing malware. This leads directly to realized harm through data theft and privacy violations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons through malware infection and data theft.

AI-Generated YouTube Videos Are Being Used To Spread Malware - SlashGear

2023-03-15
SlashGear
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating videos that are used maliciously to deceive users into downloading malware, which causes harm to individuals by stealing passwords, credit card details, and files. This constitutes direct harm to people (harm to health or property through theft and fraud). Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated content facilitating malware distribution.

Cybercriminals are using AI-generated YouTube videos to spread malware

2023-03-15
Neowin
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated avatars to create deceptive videos that facilitate malware distribution. The AI system's use directly leads to harm by enabling cybercriminals to steal sensitive data, which is a violation of human rights and causes harm to individuals and communities. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of AI in malicious activities.

Malware threat from AI videos rises

2023-03-13
The Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used by malicious actors to distribute malware, which directly harms individuals by compromising their information security. The AI system's use in generating synthetic personas and videos is a contributing factor to the harm caused by the malware distribution. Therefore, this qualifies as an AI Incident due to the realized harm facilitated by AI-generated content.

New Report Detects 200-300% Jump In AI-generated YouTube Videos To Spread Stealer Malware

2023-03-13
BW Businessworld
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of AI-generated personas in videos that spread stealer malware, which directly harms users by tricking them into downloading malicious software. The AI system's role in generating synthetic personas is pivotal in making the videos appear trustworthy and thus facilitating the malware spread. This meets the definition of an AI Incident as the AI system's use has directly led to harm (malware infections) to persons and communities. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard or Complementary Information.

200-300% Increase in AI-Generated YouTube Videos to Spread Stealer Malware

2023-03-13
Dark Reading
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated personas in YouTube videos that are part of a malware distribution campaign. This AI involvement directly contributes to the harm by making the malicious content more convincing and widespread, leading to actual theft of sensitive information from victims. Therefore, this qualifies as an AI Incident because the AI system's use in generating deceptive content has directly led to harm to people through malware infection and data theft.

Scammers are using AI to create malware infected YouTube videos

2023-03-16
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (e.g., Synthesia, D-ID) used to generate avatars and voices in scam videos. The AI-generated content is used maliciously to deceive users into downloading malware, leading to direct harm through theft of personal and financial data. This meets the definition of an AI Incident because the AI system's use directly leads to harm to people and communities. The harm is realized, not just potential, as users are being actively targeted and victimized.

Threat actors turn to AI-generated YouTube videos to spread info stealers

2023-03-14
SC Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated personas used in YouTube videos to spread malware and phishing campaigns, which directly harms users by stealing information. The AI system's use in creating convincing digital humans and content is a key factor enabling the threat actors to deceive victims effectively. This direct link between AI use and realized harm to individuals' data security fits the definition of an AI Incident.

BEWARE! AI-generated YouTube videos are being used by hackers to steal information

2023-03-14
HT Tech
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated videos being used maliciously to trick users into installing malware that steals personal and financial data. This constitutes harm to individuals' privacy and security, which falls under harm to persons or groups. The AI system's use in generating convincing fake videos is a direct contributing factor to the harm. Therefore, this qualifies as an AI Incident.

AI-generated avatars tricking users into installing info-stealing malware

2023-03-15
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (AI-generated avatars and deepfake technology) being used maliciously to deceive users into downloading malware that steals sensitive information. The AI system's outputs (videos with AI avatars) are pivotal in tricking users, leading directly to harm (information theft). This meets the definition of an AI Incident because the AI system's use has directly led to harm to people (privacy and security breaches) and harm to property (digital assets). The involvement is in the use of AI systems for malicious purposes, causing realized harm, not just potential harm. Hence, the classification is AI Incident.

Watch Out! AI-Made YouTube Videos Can Spread Malware

2023-03-14
suara.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated personas used in videos to trick users into downloading malware, which results in harm to users' personal data and security. This constitutes direct harm caused by the use of an AI system (AI-generated content) in a malicious context. Therefore, this qualifies as an AI Incident due to realized harm (the spread of malware and theft of personal information) directly linked to AI system use.

Beware of YouTube Videos Containing Malware! : Okezone techno

2023-03-14
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos used maliciously to spread malware that steals personal information, which constitutes harm to individuals' data security and privacy. The AI system's use in creating realistic tutorial videos is a direct factor enabling the malware distribution and resulting harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in the malicious campaign.

How Can Malware Spread via YouTube Videos? Here's the Explanation

2023-03-14
PT. Kontan Grahanusa Mediatama
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated personas used in YouTube videos to deceive viewers into installing malware, which is a direct harm to users' devices and security. The AI system's use in generating deceptive content that leads to malware installation fits the definition of an AI Incident, as it directly leads to harm to persons. The malware distribution facilitated by AI-generated content is a clear example of AI misuse causing realized harm, not just a potential risk or general information. Therefore, this event qualifies as an AI Incident.

DANGER! Beware of Malware from YouTube Videos, Here's Where It Is Hidden - Bandungbarat

2023-03-14
suara.com
Why's our monitor labelling this an incident or hazard?
The use of AI to generate realistic personas that deceive users into clicking harmful links directly leads to harm by enabling malware infections and theft of personal data, which constitutes harm to individuals' rights and property. The AI system's use in this malicious context is a direct contributing factor to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Watch YouTube Carefully or Your Computer May Be Hacked! AI-Edited Videos Hide Malware That Steals Personal Data; Three Ways to Spot Them - 自由電子報 3C科技

2023-03-23
自由時報
Why's our monitor labelling this an incident or hazard?
The article explicitly describes generative AI systems being used maliciously to produce videos that contain links to malware, leading to unauthorized data theft from users. This is a direct harm to individuals' privacy and security, fitting the definition of an AI Incident. The AI system's use in generating deceptive content is central to the harm, and the malware infection and data breaches have already occurred, not just potential risks. Hence, this is an AI Incident rather than a hazard or complementary information.

Watching Videos Can Infect You Too! Data-Stealing Hackers Use AI to Auto-Generate Malicious YouTube Videos

2023-03-21
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Synthesia, PictoryAI, D-ID, etc.) used to automatically generate videos that contain malicious links leading to malware infections (Vidar, RedLine trojans). The AI-generated videos are instrumental in attracting victims to click on these links, directly causing harm through data theft and computer compromise. This meets the definition of an AI Incident because the AI system's use has directly led to harm to property and personal data. The involvement is through malicious use of AI-generated content, and the harm is realized, not just potential. Hence, the classification is AI Incident.

NCC-CSIRT Warns of Pirated YouTube Software-related Malware

2023-03-27
New Telegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated videos being used maliciously to distribute malware, which has directly led to realized harms including data theft and financial loss. The AI system's use in generating deceptive content is a direct contributing factor to these harms, fulfilling the criteria for an AI Incident.

You risk becoming victim of cybercriminal gangs if you go for pirated YouTube software -- NCC-CSIRT - National Daily Newspaper

2023-03-27
National Daily NG
Why's our monitor labelling this an incident or hazard?
The advisory explicitly states that AI-generated videos are being used maliciously to trick users into downloading malware and falling victim to cybercrime, resulting in realized harms including data theft and financial loss. The AI system's use in generating deceptive videos is a direct contributing factor to these harms, fulfilling the criteria for an AI Incident under the framework.

NCC raises alarm over AI-generated tutorial videos on YouTube and software stealing personal information

2023-03-27
Legit.ng
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred from the description of AI-generated tutorial videos used to deceive users. The event involves the use of AI in the creation of malicious content that directly leads to harm (data theft, financial loss, identity theft) to individuals. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of personal security and privacy, which are harms to persons and communities. The advisory and warnings are responses to an ongoing incident rather than a new hazard or complementary information.

NCC warns against YouTube-related malware - Punch Newspapers

2023-03-26
Punch Newspapers
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated videos are being used maliciously to trick users into downloading malware, which has directly led to significant harms including data theft and financial loss. This fits the definition of an AI Incident because the AI system's use in generating deceptive content is a contributing factor to the realized harms. The advisory and warnings confirm that these harms are occurring, not just potential. Therefore, this event qualifies as an AI Incident.

NCC warns against pirated YouTube software-related malware

2023-03-27
Nairametrics
Why's our monitor labelling this an incident or hazard?
The use of AI-generated videos by cybercriminals to distribute malware constitutes the use of an AI system that has directly led to significant harms including data theft and financial loss. The event involves the malicious use of AI-generated content causing realized harm, fitting the definition of an AI Incident.

NCC warns of malware-related AI-generated videos on YouTube

2023-03-27
Pulse Nigeria
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating deceptive tutorial videos that are used as a vector for malware attacks. These attacks have directly led to harms including data theft, financial loss, and identity theft, which are harms to persons and their property. The AI-generated videos play a pivotal role in enabling these harms by increasing the trustworthiness and effectiveness of the malicious content. Therefore, this event qualifies as an AI Incident.

NCC alerts public of pirated YouTube software-related malware

2023-03-26
Tribune Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the creation of deceptive videos that are used maliciously to cause harm. The harms described (data theft, financial loss, identity theft, system damage, reputation damage) fall under injury or harm to persons or groups and harm to property or communities. The AI-generated videos are directly used as a tool in the cybercrime, making the AI system's use a contributing factor to the realized harms. Therefore, this qualifies as an AI Incident.

NCC Warns Of Pirated YouTube Software-related Malware

2023-03-26
Leadership
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit in the creation of AI-generated videos used maliciously to trick users into downloading malware or entering sensitive information. The harms described—data theft, financial loss, identity theft, system damage, and reputational damage—are direct consequences of the AI system's use in this malicious context. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harms as defined in the framework.

NCC-CSIRT Alerts Nigerians To Pirated YouTube Software Malware - The Street Journal

2023-03-29
Breaking News
Why's our monitor labelling this an incident or hazard?
The advisory explicitly states that AI-generated videos are used maliciously to distribute malware and phishing scams, causing realized harms including data theft and financial loss. The AI system's use in generating deceptive videos is a direct contributing factor to these harms. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to significant harm to persons and organizations.