Michael Saylor Warns Bitcoin Community of Deepfake Scams

MicroStrategy executive chairman Michael Saylor warned that scammers are circulating AI-generated deepfake videos of him promising to double Bitcoin investments. The deepfakes prompt viewers to scan a QR code and send BTC to criminals. His team removes about 80 fake videos daily and urges users to verify claims before sending funds. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated deepfake videos used in scams that have caused actual financial harm to victims by tricking them into sending Bitcoin to scammers. The AI system's use in creating realistic fake videos is central to the harm occurring. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial loss) and harm to communities (scam victims). [AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Respect of human rights

Industries
Media, social platforms, and marketing; Digital security; Financial and insurance services

Affected stakeholders
Consumers; Business

Harm types
Economic/Property; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Michael Saylor Alerts Bitcoin Community Amid Rising Tide of Deepfake Scams | Altcoin MicroStrategy | CryptoRank.io

2024-01-16
CryptoRank
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used in scams that have caused actual financial harm to victims by tricking them into sending Bitcoin to scammers. The AI system's use in creating realistic fake videos is central to the harm occurring. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial loss) and harm to communities (scam victims).

Americans lose billions to scams featuring celebs like Taylor Swift, Oprah

2024-01-16
Newsweek
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology and synthetic voice generation) used maliciously to create fraudulent celebrity endorsements that have directly led to billions of dollars in financial harm to victims. The harms include injury to individuals' financial health and erosion of trust in digital content, which fits the definition of harm to communities and individuals. The AI system's use is central to the scam's success, making it an AI Incident rather than a hazard or complementary information. The article also mentions ongoing legislative responses, but the primary focus is on the realized harm caused by AI misuse.

Michael Saylor Sounds Alarm on Deepfake Bitcoin Scams - Decrypt

2024-01-15
Decrypt
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered deepfake scams that impersonate public figures to deceive people into sending cryptocurrency, which constitutes direct harm to individuals' property (financial loss). The AI system's role in generating convincing fake videos is pivotal to the scam's success. The harm is realized and ongoing, not merely potential. Hence, this is an AI Incident involving the use of AI systems (deepfake generation) leading to violations of property rights and financial harm to victims.

Michael Saylor Alerts Bitcoin Community Amid Deepfake Scams

2024-01-16
cryptonews.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos that are used maliciously to deceive and scam individuals, causing direct financial harm. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized (scams have occurred), not just potential. Therefore, this is classified as an AI Incident due to direct harm to people (financial loss) caused by AI-generated content.

Michael Saylor Takes Down 80 AI-Generated Deepfake Videos of Himself Every Day

2024-01-15
CryptoPotato
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos being used to scam people out of Bitcoin, which is a direct harm caused by the AI system's outputs. The harm is financial theft and deception, which fits under harm to property and communities. The AI system's use is central to the incident, as the deepfakes enable the scams. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malicious use.

AI Misinformation and Deepfake Scams Take Over Crypto

2024-01-16
BeInCrypto
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used to scam people out of Bitcoin, which constitutes direct harm to individuals (financial harm) and harm to communities through misinformation and fraud. The AI system's role in generating these deepfakes is pivotal to the incident. Therefore, this qualifies as an AI Incident. Other parts of the article discussing OpenAI's efforts to combat misinformation and calls for regulation are complementary information but do not overshadow the primary incident of realized harm from AI misuse.

Saylor Sounds Alarm: MicroStrategy CEO Battles Deepfake YouTube Epidemic - Blockonomi

2024-01-15
cryptodaily.co.uk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake videos that impersonate individuals to defraud victims, which directly leads to harm (financial loss) to people targeted by these scams. The harm is ongoing and widespread, as indicated by the large number of fake videos taken down daily and the increasing prevalence of such scams. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and communities through fraudulent activity. The article does not merely warn of potential harm or discuss responses but reports on active, realized harm caused by AI-generated content.

Michael Saylor Takes Down 80 AI-Generated Deepfake Videos of Himself Every Day

2024-01-15
cryptodaily.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos being used to promote Bitcoin scams, causing direct financial harm to victims who are tricked into sending cryptocurrency to scammer-controlled addresses. The AI system's role in generating these deceptive videos is pivotal to the harm. The harm is realized and ongoing, with about 80 such videos being removed daily, indicating active and widespread misuse. This fits the definition of an AI Incident as the AI system's use has directly led to harm to people (financial injury) and harm to communities (scam proliferation).

Michael Saylor Warns of AI Deepfake Scams in Crypto

2024-01-16
cryptodaily.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used to impersonate individuals and scam people in the cryptocurrency community, leading to financial harm and deception. The harm is realized, not just potential, as scammers actively use these AI deepfakes to defraud people. This fits the definition of an AI Incident because the AI system's use directly leads to harm to people and communities. The warnings and responses by the affected parties do not negate the fact that harm is occurring.

Michael Saylor Takes Down 80 AI-Generated Deepfake Videos of Himself Every Day | Scams Michael Saylor | CryptoRank.io

2024-01-15
CryptoRank
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake videos being used maliciously to scam people out of their Bitcoin, which is a direct harm to individuals' property and financial well-being. The AI system's role in creating convincing fake videos is pivotal to the scam's success. The harm is realized and ongoing, as evidenced by the daily removal of about 80 such videos. This fits the definition of an AI Incident, as the AI system's use has directly led to harm (financial scams).

Deepfake Dangers: Michael Saylor Alerts Followers to Emerging Bitcoin Scams

2024-01-16
blockchain.news
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos that impersonate a public figure to promote fraudulent Bitcoin schemes. This use of AI has directly caused harm to individuals financially, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as scammers continue to produce such videos and victims are deceived. The article focuses on the harm caused by the AI-generated content rather than just potential or future risks, so it is not merely a hazard or complementary information.

Michael Saylor Raises the Alarm on Deepfake Threats

2024-01-16
Crypto News Australia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos being used to impersonate Michael Saylor and scam people with promises to double their Bitcoin, a form of financial fraud that harms individuals. The AI system's use in creating these videos is a direct factor in the harm, fulfilling the criteria for an AI Incident involving harm to communities and individuals through deception and fraud.

Saylor Sounds Alarm: MicroStrategy CEO Battles Deepfake YouTube Epidemic - Blockonomi

2024-01-15
Blockonomi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, namely algorithms that generate deepfake videos impersonating individuals to commit fraud. The harm is direct and materialized, as victims are deceived into sending cryptocurrency to scammers, constituting financial harm and fraud. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial injury) and harm to communities (fraud and deception). The article does not merely warn of potential harm but reports ongoing incidents and active scams, confirming realized harm. Therefore, the classification is AI Incident.

MicroStrategy's Michael Saylor Warns of Deepfake Scams

2024-01-15
COINTURK NEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos being used to scam people by impersonating public figures and soliciting Bitcoin transfers. This is a direct use of AI systems (generative AI for deepfakes) causing realized harm (financial scams). The harm is materialized and ongoing, meeting the criteria for an AI Incident. The warnings and expert statements reinforce the direct link between AI use and harm. Hence, the classification is AI Incident.

AI Misinformation and Deepfake Scams Take Over Crypto

2024-01-16
TradingView
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake technology) being used maliciously to create fake videos that directly cause financial harm to victims, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as victims are duped into sending cryptocurrency. The broader discussion of AI misinformation risks, OpenAI's mitigation efforts, and job disruption forecasts do not themselves describe new incidents or hazards but provide important complementary context and governance responses. Therefore, the primary classification is AI Incident with complementary information included in the narrative.