Ferrari Executive Thwarts Deepfake Scam Attempt


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Ferrari executive was targeted by a scammer using deepfake technology to impersonate CEO Benedetto Vigna's voice. The scam involved a fake acquisition deal communicated via WhatsApp and a phone call. The executive's suspicion led to a verification question, which the scammer couldn't answer, preventing potential financial harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the use of an AI system (deepfake voice synthesis) in an actual malicious scenario—impersonation of an executive to attempt corporate infiltration and fraud. This constitutes an AI Incident because the AI’s outputs caused (or directly facilitated) attempted harm (fraud, potential loss of confidential information).[AI generated]
AI principles
Accountability, Privacy & data governance, Robustness & digital security, Safety, Transparency & explainability

Industries
Mobility and autonomous vehicles, Digital security, Financial and insurance services

Affected stakeholders
Workers, Business

Harm types
Economic/Property, Reputational, Psychological

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


How a Ferrari executive prevented a multi-million dollar deepfake scam

2024-07-29
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves a deepfake AI system used in a scam attempt—a clear case of potential harm from AI—but no actual loss or injury transpired because the executive recognized the deception and halted the transaction. This constitutes a near-miss scenario of a plausible AI-enabled fraud, fitting the definition of an AI Hazard rather than an Incident.

Deepfake scammers use Ferrari CEO's voice to dupe staff

2024-07-30
Perth Now
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system (deepfake voice synthesis) in an actual malicious scenario—impersonation of an executive to attempt corporate infiltration and fraud. This constitutes an AI Incident because the AI’s outputs caused (or directly facilitated) attempted harm (fraud, potential loss of confidential information).

Ferrari Narrowly Dodges Deepfake Scam Simulating Deal-Hungry CEO

2024-07-26
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake voice-cloning software) was used to impersonate a high-profile executive in an attempted scam. No actual harm materialized; this was effectively a 'near miss', but it clearly shows a plausible pathway to significant financial harm in future uses. Thus it constitutes an AI Hazard rather than an Incident or merely complementary information.

Hello Ferrari: CEO's voice deep faked for phonecall to scam supercar maker

2024-07-30
https://auto.hindustantimes.com
Why's our monitor labelling this an incident or hazard?
Deep-fake voice cloning is an AI system, and its use here directly enabled a fraud attempt against Ferrari. Although the scam was detected before loss occurred, the incident involved the AI system’s misuse to cause harm, meeting the criteria for an AI Incident.

This 1 question saved Ferrari from a big deepfake scam: 'I need to identify you'

2024-07-30
Hindustan Times
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake voice tools) was maliciously used to impersonate a high-profile executive, posing a credible risk of financial or reputational harm. Although no loss materialized, this constitutes a near-miss where AI could have directly enabled fraud, so it qualifies as an AI Hazard rather than an incident.

Someone Deepfaked Ferrari CEO's Voice and Tried to Scam the Company

2024-07-29
Motor1.com
Why's our monitor labelling this an incident or hazard?
A deepfake voice model was used maliciously to impersonate a CEO and attempt deception. No actual harm (e.g. theft) materialized, but the incident demonstrates how AI-enabled voice cloning could plausibly lead to successful scams, fitting the definition of an AI Hazard rather than an Incident.

Deepfake Scammers Fail To Infiltrate Ferrari After Exec Got Suspicious And Asked A Question Only The CEO Knew

2024-07-29
BroBible
Why's our monitor labelling this an incident or hazard?
The incident involves the use (and potential misuse) of an AI system (deepfake generation) in a near-miss fraud. No actual loss occurred, but it demonstrates a credible and direct risk of financial harm and identity deception via AI—characteristic of an AI Hazard.

Ferrari was about to fall victim to deepfakes, lose millions. Here's how an executive stopped it

2024-07-29
Firstpost
Why's our monitor labelling this an incident or hazard?
The article describes a malicious deepfake voice-cloning attempt to impersonate Ferrari's CEO and defraud the company. Although AI was used to carry out the scam, the fraud was averted and no actual harm materialized. This constitutes a credible AI-related threat that could have led to an incident, so it is classified as an AI Hazard.

Ferrari Exec Nearly Duped By Deepfake Voice Of CEO Benedetto Vigna In Scam Attempt

2024-07-29
Jalopnik
Why's our monitor labelling this an incident or hazard?
The event involves the deployment of an AI system (deepfake voice synthesis) in a real-world scam attempt aimed at causing financial harm. This is not merely a potential risk—AI was weaponized in a concrete incident, meeting the criteria for an AI Incident.

Ferrari Exec Suspects Call From CEO Is Deepfaked, Asks Question Only He Would Know the Answer To

2024-07-29
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake voice generator) being actively used to impersonate Ferrari’s CEO and trick an executive into divulging sensitive corporate information and potentially authorizing fraudulent transactions. This malicious deployment directly led to an attempted scam, constituting an AI Incident under the framework’s definition of realized harm (even if the scam was ultimately thwarted).

Ferrari Exec Outsmarts Deepfake Scammers

2024-07-29
COED
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used in a malicious scam attempt. Because the scam was detected and stopped, no actual harm (financial loss or other damage) occurred, but the attempt demonstrates a credible risk of significant harm from AI misuse. It therefore qualifies as an AI Hazard rather than an AI Incident.

Ferrari Thwarted an AI Deepfake Scammer Posing as Its CEO With an Age-Old Trick

2024-07-29
The Drive
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake voice technology) was misused in a direct attempt to cause financial harm through AI-enabled impersonation, but the harm did not materialize thanks to the executive's intervention. Because the attempt could plausibly have caused significant financial loss had it succeeded, the event qualifies as an AI Hazard.

The high-ranking executive was clever enough to ask a personal question for identification purposes

2024-07-29
Carscoops
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems for voice cloning and deepfake generation, used to impersonate a CEO and attempt a fraudulent transaction. Although no actual financial loss occurred, the AI system's use directly led to an attempted fraud, itself a form of harm (potential financial loss and violation of trust). Because the malicious use of AI was central to the event, it qualifies as an AI Incident.

Ferrari CEO Deepfake Shows Growing Threat of AI Scams Impersonating ...

2024-07-27
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The event involves deepfake AI technology used to impersonate a high-profile individual. The AI system's use directly enabled an attempted scam, constituting harm to the targeted individual and potentially the corporation. Although the scam was detected and thwarted, the AI system's clear and direct role in the fraudulent attempt qualifies this as an AI Incident.

Alberto Felice de Toni, the mayor of Udine, 'unmasks' an AI scam against Ferrari: 'Human intelligence beats artificial'

2024-07-29
Gazzettino
Why's our monitor labelling this an incident or hazard?
The article describes a concrete instance of AI misuse: a deepfake of Ferrari’s CEO created via AI software was used to attempt a scam. This misuse directly led to an attempted harm (fraud), making it an AI Incident rather than a mere hazard or general AI news.

Ferrari, the CEO 'cloned' by digital impostors

2024-07-29
Quotidiano Libero
Why's our monitor labelling this an incident or hazard?
Criminals directly misused AI voice-synthesis software to simulate CEOs and CFOs, leading to attempted (and in the Hong Kong case actual) financial harm. This misuse of an AI system caused clear material damage, meeting the criteria for an AI Incident.

How Ferrari saved itself from a scam carried out with AI and deepfakes

2024-07-29
Hardware Upgrade - Il sito italiano sulla tecnologia
Why's our monitor labelling this an incident or hazard?
Deepfake technology powered by AI was used to impersonate Ferrari’s CEO in calls and messages, aiming to deceive an executive into a fraudulent transaction. No loss occurred as the scam was uncovered, making this a near-miss scenario where the AI misuse could have led to real harm. Such events are best classified as AI Hazards, since they demonstrate credible potential for significant financial damage without having materialized.

The story of Ferrari and the scam carried out with artificial intelligence - Il Post

2024-07-26
Il Post
Why's our monitor labelling this an incident or hazard?
Scammers used AI systems to generate convincing deepfake calls to trick employees into signing agreements and transferring funds, directly causing financial harm (e.g., a €24 million loss in Hong Kong and an attempted scam at Ferrari). This constitutes an AI Incident because the misuse of the AI system directly led to harm.

Udine mayor 'unmasks' scam against Ferrari, 'brain beats AI' - Mondo Motori - Ansa.it

2024-07-29
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to impersonate a person and attempt a fraudulent act against Ferrari, a clear case of AI misuse. The AI system's involvement is explicit and central, and even though the fraud was foiled, its use directly led to a harmful event (attempted fraud and deception), qualifying this as an AI Incident.

Scam against Ferrari foiled by mayor De Toni's book

2024-07-29
Friuli Oggi - Il quotidiano del Friuli | Notizie dal Friuli
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used maliciously to impersonate a CEO and attempt fraud, a clear misuse of AI. Because the fraud was detected and stopped, no actual harm occurred, but the misuse could plausibly have led to harm, so the event qualifies as an AI Hazard rather than an AI Incident.

One ring, one OK, and your job is on the line: what the CEO scam now hitting companies is

2024-07-29
Fanpage
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake voice generation) used maliciously to impersonate a CEO and attempt fraud, directly creating a risk of financial loss and reputational damage for the company and its employees. Although no financial loss occurred in this instance, the attempt demonstrates a plausible path to harm had it succeeded, so the event qualifies as an AI Hazard rather than an AI Incident.

Ferrari CEO targeted by hackers who cloned his voice

2024-07-28
Il Resto del Carlino
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake voice cloning) used in a malicious attempt to deceive and cause financial harm, which was averted by the targeted executive's vigilance. Because no actual harm occurred despite a credible risk of harm, this qualifies as an AI Hazard rather than an AI Incident; the article focuses on the attempted attack, its prevention, and the plausible future harm if such attacks succeed.

The scam foiled by Ferrari foreshadows the terrible future that awaits us

2024-07-29
DDay.it
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake audio used to impersonate the CEO, which was central to the attempted fraud. Although the fraud was prevented, the AI system's malicious use directly led to a serious harm attempt, qualifying the event as an AI Incident. The article also discusses the broader societal risk of such AI-enabled frauds becoming widespread, but the primary event is a concrete AI Incident.

Scam foiled in Maranello: scammer calls a Ferrari executive with a deepfake of the CEO, but he exposes it with a ruse

2024-07-28
Open
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake technology) was explicitly used to impersonate the CEO with the malicious intent of causing financial harm. Because the executive detected the fraud, no actual harm occurred, but the attempt demonstrates a credible, imminent risk of harm from AI misuse, so the event qualifies as an AI Hazard rather than an AI Incident.

Ferrari, a deepfake attempts a multimillion-euro scam. Here is the trick the manager used to unmask the fake CEO

2024-07-26
torinocronaca.it
Why's our monitor labelling this an incident or hazard?
The event involves a deepfake AI system used to impersonate a CEO and attempt a multimillion-dollar fraud. The AI system's outputs (voice and image deepfakes) were used maliciously to deceive a company manager, directly creating a risk of financial harm. Although the fraud was prevented, the AI's pivotal role in the attempted harm to property (financial assets) fits the definition of an AI Incident.

Attempted scam against Ferrari using artificial intelligence - Gazzetta di Modena

2024-07-27
Gazzetta di Modena
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (voice-cloning AI) used to impersonate a high-profile individual and attempt a fraudulent act, a form of harm in itself (potential financial and reputational damage). Although the fraud failed, the AI system's role was pivotal, so the event qualifies as an AI Incident.

Cyber criminals attempt to scam a Ferrari executive. VIDEO

2024-07-27
Reggionline - Quotidianionline - Telereggio - Trc - TRM
Why's our monitor labelling this an incident or hazard?
The event involves AI (deepfake technology) used to create realistic voice and image impersonations for an attempted fraud. Although the harm was averted, the AI system's central role in the attempted deception and fraud fits the definition of an AI Incident.