US Senator Targeted by Deepfake Impersonating Ukrainian Official

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Senator Ben Cardin was targeted by a deepfake scammer posing as former Ukrainian Foreign Minister Dmytro Kuleba during a Zoom call. The deepfake technology convincingly mimicked Kuleba's appearance and voice, raising security concerns. Cardin became suspicious when the impersonator asked uncharacteristic questions, prompting Senate security to issue warnings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes an actual malicious use of AI-driven deepfake technology to deceive a senator’s office, posing national security and political risks. This is a direct harm scenario where AI-generated content was used to influence and extract sensitive information, fitting the definition of an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Safety; Democracy & human autonomy; Privacy & data governance; Respect of human rights

Industries
Government, security, and defence; Digital security; Media, social platforms, and marketing

Affected stakeholders
Government

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


'Deepfake' Caller Poses as Ukrainian Official in Exchange With Key Senator

2024-09-26
The New York Times
Why's our monitor labelling this an incident or hazard?
The event describes an actual malicious use of AI-driven deepfake technology to deceive a senator’s office, posing national security and political risks. This is a direct harm scenario where AI-generated content was used to influence and extract sensitive information, fitting the definition of an AI Incident.

US senator targeted by deepfake caller posing as Ukrainian diplomat

2024-09-26
The Guardian
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated audio and video to impersonate a known diplomat in real time, deceiving a U.S. senator and constituting an active harm (election interference/disinformation). The AI system's malicious use directly led to this security breach and political manipulation, classifying it as an AI Incident.

Democrat senator targeted by deepfake impersonator of Ukrainian official on Zoom call: reports

2024-09-26
Fox News
Why's our monitor labelling this an incident or hazard?
The article describes a concrete incident where AI-generated deepfake technology was used to impersonate a foreign official, tricking a U.S. senator. This misuse of an AI system directly led to deceptive wrongdoing and poses risks to democratic processes, satisfying the criteria for an AI Incident rather than a mere hazard, update, or unrelated event.

Senator lured into deepfake call with 'malign actor' posing as Ukrainian

2024-09-26
Washington Post
Why's our monitor labelling this an incident or hazard?
The event involves the actual use of deepfake AI (an AI system) to impersonate a public official and mislead a U.S. senator, fitting the definition of an AI Incident. It directly caused a deceptive engagement, posing risks to political decision-making and information integrity, and thus constitutes realized harm through misinformation tactics.

Sen. Ben Cardin says he was targeted by apparent deepfake call

2024-09-26
AOL
Why's our monitor labelling this an incident or hazard?
The article describes a specific incident in which an AI-powered deepfake call was used to impersonate Ukraine’s former foreign minister over Zoom, aiming to manipulate and discredit a U.S. senator. This involves an AI system’s misuse causing direct harm through deception and unauthorized information gathering, fitting the definition of an AI Incident.

A deepfake caller pretending to be a Ukrainian official almost tricked a US Senator

2024-09-26
The Verge
Why's our monitor labelling this an incident or hazard?
The event describes a realized misuse of AI deepfake technology—an AI system generated fake audio/video to impersonate a real foreign minister, successfully fooling a senator into a Zoom call and posing politically charged questions. This direct misuse of AI constitutes an AI Incident because it led to deception and a risk of harm to democratic processes.

Sen. Ben Cardin says he was targeted by apparent deep fake call

2024-09-26
NBC News
Why's our monitor labelling this an incident or hazard?
The event involves the direct use of an AI system (deepfake audio/video generation) to target and deceive a U.S. senator, constituting a realized AI Incident through social engineering, attempted disinformation, and potential reputational and security harms.

US Senator Took Zoom Call With Deepfake Scammer Pretending to be Ukrainian Official

2024-09-27
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event describes an actual instance of deepfake (AI) misuse, where the technology directly enabled a deceptive social engineering attack against a public official, posing clear risks to information security and political decision-making—constituting an AI Incident.

Top U.S. Senator targeted by 'deepfake' caller posing as Ukrainian official

2024-09-26
ReadWrite
Why's our monitor labelling this an incident or hazard?
The incident describes an actual event where AI-generated deepfake audio/video was used to impersonate a public official and mislead a senator, posing direct risks to political processes and security. The AI system’s misuse led to disinformation and required intervention by the senator’s office and law enforcement. Therefore, it qualifies as an AI Incident.

Senator Lured Into Deepfake Call With 'Malign Actor' Posing as Ukrainian

2024-09-27
ITPro Today
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of AI deepfake technology to impersonate a real political figure. While the target ended the call before sensitive information was revealed, the incident demonstrates a credible and sophisticated AI-driven threat that could plausibly lead to political interference or espionage if not detected. No actual harm beyond the attempted deception occurred, so it constitutes an AI hazard rather than a fully materialized incident.

Maryland Sen. Cardin targeted in deepfake Zoom call; Russia, China, and Iran suspected

2024-09-26
FOX 5 DC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used maliciously to impersonate a person and attempt manipulation. While the harm was averted, the use of deepfake AI in this context poses a plausible risk of harm to political integrity and security. Therefore, it qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving harm to communities or violations of rights if such attacks succeed in the future.

Top Senator Targeted in Deepfake Operation

2024-09-26
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a deepfake impersonation of a political figure, which was used to deceive Senate members during an official meeting. This misuse of AI directly caused harm by undermining trust and security in political communications, fitting the definition of an AI Incident due to realized harm from AI misuse in a sensitive context.

Senator lured into deepfake call with 'malign actor' posing as Ukrainian

2024-09-26
Stars and Stripes
Why's our monitor labelling this an incident or hazard?
The event describes a malicious use of AI-generated deepfake technology to impersonate a known political figure and deceive a senator. The AI system's involvement is explicit and central to the incident. The harm is realized in the form of deception, potential misinformation, and risk to political decision-making and security, which aligns with harm to communities and violations of rights. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Sophistication of AI-backed operation targeting senator points to future of deepfake schemes

2024-09-27
The Columbian
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI (deepfake technology) to impersonate a political figure in a realistic video call, which directly led to an attempt to deceive a senator. This is a clear case of AI misuse causing harm by undermining trust and potentially influencing political decisions, which falls under harm to communities and violation of rights. The harm is realized as the senator and staff were targeted and deceived, even if the scheme was ultimately detected. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Sophistication of AI-backed operation targeting senator points to future of deepfake schemes

2024-09-26
WOWK 13 Huntington
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create realistic deepfake video and audio to impersonate a political figure, which is a clear AI system involvement. The incident did not result in realized harm because the senator and staff detected the deception and ended the call promptly, preventing any direct injury, rights violation, or disruption. However, the sophistication and believability of the AI-enabled deepfake operation indicate a credible risk of future harm, such as political manipulation, misinformation, or security breaches. The article explicitly states that experts expect more such incidents in the future, and Senate security has issued warnings accordingly. Since no actual harm occurred but plausible future harm is evident, the event is best classified as an AI Hazard rather than an AI Incident.

'Deepfake' caller posing as Ukraine's ex-foreign minister talks to US Senator

2024-09-26
The Kyiv Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake video technology) used to impersonate a public figure and deceive a U.S. Senator. The use of deepfake technology in this context is a malicious use of AI that directly led to an attempt to influence political opinions and interfere with the democratic process, which is a harm to communities and a violation of rights. The incident has already occurred, with the Senator being targeted and the deception detected, thus qualifying as an AI Incident rather than a hazard or complementary information. The harm is indirect but real, as it undermines trust and could influence election outcomes if successful.

'Deepfake' Caller Poses as Ukrainian Official in Exchange With Key Senator

2024-09-26
DNyuz
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—deepfake video technology—used to impersonate a public figure in a high-stakes political context. The AI system's use directly led to a deceptive attempt to influence a senator and obtain sensitive information, which is a violation of rights and a harm to political processes and communities. The harm is realized, not just potential, as the senator was targeted and the incident confirmed. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and political manipulation).