Florida Man Arrested for Using AI Deepfake Video in False Crime Report

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Alexis Martínez-Arizala, from Florida, was arrested after creating and using an AI-generated deepfake video to falsely report a crime to law enforcement. The video depicted two Black men breaking into a police car, misleading officers and wasting resources. He was apprehended in Puerto Rico and faces multiple charges.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly involved as the video was AI-generated (deepfake). The misuse of this AI system directly led to harm by fabricating evidence and falsely implicating a deputy, which can damage reputations and create safety risks. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to public safety professionals).[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Government; General public

Harm types
Economic/Property; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Florida man arrested after pranking deputy with A.I. video in Lake Mary

2026-04-08
WKMG
Law enforcement warns of 'growing concern' over A.I. pranks

2026-04-09
WKMG
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos (deepfakes) used to deceive law enforcement and the public, resulting in criminal charges. The AI system's outputs directly caused harm by fabricating evidence and creating false reports, which disrupted law enforcement operations and posed safety risks. The harm is realized, not just potential, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Lake Worth Beach man accused of using AI-generated deepfake video in false crime report

2026-04-08
WPEC
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves an AI system generating a deepfake video that was used to file a false crime report. This misuse of AI directly caused harm by misleading law enforcement, wasting resources, and creating safety concerns for first responders. The harm is realized and directly linked to the AI system's use, fitting the definition of an AI Incident.
Florida man fabricated an AI video of two Black men breaking into a police car to go viral, then got arrested in Puerto Rico trying to escape it

2026-04-09
We Got This Covered
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a deepfake video, which was then presented as real evidence to law enforcement, constituting misuse of AI technology. This led to a false crime report and evidence tampering, harming public safety professionals and potentially undermining trust in law enforcement. The harm is direct and realized, meeting the criteria for an AI Incident under violations of law and harm to communities and public safety. The article also discusses broader implications of deepfake fraud, but the primary event is the creation and use of the AI-generated video to commit a crime and cause harm.
Florida deputy panics over patrol car break-in, but AI is to blame, and it happened just for clicks

2026-04-10
We Got This Covered
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video that falsely showed a patrol car break-in. This AI-generated content directly caused a real-world police response, constituting harm through misuse of emergency services and legal violations. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident. Hence, this qualifies as an AI Incident.
Florida Cop Pranked With AI Video of His Patrol Car Getting Stolen, Prankster Arrested by Feds: 'Cop Wanted to Get Even'

2026-04-10
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in creating a deepfake video that falsely depicted a crime, directly misleading a law enforcement officer and triggering a false emergency response. This misuse of AI technology led to legal consequences and highlights risks to public safety and trust. The event meets the criteria for an AI Incident because the AI-generated content directly caused harm by fabricating evidence and disrupting police operations, even though no physical injury or property damage occurred. The incident also underscores broader societal harms related to misinformation and the misuse of AI.
Palm Beach County man arrested for using AI video in fake crime report

2026-04-10
Palm Beach Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a fake video (deepfake) used to deceive law enforcement, a misuse of AI technology. This misuse directly caused harm by wasting police resources, potentially damaging reputations, and creating safety concerns for first responders. The harm is realized, not hypothetical, fulfilling the criteria for an AI Incident. The involvement of AI in fabricating evidence that led to legal charges confirms the classification.