AI Police Report Falsely Claims Officer Turned Into Frog in Utah

In Heber City, Utah, an AI system used to generate police reports misinterpreted background audio from a Disney movie and falsely claimed an officer had transformed into a frog. The incident exposed significant flaws in AI-generated law enforcement documentation, raising concerns about accuracy, accountability, and potential risks to due process.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Draft One) is explicitly mentioned and is used to generate police reports, a critical function with legal and societal implications. The AI malfunctioned by hallucinating content unrelated to reality, which was included in an official police report. This constitutes a direct AI malfunction leading to misinformation, which can harm the integrity of law enforcement records and potentially affect human rights and legal processes. The article also discusses concerns about bias and accountability, reinforcing the seriousness of the harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Transparency & explainability, Safety, Robustness & digital security, Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
Workers, General public

Harm types
Reputational, Public interest, Human or fundamental rights

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection, Content generation


Articles about this incident or hazard

Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog

2026-01-02
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system (Draft One) is explicitly mentioned and is used to generate police reports, a critical function with legal and societal implications. The AI malfunctioned by hallucinating content unrelated to reality, which was included in an official police report. This constitutes a direct AI malfunction leading to misinformation, which can harm the integrity of law enforcement records and potentially affect human rights and legal processes. The article also discusses concerns about bias and accountability, reinforcing the seriousness of the harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI-Generated Police Report Claims Officer Transformed Into Frog, Department Issues Clarification

2026-01-03
ndtv.com
Why's our monitor labelling this an incident or hazard?
The AI system (Draft One and Code Four) was involved in generating police reports, and it malfunctioned by incorporating irrelevant background audio into the report, creating a false and humorous claim. However, no harm occurred to any person, property, or rights, and the police department promptly clarified the mistake. The event focuses on the AI system's error and the department's response, which fits the definition of Complementary Information rather than an Incident or Hazard. There is no plausible future harm indicated beyond the known error, and no direct or indirect harm has materialized.

Cop Transforms Into Frog According To AI Generated Police Report

2026-01-04
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as generating police reports. The AI's malfunction (hallucination and misinterpretation) directly led to a false and absurd police report, which undermines the accuracy and reliability of official records. This can harm individuals' rights and due process, fitting the definition of an AI Incident through the violation of legal and procedural standards and the potential harm to affected parties. Although the specific 'frog' error was caught, the broader issue of AI-generated inaccuracies in serious cases is ongoing and harmful. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Axon's AI Tool Mistakes Movie Audio, Claims Cop Turned into Frog

2026-01-04
WebProNews
Why's our monitor labelling this an incident or hazard?
The AI system (Draft One) malfunctioned by hallucinating a false narrative due to misinterpreting background audio, which is a direct AI system error. However, the error was detected and corrected by human review before any harm (such as wrongful arrests or legal consequences) occurred. Therefore, no realized harm has taken place. The incident demonstrates a plausible risk of harm if such errors were not caught, especially given the high-stakes context of law enforcement documentation. The presence of an AI system, its malfunction, and the plausible future harm align with the definition of an AI Hazard. The article does not describe actual injury, rights violations, or other harms caused by the AI output, so it does not meet the criteria for an AI Incident. The extensive discussion of responses and implications does not overshadow the primary event, which is the AI error and its potential risk.

AI turns police officer into frog: Strange incident exposes major flaw in automated systems. What went wrong?

2026-01-05
The Economic Times
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate police reports and produced a clearly false claim (an officer turning into a frog) after misinterpreting background audio. Although no physical injury, rights violation, or property damage resulted from this error, the malfunction placed misinformation in an official record, which itself damages the integrity of and trust in law enforcement reporting, a form of harm to communities and accountability. The incident also raises concerns about reliance on AI in critical contexts. On this basis, the event qualifies as an AI Incident.

AI-generated police report turned cop into a frog: we laugh, but it's a problem

2026-01-05
Cybernews
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved (Draft One, an AI-powered police report writing tool using large language models). The event stems from the AI system's malfunction (hallucination or erroneous output). Although no direct harm occurred in this case, the article emphasizes the plausible risk that such AI errors in police reports could lead to significant harm, including misinformation in legal contexts and reduced accountability. Since the harm is not realized but the AI malfunction could plausibly lead to harm, this event fits the definition of an AI Hazard rather than an AI Incident. The article also includes broader concerns and critiques about AI use in policing, but the main event is the AI-generated false report content without actual harm.

An AI-Generated Police Report Claimed a Cop Transformed Into a Frog

2026-01-05
VICE
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating police reports, and its malfunction directly led to inaccurate and misleading documentation. Although no physical harm or legal violation is reported, the incident caused disruption in police operations and could undermine trust in official records. The AI's failure to correctly interpret context and data led to a tangible negative outcome, fitting the definition of an AI Incident due to harm in the form of operational disruption and misinformation within a critical public safety function.

Utah PD testing AI report writing software shares comical error caused by 'The Princess and the Frog'

2026-01-05
Police1
Why's our monitor labelling this an incident or hazard?
The AI system (report-writing software) malfunctioned by incorporating irrelevant content from a movie into an official report, which is a clear example of AI malfunction during use. The error was caught and corrected before it had significant consequences, but the malfunction still produced an incorrect official report, albeit a comical one, which is why this qualifies as an AI Incident. The event does not describe potential future harm, so it is not an AI Hazard, and it is more than complementary information because the malfunction had a direct impact on an official document, even if a minor one.

AI-generated police report states Utah officer was turned into a frog

2026-01-05
UPI
Why's our monitor labelling this an incident or hazard?
An AI system (Draft One, built on a GPT-4 model) was involved and malfunctioned by generating a false police report. However, the error did not cause any direct or indirect harm such as injury, rights violations, or operational disruption. The police department is aware and is implementing oversight to prevent future errors. This event provides useful context on AI use and its limitations but does not describe realized or plausible harm. Hence, it fits the definition of Complementary Information rather than an Incident or Hazard.

AI Police Report Claims Cop Turned Into Frog

2026-01-06
Newser
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction led to an incorrect police report, but no harm or violation occurred. The event highlights the need for human oversight and transparency in AI use, which is a governance and operational concern rather than a realized harm or imminent risk. The article does not describe any injury, rights violation, or disruption caused by the AI output. Hence, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it informs about the practical challenges and responses related to AI deployment in law enforcement, fitting the definition of Complementary Information.

AI-generated police report says officer turned into frog

2026-01-06
PHL17.com
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction led to a false police report, which was corrected before causing harm. There is no indication that this error caused injury, rights violations, or other harms defined as AI Incidents. The concerns about future risks and accountability issues are plausible but not realized harms. Therefore, this event fits the definition of an AI Hazard, as the AI system's malfunction could plausibly lead to harm if such errors were not caught or corrected, and the broader use of AI in policing carries potential risks. The article also includes complementary information about criticisms and cautions, but the main event is the AI-generated false report and its implications as a potential risk.

AI-generated police report says officer turned into a frog

2026-01-06
democraticunderground.com
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating false information, which could have led to harm if uncorrected, but the harm was averted through correction and explanation. There is no indication of actual injury, rights violation, or other significant harm resulting from the AI's malfunction. The event thus represents a risk scenario and a cautionary example rather than a realized AI Incident. The mention of the software being designed to avoid audits suggests governance concerns but does not itself constitute an incident or hazard. Therefore, this is best classified as Complementary Information, providing context and highlighting governance and oversight issues related to AI use in law enforcement.

Police department's crime report says officer turned into a FROG

2026-01-06
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
An AI system (Draft One) was used to generate police reports, and it malfunctioned by including a false statement (officer turned into a frog). However, the error was caught through the department's review process before causing any harm or legal issues. The article focuses on the AI's current use, its benefits, the error as a learning point, and the safeguards in place. There is no evidence of injury, rights violations, or disruption caused by the AI output. The event does not present a plausible future harm scenario beyond known AI limitations already managed by review. Hence, it does not meet the criteria for AI Incident or AI Hazard but fits as Complementary Information about AI deployment and oversight in policing.

AI Backfires In Utah After Police Report Claims Officer Turned Into a Frog

2026-01-07
OutKick
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as it generates police reports from bodycam footage. The malfunction led to an incorrect report claiming an officer turned into a frog, a clear error that caused no injury, rights violation, or other harm. Since neither realized harm nor plausible future harm is described, and the issue is being addressed, this is best classified as Complementary Information about an AI malfunction and its correction rather than an AI Incident or Hazard.

Police Used AI For Reports, And It Said The Cop Turned Into A Frog. That's A Problem

2026-01-08
Carscoops
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generated police reports from bodycam audio. The malfunction (hallucination) led directly to misinformation in an official document, which can harm individuals by affecting legal and administrative decisions. Although the specific frog error was caught and corrected, the article emphasizes the risk of more subtle errors causing harm. This meets the definition of an AI Incident because the AI's malfunction has directly led to harm (misinformation in official records) and poses a risk to human rights and legal protections. The event is not merely a potential risk (hazard) or a general update (complementary information), but a realized incident involving AI malfunction causing harm.

That time when an AI police report hallucinated an officer turning into a frog

2026-01-09
TechSpot
Why's our monitor labelling this an incident or hazard?
An AI system (Draft One) was involved in generating police reports and produced a hallucinated, false narrative. The error was detected and corrected before causing harm, so no direct or indirect harm materialized. The event illustrates a malfunction of an AI system that could plausibly lead to harm if such hallucinations were not caught, especially in legal or human rights contexts. Therefore, it fits the definition of an AI Hazard rather than an AI Incident. The article also discusses broader concerns and responses but the main focus is on the hallucination event and its implications as a risk.

AI-Generated Police Report States Information

2026-01-10
Independent Newspaper Nigeria
Why's our monitor labelling this an incident or hazard?
An AI system (Draft One) was used to generate police reports and malfunctioned by incorporating irrelevant and false information from background media, leading to a false police report. This is a malfunction of an AI system that could potentially lead to harm if such misinformation were acted upon, but the article does not report any actual harm occurring. The department's response to increase oversight suggests recognition of the risk. Since no harm has yet occurred but the malfunction could plausibly lead to harm if uncorrected, this event qualifies as an AI Hazard rather than an AI Incident.

Law Enforcement Today

2026-01-10
Law Enforcement Today
Why's our monitor labelling this an incident or hazard?
An AI system (Draft One) was used to generate police reports from bodycam footage, and it malfunctioned by incorporating fictional content from background audio, producing a false police report. This is a direct malfunction of an AI system affecting official documentation. Although the error was caught and corrected, the malfunction introduced misinformation that could undermine trust and operational integrity. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm in the form of misinformation in official records, which can be considered harm to the management and operation of critical infrastructure (law enforcement documentation). There is no indication of plausible future harm beyond this event, so it is not an AI Hazard, and it is not merely complementary information or unrelated news because the malfunction caused a concrete issue requiring correction.