Utah Approves AI Chatbot to Renew Psychiatric Medication Prescriptions

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Utah has approved a pilot program allowing Legion Health's AI chatbot to autonomously renew certain psychiatric medication prescriptions for stable patients. While safeguards and restrictions are in place, experts warn of potential risks to patient safety due to reduced human oversight and possible prescription errors.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system is explicitly involved in clinical decision-making by renewing psychiatric medication prescriptions, which qualifies as AI system involvement. The event stems from the AI system's use in healthcare. Although no actual harm or adverse outcomes have been reported, expert concerns about safety, opacity, and the system's limitations indicate plausible future risks of harm to patients' health. The system's safeguards and narrow scope reduce immediate risk, but the potential for harm remains credible. Thus, this is best classified as an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred.[AI generated]
AI principles
Safety, Accountability

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury)

Severity
AI hazard

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard

This chatbot can prescribe psych meds. Kind of.

2026-04-03
The Verge
Utah Is Giving Dr. AI the Power to Renew Drug Prescriptions

2026-04-03
Gizmodo
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as autonomously renewing prescriptions, including psychiatric medications, which directly affects patient health and safety. The article highlights prior evidence of AI systems in similar roles causing prescription errors and susceptibility to manipulation, indicating realized or ongoing harm risks. The system's deployment without continuous human oversight after initial approval thresholds further increases the risk. These factors meet the criteria for an AI Incident, as the AI system's use has directly or indirectly led to or could lead to injury or harm to persons. The event is not merely a potential hazard or complementary information but involves actual use with associated risks and concerns about patient safety.
Startup Approved to Let AI System Prescribe Psychiatric Medication

2026-04-06
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as prescribing psychiatric medications, which directly affects patient health. The article reports that the system is already approved and in use, not merely a future possibility, indicating realized deployment. Experts warn about risks such as over-treatment and failure to detect patient deception, which are credible harms to health. The system's role in these risks is pivotal, as it automates prescription renewals without human clinical judgment. These factors meet the criteria for an AI Incident, as the AI system's use has directly created a risk of harm to patients' health. The event is not merely a potential hazard or complementary information but an incident involving actual AI use with significant health implications.
AI Can Now Prescribe You Psychiatric Medication in Utah

2026-04-04
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (a chatbot) used to prescribe psychiatric medication, which involves AI in a high-stakes healthcare context. Although safeguards such as human doctor review are in place initially, the gradual phasing out of human oversight and the nature of psychiatric medication management raise credible concerns about potential harm to patients' health. Since no actual harm has been reported but plausible future harm exists, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information, because the main focus is on the AI system's deployment and its potential risks, not on responses or updates to past incidents.
Utah Tests AI Powered Pilot for Automated Prescription Renewals of Psychiatric Meds

2026-04-04
WinBuzzer
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the system automates prescription renewals using AI. The event concerns the use of the AI system in a clinical setting with strict eligibility and safety protocols. No direct or indirect harm has been reported; the system is in a pilot phase with staged validation and human oversight to prevent harm. The article discusses potential risks and benefits but does not describe any actual injury, rights violation, or other harm. Thus, it does not qualify as an AI Incident. However, because the AI system's use could plausibly lead to harm if safeguards fail (e.g., incorrect renewals, missed clinical signs), it fits the definition of an AI Hazard. The article is not primarily about a response to a past incident or a governance update, so it is not Complementary Information. It is not unrelated as it clearly involves an AI system with potential impact on health.