FDA to Use AI for Drug Approval After Major Staff Cuts, Raising Efficiency and Safety Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US FDA plans to use the AI system Elsa to accelerate drug and device approvals after laying off 2,000 staff. While aiming to boost efficiency and process large volumes of data, the move raises concerns about AI's ability to ensure safety in complex regulatory decisions.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Elsa) is explicitly mentioned as being used in the drug approval process, a critical infrastructure function related to public health. Although no harm has yet been reported, the use of AI in this context could plausibly lead to harm if the system fails to detect safety issues or makes erroneous approvals, given the complexity and importance of drug evaluations. This situation therefore represents an AI Hazard due to the credible risk of future harm stemming from AI use in a high-stakes regulatory environment.[AI generated]
AI principles
Safety
Accountability
Transparency & explainability
Democracy & human autonomy
Robustness & digital security
Privacy & data governance

Industries
Healthcare, drugs, and biotechnology
Government, security, and defence

Affected stakeholders
Consumers
Workers

Harm types
Physical (injury)
Physical (death)
Reputational
Economic/Property
Public interest

Severity
AI hazard

Business function:
Compliance and justice

AI system task:
Organisation/recommenders
Reasoning with knowledge structures/planning


Articles about this incident or hazard


After laying off 2,000 employees, FDA to use AI in drug approval to 'radically increase efficiency'

2025-06-11
O Globo

USA: FDA to use AI in drug approval - 11/06/2025

2025-06-11
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Elsa, a language model) in FDA regulatory processes, a clear case of AI system involvement. The article discusses the development and use of AI to accelerate drug approvals, which could plausibly lead to harms such as insufficiently vetted drugs reaching the market, potentially harming public health. However, according to the article, no direct or indirect harm has yet occurred. The concerns and skepticism expressed indicate potential future risks rather than current incidents. The event therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information: it does not describe a response to a past incident or a general AI ecosystem update, but focuses on the plausible risk of harm from AI use in approvals.

FDA lays off 2,000 employees and starts using AI in drug approval

2025-06-11
InfoMoney
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Elsa) in the FDA's drug approval process, a clear case of AI system involvement. However, the article reports no direct or indirect harm caused by the AI system's use, nor any malfunction or misuse leading to harm. The concerns raised are about potential limitations and skepticism regarding the AI's effectiveness, but no harm or plausible future harm is described. Therefore, this is not an AI Incident or AI Hazard. The article mainly provides contextual information about the FDA's adoption of AI and related organizational changes, fitting the definition of Complementary Information.

FDA to use AI in drug approvals to 'radically increase efficiency'

2025-06-12
Estadão
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems in the FDA's drug and device approval process. However, the article reports no direct or indirect harm resulting from the AI's use, nor any plausible imminent harm. Instead, it focuses on the announcement, potential efficiency gains, skepticism, and governance issues. This fits the definition of Complementary Information: it provides supporting context and updates on AI deployment and governance without describing an AI Incident or AI Hazard.

US agency wants to use AI to accelerate drug approval

2025-06-13
Poder360
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Elsa) in a critical regulatory process (drug approval). The AI's outputs have reportedly included false information, which could indirectly lead to harm if relied upon without verification. However, the article does not describe any actual harm or incidents resulting from the AI's use so far. The AI is in development and use, with some malfunctions (hallucinations) reported. Given the absence of realized harm but the presence of plausible risk, this qualifies as an AI Hazard rather than an AI Incident. The article also discusses broader policy measures and context, but its main focus is the AI system's current use and the potential risks posed by its limitations.