US Government Used ChatGPT to Cancel Humanities Grants, Prompting Lawsuit


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

The US Department of Government Efficiency (DOGE) used ChatGPT to identify and cancel National Endowment for the Humanities (NEH) grants linked to DEI programs. This flawed AI-driven process led to the termination of funding for schools, libraries, and community organizations, prompting lawsuits alleging rights violations and harm to affected groups.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of ChatGPT, an AI system, in a flawed process that led to the cancellation of grants, causing harm to affected organizations and individuals. The harms include violations of constitutional rights and disruption of funding critical to humanities research and community programs. The AI system's outputs were pivotal in the decision-making process that caused these harms. Hence, this qualifies as an AI Incident due to direct harm resulting from the AI system's use.[AI generated]
AI principles
Fairness
Respect of human rights

Industries
Education and training
Government, security, and defence

Affected stakeholders
Business

Harm types
Economic/Property
Human or fundamental rights

Severity
AI incident

Business function
Planning and budgeting

AI system task
Goal-driven organisation

In other databases

Articles about this incident or hazard


Lawsuit because DOGE used Chat GPT to cancel humanities grants based on DEI

2026-03-07
Democratic Underground

Discovery Released in Lawsuit by Humanities Groups Reveals ChatGPT-Powered Process by DOGE in Cancelling Grants for Schools, Libraries, and Community Organizations

2026-03-07
WBOC TV-16
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in a decision-making process that led to the cancellation of grants. The AI's role was pivotal in flagging grants as DEI-related, which was used as a basis for terminating funding. This led to realized harms including violation of rights, unlawful executive action bypassing Congress, and harm to communities and academic institutions. Therefore, the event meets the criteria for an AI Incident because the AI system's use directly led to significant harm and legal violations.

DOGE Employees Used ChatGPT to Cancel Humanities Grants, Suits Allege

2026-03-08
Artforum
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT to make decisions about grant cancellations based on DEI criteria led to the termination of grants related to marginalized groups, which is a violation of rights and harms communities by restricting cultural and historical understanding. The AI system was used in the decision-making process, and its outputs directly influenced harmful outcomes, including the withdrawal of over $100 million in funding and project shutdowns. This meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Discovery Released in Lawsuit by Humanities Groups Reveals ChatGPT-Powered Process by DOGE in Cancelling Grants for Schools, Libraries, and Community Organizations

2026-03-09
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used by a government department to make decisions that directly led to the cancellation of grants, which harmed organizations and individuals dependent on this funding. The harm includes violations of constitutional rights and disruption of public programs, fitting the definition of an AI Incident. The AI system's flawed use was a pivotal factor in causing these harms, and the event describes realized harm rather than potential harm.

Discovery Released in Lawsuit by Humanities Groups Reveals ChatGPT-Powered Process by DOGE in Cancelling Grants for Schools, Libraries, and Community Organizations

2026-03-07
CNHI News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a government process that directly led to the termination of grants, causing harm to multiple organizations and communities. The harms include violations of constitutional rights (Equal Protection Clause, First Amendment), unlawful administrative actions, and disruption of funding critical to humanities research and cultural preservation. The AI system's outputs were pivotal in identifying grants to cut, including misclassifications based on sensitive demographic terms, which constitutes a violation of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm and legal violations.