US Judge Rules Use of ChatGPT to Cut Humanities Grants Unconstitutional

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Department of Government Efficiency (DOGE) used ChatGPT to identify and terminate over $100 million in National Endowment for the Humanities grants, targeting projects linked to DEI, Holocaust education, and Black history. A federal judge ruled this AI-driven process unconstitutional, citing viewpoint discrimination and violations of constitutional rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that ChatGPT was used by DOGE employees to flag grants for cancellation based on keywords, which led to the termination of grants and caused irreparable injury to the plaintiffs, including disruption of expression and research. The harms are directly linked to the AI system's use in decision-making that violated constitutional protections and caused widespread disruption. This meets the criteria for an AI Incident because the AI system's use directly led to realized harm involving violations of rights and harm to communities.[AI generated]
AI principles:
Fairness; Respect of human rights

Industries:
Government, security, and defence; Education and training

Affected stakeholders:
Business

Harm types:
Human or fundamental rights

Severity:
AI incident

Business function:
Planning and budgeting

AI system task:
Organisation/recommenders


Articles about this incident or hazard

DOGE's Termination of Humanities Grants Is Ruled Unconstitutional

2026-05-07
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that ChatGPT was used by DOGE employees to flag grants for cancellation based on keywords, which led to the termination of grants and caused irreparable injury to the plaintiffs, including disruption of expression and research. The harms are directly linked to the AI system's use in decision-making that violated constitutional protections and caused widespread disruption. This meets the criteria for an AI Incident because the AI system's use directly led to realized harm involving violations of rights and harm to communities.

DOGE Slammed by Judge for Using AI to Find $100 Million in Cuts

2026-05-07
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the decision-making process for terminating grants, which directly led to harm in the form of unconstitutional discrimination and wrongful termination of funding. The judge explicitly holds the government responsible for the AI's outputs and their consequences, indicating that the AI system's role was pivotal in causing the harm. This therefore qualifies as an AI Incident: the AI system's use, without proper oversight, resulted in violations of rights and harm to communities.

DOGE's ChatGPT-driven mass grant purge deemed illegal in scathing order

2026-05-07
Raw Story
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in a consequential government decision-making process that led to illegal cancellation of grants, which constitutes a violation of legal rights and obligations. The harm is realized as the wrongful termination of grants and the disruption of lawful grant administration. The court's ruling and injunction are responses to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a breach of legal obligations and harm to affected parties.

Judge rules DOGE used ChatGPT in a way that was both dumb and illegal

2026-05-08
The Verge
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the decision-making process that directly led to the cancellation of grants based on protected characteristics, which constitutes a violation of constitutional rights (harm category c: violations of human rights and legal protections). The AI system's outputs were used without meaningful human review, making the AI's role pivotal in causing the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm through unlawful discrimination and rights violations.

Judge Blasts Use of ChatGPT in Federal Grant Purge, Rules NEH Terminations Unconstitutional

2026-05-07
Redstate
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in the review process that led to the termination of over 1,400 federal grants. The court found that the AI-generated rationales were arbitrary and lacked proper human oversight, resulting in unconstitutional targeting based on viewpoint, race, sex, religion, and other protected characteristics. This directly links the AI system's use to violations of constitutional rights and unlawful administrative actions, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the grants were terminated unlawfully and the decision-making process was infected by improper AI-assisted classifications.

DOGE slammed by judge for AI use in cutting $100 million

2026-05-08
ArcaMax
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly used in the process of deciding which grants to cut, and its outputs directly influenced the termination of over 1,400 grants. The judge's ruling highlights that the AI's use led to unconstitutional discrimination and that the government did not adequately review the AI's outputs, making the AI's role pivotal in causing harm. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized, not just potential, and the AI system's involvement is central to the incident.

Judge says DOGE grant terminations are unlawful and 'troubling'

2026-05-08
ABC7 New York
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT by DOGE staffers in the grant termination process, indicating AI system involvement. The AI system was used as a tool to identify grants for cuts based on keywords related to diversity, equity, and inclusion, which led to discriminatory decisions violating legal protections. The judge's ruling highlights the harm caused by these decisions, including violations of rights and harm to communities represented by the affected grants. Since the AI system's use directly contributed to unlawful actions and harm, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

DOGE slammed by judge for AI use in cutting $100 million

2026-05-08
The Columbian
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the decision-making process that led to grant terminations, which the judge found unconstitutional and punitive, particularly with respect to race. This constitutes a violation of rights under applicable law, fulfilling the criteria for an AI Incident. The harm has already occurred, as the funding cuts were orchestrated based on AI outputs, and the court blocked their execution because of these harms.

DOGE Used ChatGPT to Cut $100 Million in Humanities Grants, and a Judge Just Called It Unconstitutional

2026-05-08
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the decision-making process for terminating grants. The AI's outputs were directly used to flag and cancel grants without proper human expertise or peer review, leading to unconstitutional viewpoint discrimination and harm to the affected communities and scholars. The judge's ruling confirms that the AI's role was pivotal in causing these harms. Hence, this is an AI Incident due to the realized harm caused by the AI system's use in government grant decisions.

Judge rules Trump-era cuts to $100M in humanities grants were unconstitutional

2026-05-08
KOAA
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Department of Government Efficiency used ChatGPT, an AI system, to identify DEI-related grant projects for cancellation. This AI-driven classification was a pivotal factor in the unlawful termination of grants, which the court found to be unconstitutional viewpoint discrimination violating the First and Fifth Amendments. The harm is realized and significant, affecting the rights and funding of numerous individuals and organizations. Hence, this qualifies as an AI Incident because the AI system's use directly contributed to a violation of constitutional rights and harm to the affected parties.

DOGE's Use of ChatGPT to Cut Funds for Holocaust Education and Anti-Black Massacre Documentary Ruled Unconstitutional

2026-05-08
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that ChatGPT, an AI system, was used by DOGE staff to produce discriminatory rationales that led to the termination of grants in a manner deemed unconstitutional and discriminatory by a federal judge. The AI system's outputs were directly incorporated into decision-making spreadsheets that formed the basis for cutting funding, resulting in harm to protected groups and violation of constitutional rights. This is a clear case where the AI system's use directly led to harm (violation of rights and harm to communities), fulfilling the criteria for an AI Incident.

Court To DOGE Bros: Asking ChatGPT 'Yo, Is This DEI?' Is Not Proper Legal Process & Also A First Amendment Violation

2026-05-08
Techdirt
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in the decision-making process for grant cancellations. The AI's outputs were relied upon without proper context or expert evaluation, resulting in wrongful termination of grants. This caused direct harm to the affected parties by unlawfully withdrawing funding and suppressing speech based on viewpoint discrimination, which is a violation of constitutional rights. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm, including legal and constitutional violations.

DOGE Slammed by Judge for AI Use in Cutting $100 Million (1)

2026-05-08
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the decision-making process that led to the termination of federal funding grants, which the judge found unconstitutional and unauthorized. This use of AI directly contributed to harm by improperly cutting funding, affecting the rights and interests of grant recipients. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use in governance decisions.

Judge rules DOGE cuts to humanities grants unconstitutional

2026-05-08
Missouri Lawyers Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used in the decision-making process for grant terminations. The AI's outputs were pivotal in selecting which grants to cut, leading to realized harm including violations of constitutional rights and harm to communities dependent on the grants. The ruling highlights the misuse of AI in a governmental context causing direct harm. Therefore, this qualifies as an AI Incident due to the direct link between AI use and constitutional and community harm.