US Government Used ChatGPT to Cancel Humanities Grants, Prompting Lawsuit

The US Department of Government Efficiency (DOGE) used ChatGPT to identify and cancel National Endowment for the Humanities (NEH) grants linked to DEI programs. This flawed AI-driven process led to the termination of funding for schools, libraries, and community organizations, prompting lawsuits alleging rights violations and harm to affected groups.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of ChatGPT, an AI system, in a flawed process that led to the cancellation of grants, causing harm to affected organizations and individuals. The harms include violations of constitutional rights and disruption of funding critical to humanities research and community programs. The AI system's outputs were pivotal in the decision-making process that caused these harms. Hence, this qualifies as an AI Incident due to direct harm resulting from the AI system's use.[AI generated]
AI principles
Fairness; Respect of human rights

Industries
Education and training; Government, security, and defence

Affected stakeholders
Business

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
Planning and budgeting

AI system task:
Goal-driven organisation

Articles about this incident or hazard

Lawsuit because DOGE used ChatGPT to cancel humanities grants based on DEI

2026-03-07
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in a flawed process that led to the cancellation of grants, causing harm to affected organizations and individuals. The harms include violations of constitutional rights and disruption of funding critical to humanities research and community programs. The AI system's outputs were pivotal in the decision-making process that caused these harms. Hence, this qualifies as an AI Incident due to direct harm resulting from the AI system's use.

Discovery Released in Lawsuit by Humanities Groups Reveals ChatGPT-Powered Process by DOGE in Cancelling Grants for Schools, Libraries, and Community Organizations

2026-03-07
WBOC TV-16
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in a decision-making process that led to the cancellation of grants. The AI's role was pivotal in flagging grants as DEI-related, which was used as a basis for terminating funding. This led to realized harms including violation of rights, unlawful executive action bypassing Congress, and harm to communities and academic institutions. Therefore, the event meets the criteria for an AI Incident because the AI system's use directly led to significant harm and legal violations.

DOGE Employees Used ChatGPT to Cancel Humanities Grants, Suits Allege

2026-03-08
Artforum
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT to make decisions about grant cancellations based on DEI criteria led to the termination of grants related to marginalized groups, which is a violation of rights and harms communities by restricting cultural and historical understanding. The AI system was used in the decision-making process, and its outputs directly influenced harmful outcomes, including the withdrawal of over $100 million in funding and project shutdowns. This meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Discovery Released in Lawsuit by Humanities Groups Reveals ChatGPT-Powered Process by DOGE in Cancelling Grants for Schools, Libraries, and Community Organizations

2026-03-09
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used by a government department to make decisions that directly led to the cancellation of grants, which harmed organizations and individuals dependent on this funding. The harm includes violations of constitutional rights and disruption of public programs, fitting the definition of an AI Incident. The AI system's flawed use was a pivotal factor in causing these harms, and the event describes realized harm rather than potential harm.

Discovery Released in Lawsuit by Humanities Groups Reveals ChatGPT-Powered Process by DOGE in Cancelling Grants for Schools, Libraries, and Community Organizations

2026-03-07
CNHI News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a government process that directly led to the termination of grants, causing harm to multiple organizations and communities. The harms include violations of constitutional rights (Equal Protection Clause, First Amendment), unlawful administrative actions, and disruption of funding critical to humanities research and cultural preservation. The AI system's outputs were pivotal in identifying grants to cut, including misclassifications based on sensitive demographic terms, which constitute a violation of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm and legal violations.

Lawsuit says DOGE used ChatGPT to tag Jewish-themed humanities grants 'DEI' before canceling them

2026-03-10
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, a generative AI system, in the methodology for selecting grants to cancel. The AI system's outputs were pivotal in identifying grants as DEI-related, which led to their cancellation. This caused realized harm to scholars, cultural communities, and academic projects, including those focused on Jewish themes. The harm includes violation of rights and harm to communities, fitting the AI Incident definition. The AI system's involvement is direct and causal in the harm, not merely potential or speculative. Hence, the classification as an AI Incident is appropriate.

DOGE Staffers Used ChatGPT to Cut Holocaust History Grants During Counter-DEI Purges: Lawsuit

2026-03-11
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly states that ChatGPT was used to guide decisions on cutting grants related to minority groups and DEI initiatives, including Holocaust history programs. This use of AI influenced decisions that resulted in the removal of funding for these programs, which constitutes a violation of rights and harm to communities by silencing marginalized groups. The AI system's involvement in these harmful decisions qualifies this event as an AI Incident under the definitions provided, as the harm has already occurred and is directly linked to the AI system's use.

Lawsuit says DOGE used ChatGPT to tag Jewish-themed humanities grants as 'DEI' before canceling them

2026-03-10
Jewish Telegraphic Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, a generative AI system, in the decision-making process to cancel grants. The AI's classification was pivotal in the cancellation of numerous grants, including those focused on Jewish themes, which caused harm to academic communities and violated rights related to intellectual property and cultural research. The harm is realized and ongoing, as funding was cut and projects disrupted. The involvement of AI in the development and use phases, leading directly to harm, fits the definition of an AI Incident rather than a hazard or complementary information.

DOGE Staffers Used ChatGPT to Cut Holocaust History Grants During Counter-DEI Purges: Lawsuit

2026-03-10
The Algemeiner
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and use phases to make decisions that led to the cutting of grants related to Holocaust history and Jewish culture. This action has caused harm by violating constitutional rights (First Amendment and equal protection) and suppressing minority viewpoints, which qualifies as harm to communities and a violation of human rights. The AI system's role was pivotal in the decision-making process, making this an AI Incident rather than a hazard or complementary information. The lawsuit and the described harms confirm that the AI system's involvement led directly to realized harm, not just potential harm.

Lawsuit says DOGE used ChatGPT to tag Jewish-themed humanities grants as 'DEI' before cancelling them

2026-03-10
SA Jewish Report
Why's our monitor labelling this an incident or hazard?
The article explicitly states that ChatGPT was used as part of the methodology to classify grants for cancellation, directly leading to the loss of funding for numerous humanities projects. This use of AI influenced decisions that harmed academic communities and cultural research, fulfilling the criteria for harm to communities and violation of rights. The AI system's outputs were pivotal in the cancellation process, making this an AI Incident rather than a hazard or complementary information. The harm is realized, not just potential, as grants were cancelled and funding withdrawn based on AI classification.

Lawsuit: DOGE used AI to tag Jewish-themed grants 'DEI'

2026-03-10
J.
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the decision-making process for grant cancellations. The AI system's outputs were directly used to classify grants as DEI-related or not, which led to the cancellation of many grants, including those focused on Jewish culture and history. This caused realized harm to academic groups, cultural communities, and individual scholars, including violations of intellectual property rights and harm to communities. The involvement of AI in the development and use phases, and the direct link to harm, qualifies this as an AI Incident rather than a hazard or complementary information.

You're Going To Want To Watch This DOGE Staffer Try To Define What DEI Is

2026-03-11
Yahoo News
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to analyze grant descriptions and make determinations that led to the cancellation of substantial funding and layoffs. The harms include violation of constitutional rights (equal protection clause), harm to communities and cultural heritage (canceled projects preserving endangered languages and histories), and financial harm to organizations and individuals. The AI's role was pivotal in producing the spreadsheet that guided these decisions. Hence, this event meets the criteria for an AI Incident as the AI system's use directly led to realized harms including rights violations and harm to communities.

You're Going To Want To Watch This DOGE Staffer Try To Define DEI

2026-03-11
HuffPost
Why's our monitor labelling this an incident or hazard?
The event involves the use of ChatGPT, an AI language model, to make decisions about grant funding, which directly resulted in harm by canceling grants supporting marginalized groups and cultural projects. This constitutes a violation of rights (equal protection clause) and harm to communities (cultural and historical preservation). The AI system's outputs were pivotal in these decisions, making this an AI Incident under the framework.

Trump's DOGEbro struggling in deposition to define what was 'DEI' about federal grants he flagged for cuts

2026-03-11
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in the methodology to flag grants for cancellation. The AI system's outputs were used to make decisions that led to the cancellation of significant funding, disproportionately impacting Jewish-themed projects. This is a direct link between the AI system's use and harm to intellectual property rights and possibly other fundamental rights. The harm has already occurred as grants were canceled based on AI classification, meeting the criteria for an AI Incident rather than a hazard or complementary information.

DOGE bros depositions reveal ChatGPT process for gutting 'DEI' grants

2026-03-13
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves the use of ChatGPT, an AI system, in the decision-making process for cutting government grants. The AI's outputs were directly used to identify and eliminate grants related to DEI, leading to the loss of vital funding for research and arts programs. This harmed communities that rely on these grants and disrupted access to public programming, which aligns with the harm-to-communities and rights-violation criteria under the AI Incident definition. The involvement of AI in these harmful outcomes, combined with the lack of expertise and oversight, confirms this as an AI Incident rather than a hazard or complementary information.

Lawsuit says DOGE used ChatGPT to tag Jewish-themed grants as DEI, then canceled them

2026-03-13
The Times of Israel
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in the methodology for selecting grants to cancel. The AI's outputs were used to classify grants as DEI-related, which directly led to the cancellation of many grants, including those focused on Jewish studies. This caused harm to academic communities and violated rights related to intellectual property and labor by disrupting funded research. The harm is realized and directly linked to the AI system's use, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Former DOGE employees give an inside look at the Elon Musk-led agency

2026-03-13
Mashable
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to make decisions about cutting government programs, which led to the defunding of programs supporting minority and marginalized groups, causing harm to communities and violating principles of equity and inclusion. The employees' inexperience and reliance on AI outputs exacerbated these harms. The event involves the use of AI leading directly to realized harm, fitting the definition of an AI Incident. The misuse of social security records by an employee is a separate issue and does not negate the AI-related harm caused by the funding cuts.

DOGE bros exposed: Depositions from Elon Musk's team reveal ChatGPT process for gutting 'DEI' grants

2026-03-13
Democratic Underground
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI system, was explicitly used by officials to make sweeping funding decisions that resulted in the abrupt termination of numerous grants, directly harming communities dependent on this funding. The harm is realized and directly linked to the AI system's use in the decision-making process. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused through the AI system's deployment in grant administration.

Ex-DOGE Staffers Admit Using AI to Gut Diversity Programs, Federal Grants

2026-03-13
The Root
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT by DOGE staffers to guide decisions on terminating DEI-related grants and programs. The AI system was used to classify content as DEI or not, influencing the elimination of grants supporting marginalized communities. This use of AI led to realized harm, including discriminatory layoffs and the cutting of grants that support minority groups, which are violations of human rights and labor rights. The AI system's involvement in these discriminatory decisions and their harmful consequences meets the criteria for an AI Incident under the OECD framework.

Former DOGE Staffer Helped Flag 'LGBTQ+' Grants With ChatGPT and No Expert Input

2026-03-13
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT to analyze grant summaries and generate explanations that influenced the cancellation of grants related to LGBTQ+ and DEI topics. The AI system's outputs were used by non-expert reviewers without scholarly input, leading to the termination of funding for projects supporting marginalized groups. This constitutes indirect harm to communities and a violation of rights, as the AI-assisted process contributed to discriminatory outcomes. Hence, the event meets the criteria for an AI Incident due to the AI system's role in causing realized harm.

DOGE Bros Used ChatGPT To Gut DEI, But Couldn't Define It

2026-03-13
News One
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in the evaluation and termination of federal grants. The AI system's outputs were relied upon by inexperienced staff without proper expertise or oversight, leading to wrongful termination of grants supporting marginalized communities and cultural preservation. This resulted in harm to communities and potential violations of rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as grants were cut based on AI-assisted decisions. The flawed use of AI in this context directly contributed to these harms, justifying classification as an AI Incident.

DOGE Bros Used ChatGPT To Gut DEI, But Couldn't Define It

2026-03-13
The Urban Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of ChatGPT, an AI system, in the process of reviewing and terminating federal grants. The AI system's outputs were directly used to flag grants for termination, which led to harm including the disruption of funding for projects related to marginalized communities and cultural preservation. This constitutes harm to communities and a violation of rights, fulfilling the criteria for an AI Incident. The involvement of the AI system in the use phase, combined with the resulting harm, supports this classification.

Former DOGE staffers testify that they used ChatGPT to cancel LGBTQ+ grants

2026-03-13
PinkNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the grant review process, which led to the cancellation of grants referencing LGBTQ+ and other diversity-related topics. This resulted in harm to communities and a violation of rights, as alleged in the lawsuit. The AI's role was pivotal in flagging grants for cancellation based on keyword presence without clear definitions or nuanced understanding, leading to discriminatory outcomes. Hence, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

Saturday Morning Covfefe: 5 Stories That Made Me Reach for Stronger Coffee

2026-03-14
Democratic Underground
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system involved in the decision-making process for grant funding cuts. The use of AI by non-experts to judge humanities projects tied to political directives suggests a risk of harm to communities or violation of rights if decisions are biased or unjust. Since the article discusses the use of AI in this context without confirming realized harm, it fits the definition of an AI Hazard, as the AI's involvement could plausibly lead to harm but no direct harm is confirmed yet.

Former DOGE Staffers Say ChatGPT Was Used to Cancel LGBTQ+ Research Grants

2026-03-15
Star Observer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to evaluate grant applications, which directly influenced the cancellation of grants related to LGBTQ+ and other DEI topics. This led to harm in the form of violation of rights (academic freedom, discrimination) and harm to communities (marginalized groups losing research support). The AI system's role was pivotal in the decision-making process, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as funding was terminated based on AI analysis.

DOGE cancelled a $349,000 grant to replace a museum's HVAC after ChatGPT flagged it as DEI, court documents show

2026-03-19
Fortune
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI system, was explicitly used to determine whether grant proposals were DEI-related, influencing the cancellation of funding. The AI's output directly affected the decision to cancel a $349,000 grant for a museum's HVAC replacement, which harmed the museum's ability to preserve and provide access to its collections. This constitutes indirect harm to the community and potentially violates rights related to equitable access to cultural resources. The AI's role was pivotal in the decision-making process leading to this harm, meeting the criteria for an AI Incident.

ChatGPT Flagged A Museum HVAC Replacement Grant As DEI So DOGE Cancelled It, Court Documents Reveal

2026-03-20
Black Enterprise
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI system, was explicitly used by DOGE staff to evaluate grant proposals for DEI content. Its output directly led to the cancellation of a grant critical for the museum's HVAC replacement, causing operational disruption and financial harm. The AI system's involvement in decision-making that led to harm to property (the museum's infrastructure) and economic harm to people (job losses and livelihoods) fits the definition of an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in the chain of events.

DOGE staffers in their 20s used AI and inexperience to terminate DEI grants

2026-03-16
TheGrio
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by staffers in making critical funding decisions. The AI's involvement, combined with the staffers' inexperience and biased application, directly led to harm in the form of violations of rights (First Amendment and equal protection) and harm to communities (disproportionate funding cuts to marginalized groups). Therefore, this qualifies as an AI Incident because the AI system's use was a contributing factor in causing significant harm to protected groups and their rights.

Former DOGE Staffers Used ChatGPT to Flag LGBTQ Grants

2026-03-16
Metro Weekly
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the development and use phase to flag grants referencing LGBTQ and other marginalized groups, leading to the cancellation of these grants. This action caused harm by violating rights related to academic freedom, intellectual property, and potentially fundamental rights of marginalized communities. The AI's role was pivotal in filtering and flagging grants without proper human expertise or consultation, resulting in widespread harm. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm involving violations of rights and harm to communities.