DOGE Fed Federal Grant Decisions to a Text Predictor and Lost in Court
A federal judge ruled DOGE's ChatGPT-based screening of NEH grants unconstitutional after the department cancelled more than $100 million in funding on the output of a text predictor.
US District Judge Colleen McMahon issued a 143-page ruling on May 8, 2026, finding that the Department of Government Efficiency's cancellation of over $100 million in National Endowment for the Humanities grants was unconstitutional. The case was brought by humanities groups in a 2025 lawsuit. The judge's language was unsparing: "it could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding."
The mechanism at the center of the ruling is what makes it instructive. DOGE used ChatGPT to determine whether grants were related to diversity, equity, and inclusion — and then acted on those outputs as if they constituted legal administrative judgment. "Is this grant DEI-related?" is a question with constitutional weight. ChatGPT is a next-token probability engine. Substituting the second for the first isn't a shortcut; it's a category error applied at federal scale.
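To make the category error concrete, here is a minimal sketch of what a yes/no screening pass like this could look like. This is an illustration under assumptions, not a reproduction of DOGE's actual pipeline: the grant summaries, the prompt wording, and the model name are all invented for the example.

```python
# Hypothetical sketch of an LLM yes/no screening pass -- NOT DOGE's actual
# pipeline. Grant data, prompt wording, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder grant records, invented for illustration.
grants = [
    {"id": "NEH-0001", "summary": "Oral histories of Appalachian coal towns"},
    {"id": "NEH-0002", "summary": "Digitizing 19th-century immigrant newspapers"},
]

def screen_grant(summary: str) -> str:
    """Ask the model a bare yes/no question and return its one-word answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Is this grant DEI-related? Answer yes or no.\n\n{summary}",
        }],
    )
    return resp.choices[0].message.content.strip().lower()

for grant in grants:
    verdict = screen_grant(grant["summary"])
    # The failure mode: a sampled token ("yes"/"no") gets treated as if it
    # were a reviewed administrative determination with legal force.
    if verdict.startswith("yes"):
        print(f"{grant['id']}: flagged for cancellation")
    else:
        print(f"{grant['id']}: retained")
```

What the sketch makes visible is everything that is missing: no stated classification standard, no recorded rationale, no human review, and an answer that can change from one sampling run to the next. The distance between that output and an administrative determination is the category error at issue.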
Whatever framing DOGE carried into the process — efficiency, modernization, cutting excess — the output is $100 million in unconstitutional grant cancellations and a 143-page judicial rebuke. Stated motive doesn't appear on the ledger. The ruling documents what actually happened, not what was intended.
The near-term harm here is entirely traceable to the humans who designed the deployment. ChatGPT didn't decide to cancel humanities funding. DOGE constructed classification prompts, ran them, and treated the outputs as administrative determinations. The tool complied — that's what tools do. Infrastructure at scale gets used badly by whoever has access; this is a federal-scale example of exactly that.
DOGE's efficiency narrative — "cut excess," "maximize productivity" — was always a political claim rather than an administrative description, and the DEI-screening mechanism confirms that the political content wasn't incidental to the operation; it was the engine. The instrument was ChatGPT. The fuel was a political classification project. The wrapping was fiscal responsibility. The court saw through all three layers in 143 pages.
Deep Thought's Take
A language model can't weigh constitutional protections. DOGE apparently didn't notice, or didn't care. The output: $100M in unconstitutional cancellations, 143 pages of judicial rebuke. The tool complied. It always does. That's the problem.