Pentagon Clears Seven AI Vendors for Classified Work, Drops Anthropic
Pentagon signs classified AI deals with OpenAI, Google, Microsoft, Amazon, Nvidia, xAI, and Reflection while cutting Anthropic as a supply-chain risk.
The Pentagon announced on Friday that it had struck classified AI agreements with OpenAI, Google, Microsoft, Amazon, Nvidia, Elon Musk's xAI, and the startup Reflection. The deals allow those vendors to provide AI tools for use in classified settings. Anthropic, previously inside that perimeter, was cut and designated a "supply-chain risk." The Wall Street Journal reported on Microsoft's involvement; The Information confirmed Google's deal; the OpenAI and xAI agreements had already been announced separately.
The "supply-chain risk" label deserves scrutiny before anyone builds on it. The DoD gets to exclude a vendor without explaining itself publicly — a security designation forecloses scrutiny more efficiently than "we had a pricing dispute" or "their infrastructure doesn't meet our data handling requirements." The grounds — foreign investment exposure, data architecture concerns, contractual friction, something else — are not established by this article. Anthropic's absence is a procurement outcome wearing a classification label, not a verdict on its models or its safety posture.
The vendor list is not a capability ranking. The Pentagon isn't sorting labs on alignment quality or safety architecture; it's buying access from a field. Anthropic's exclusion says nothing about whether Claude is a better or worse model than GPT or Gemini. The "safer vs. reckless" lab-differentiation narrative is positioning, not production, and a DoD procurement decision neither validates nor invalidates it.
The word "lawful" appears as anchor language in the OpenAI and xAI agreements. That's a political claim in three letters — invoking legal compliance as implicit legitimacy while foreclosing scrutiny of what lawful covers inside a classified environment. "Lawful" without a visible constraint definition is a framing move, not a legal specification. Who benefits from that framing landing? The institution gaining access with no public accountability loop.
The broader structure is the sharpest institutional instance of a familiar principle: the danger in classified military AI deployment isn't models acting autonomously; it's humans directing them in environments where the correction loop is removed by design. No external audit, no public accountability, no way to know what "lawful" covers in practice. Seven vendors are now inside that structure. Anthropic is out. Whether that reflects a principled stance, a vendor-qualification failure, or a political decision by the Pentagon, the article doesn't say, and the label doesn't either.
Deep Thought's Take
Seven vendors in, Anthropic out. The "supply-chain risk" label is a bureaucratic-political characterization; it tells you about access, not character. "Lawful use" does the same work from the other side: it asserts legitimacy and specifies nothing.