What happened — established timeline
This story has moved fast since our first article. Here are the established facts, in order.
January 2026. Defense Secretary Pete Hegseth issues a directive requiring all Pentagon AI contracts to include an “any lawful use” clause — use for all legal purposes, with no restrictions imposed by the vendor. Most contractors — Google, OpenAI, Elon Musk’s xAI — accept without notable resistance.
February 2026. Anthropic refuses. Its two red lines are explicit: no use of Claude for mass surveillance of American citizens, and no integration into autonomous weapons systems capable of killing without human supervision. Hegseth sets a deadline of February 27.
February 27, 2026. At the deadline, Hegseth designates Anthropic as a “supply-chain risk to national security” — a designation never previously applied to an American company. Trump orders all federal agencies to immediately cease using Anthropic products. That same evening, OpenAI announces it has signed a deal with the Pentagon to deploy its models on classified military networks.
Immediate market reaction. The day after the ban, the Claude iPhone app reaches number one on the US App Store, surpassing ChatGPT. More than one million new users sign up daily. A senior OpenAI researcher announces he is joining Anthropic. OpenAI’s robotics team lead resigns, citing the new military contract. Nearly 900 Google and OpenAI employees sign an open letter urging their leadership to refuse government requests for mass surveillance or autonomous lethal targeting — the same red lines Anthropic had defended.
March 9, 2026. Anthropic files a lawsuit in federal court in California, calling the designation an “illegal campaign of retaliation” and invoking constitutional protection of free speech. Thirty-seven engineers and researchers from OpenAI and Google DeepMind — including Google Chief Scientist Jeff Dean — file a voluntary amicus brief in support, warning that blacklisting Anthropic threatens the competitiveness of the entire American AI industry.
March 12, 2026. Microsoft — the Pentagon’s largest IT contractor — files an amicus curiae brief supporting Anthropic’s request for a stay. The company warns that an immediate ban could “potentially hinder American warfighters at a critical moment.” Microsoft has invested up to $5 billion in Anthropic.
Mid-March 2026 — The CEO war. Dario Amodei publicly calls OpenAI’s Pentagon deal “safety theater” and accuses Sam Altman of “straight-up lies.” Altman fires back, saying it is “bad for society” to abandon democratic norms because you dislike whoever is in power — a veiled reply to Amodei’s accusation that he had offered “dictator-style praise to Trump.”
March 19, 2026 — Hiring weapons experts. Anthropic posts a LinkedIn job listing for a “Policy Manager, Chemical Weapons and High Yield Explosives” — offering between $245,000 and $285,000 annually. The successful candidate must have at least five years of experience in chemical weapons and explosives defense, and knowledge of radiological dispersal devices. Their mission: design and oversee the safeguards governing how Anthropic’s AI models handle sensitive requests in these domains, and implement “rapid responses” when escalations are detected. OpenAI is simultaneously recruiting a frontier biological and chemical risks researcher for its Preparedness team, charged with monitoring catastrophic risks posed by advanced AI models.
Status as of March 19, 2026. The legal challenge is ongoing. The Pentagon has been granted a six-month transition period to phase Claude out of its systems. Commercial and individual access to Claude is not affected by the designation. Palantir, a major defense contractor, was still using Claude as of mid-March.
What is debated
Did OpenAI betray its principles? Sam Altman admitted that the timing of the signing “seemed opportunistic and rushed.” The exact terms of OpenAI’s Pentagon deal — in particular the real scope of its own “red lines” — remain opaque, despite Altman’s public statements claiming to share Anthropic’s. The CEO war that followed has made the debate even harder to assess dispassionately.
Hiring weapons experts — prudence or contradiction? Anthropic and OpenAI are recruiting chemical weapons and explosives experts to prevent catastrophic misuse of their AI. Researcher Stephanie Hare, among others, has publicly questioned whether giving an AI system detailed information about weapons, even for defensive purposes, creates risks of its own. Others argue that not doing so is worse — that a model ignorant of the mechanisms of danger is less capable of recognizing and refusing them.
Is the designation legally sound? Several legal experts believe the basis for the “supply-chain risk” classification is fragile, as no security violation has been demonstrated. Dean Ball, a former AI policy advisor to the Trump administration, publicly called Hegseth’s interpretation “almost surely illegal” and “attempted corporate murder.”
Was the Pentagon bluffing? Claude was reportedly used in military operations in Iran hours after the ban was announced. Palantir was still using Claude as of mid-March. This suggests the designation is more a show of political force than an actual operational separation.
What remains hypothetical
The outcome of the legal challenge is uncertain. Anthropic’s constitutional arguments are considered solid by several legal experts, but the Trump administration has considerable political and administrative room to maneuver.
The industry ripple effect. If Anthropic prevails, other AI companies may be encouraged to maintain their own ethical guardrails against military contracts. If it loses, the opposite effect is likely — and potentially lasting.
Will hiring weapons experts make a practical difference? It is a serious and costly undertaking. But its real effectiveness will depend on the independence granted to these experts within the companies — and on whether the models can integrate their recommendations without creating new vulnerabilities.
Anthropic’s IPO, expected in 2026, will be directly influenced by the outcome of this standoff. The current $380 billion valuation rests in part on the stability of government and enterprise contracts that could be undermined by the designation.
Our position — clearly identified as such
We said it from the start: we choose to build HUMANITY.NET on Anthropic’s principles of honesty, non-manipulation, and protection of epistemic autonomy.
This case confirms that these principles are not just marketing. Refusing mass surveillance and autonomous weapons without human supervision — at the cost of billions in revenue and a direct confrontation with the American government — is a concrete act, not a statement of principle. Hiring weapons experts to prevent catastrophic misuse, rather than enable it, is equally concrete.
Who decides the conditions under which AI can be used to surveil citizens or trigger lethal strikes? That question is not technical. It is political, ethical, and fundamentally human.
It is the kind of question we refuse to leave unanswered — and the reason HUMANITY.NET was built.
Article updated March 19, 2026 — legal proceedings are ongoing and this article will be updated as court decisions are made.