IMPLEMENTATION GAP ANALYSIS

When Governments Rewrite Contracts: The Anthropic Dispute and the Governance Vacuum Nobody Wants to Name

ISAR Global • 1 MAR 2026

The dispute between Anthropic and the US Department of Defense, which concluded last week with a presidential ban on federal use of Anthropic’s technology and an unprecedented “supply chain risk” designation against an American company, has generated considerable commentary about AI safety, military ethics, and corporate responsibility.

Most of that commentary has missed the more fundamental issue.

Whatever one’s view on autonomous weapons or domestic surveillance — and there are serious, legitimate arguments on both sides — the legal and governance reality of what occurred deserves considerably more scrutiny than it has received.

A Contract Was Already Signed

Anthropic entered into a legally binding transaction agreement with the Pentagon in July 2025. That agreement, worth up to $200 million, made Anthropic the first AI company approved for deployment on classified Defence Department networks. Critically, it included explicit restrictions preventing Claude from being used in fully autonomous weapons systems or for domestic mass surveillance of American citizens. The Pentagon agreed to those terms. The contract was signed.

Six months later, the administration issued a memorandum requiring all Pentagon AI contracts to incorporate “any lawful use” language — language that directly contradicted the terms already agreed. When Anthropic declined to accept retrospective amendment of a binding agreement, it was threatened with economic ruin through a designation mechanism previously reserved for foreign adversaries, and a presidential decree ordering every federal agency to immediately cease use of its products.

The question this raises is not primarily about AI. It is about whether government contracts with private entities mean anything at all.

The Incoherence Is the Point

Anthropic’s CEO Dario Amodei identified a contradiction that the administration never adequately resolved. The Pentagon simultaneously designated Anthropic a supply chain risk to national security — implying the company’s products were too dangerous to use — whilst invoking emergency production powers to compel Anthropic to continue supplying those same products on revised terms.

These two positions are not legally or logically compatible. A genuine security designation reflects a considered assessment that a supplier poses unacceptable risk. It is not a negotiating tactic deployed when a contractor declines to renegotiate mid-contract. The incoherence strongly suggests that the legal mechanisms were being deployed instrumentally — as levers of coercive pressure — rather than as genuine security determinations.

That matters, because it means the legal framework was subordinated to executive will. That is a governance failure of a different order entirely from anything Anthropic is alleged to have done.

The Resolution Tells the Real Story

Within hours of Anthropic’s designation, OpenAI announced a Pentagon deal covering the same classified networks. That agreement, by Sam Altman’s own account, contains the same substantive red lines Anthropic had sought: no domestic mass surveillance, and human oversight of autonomous weapons decisions.

If the Pentagon’s position was that those restrictions were operationally unacceptable — that military effectiveness genuinely required the ability to deploy AI in fully autonomous lethal systems or mass civilian surveillance — then no agreement with OpenAI containing identical restrictions should have been acceptable either. The fact that it was accepted, apparently without difficulty, reveals that the substance was never the issue.

The dispute was about authority. The administration’s position, stripped to its core, was that a private contractor may not impose conditions on government use of its products. Anthropic’s position was that a legally agreed contract means what it says. OpenAI found a form of words — referencing existing law and policy rather than explicit contractual prohibition — that allowed the administration to claim the principle whilst Altman secured the substance.

That is a pragmatic resolution. It is not a vindication of the rule of law.

The Legislative Vacuum That Made This Possible

Here is the governance reality that this episode makes impossible to ignore: the United States has no comprehensive legislative framework governing military use of AI. There are no statutes specifying permissible and impermissible applications. There is no independent oversight mechanism. There is no judicial process through which affected parties — whether companies, military personnel, or citizens — could seek clarity or remedy.

In the absence of legislation, everything defaults to contract law and executive discretion. Contract law can be overridden by coercion when one party holds sufficient economic and regulatory power. Executive discretion, by definition, reflects the preferences of whoever holds executive power at a given moment.

This is not a stable governance arrangement for technologies that may determine the character of warfare and the boundaries of civil liberty for generations. The Anthropic dispute did not create this vacuum. It exposed it.

What This Should Mean for Governance

ISAR Global’s analytical focus is the gap between governance rhetoric and governance reality. This episode is a precise illustration of that gap at its most consequential.

International AI governance frameworks — from the UN’s emerging Scientific Panel to the OECD’s AI Principles to the EU AI Act — address military and surveillance applications in varying degrees of specificity. None of them have enforcement mechanisms capable of constraining executive action of this kind. The rhetoric of responsible AI governance and human oversight of lethal systems is extensive. The reality, as demonstrated in Washington over the past week, is that when a government with sufficient power decides to act, existing frameworks provide no meaningful check.

That is not an argument for abandoning international governance efforts. It is an argument for taking seriously the distance between where those efforts currently stand and where they need to reach.

Anthropic took a principled position on a legally and ethically sound basis. It paid a severe commercial price for doing so. The administration ultimately accepted — through a different company — the same substantive protections it had spent weeks characterising as operationally intolerable.

The lesson is not that Anthropic was wrong. The lesson is that without legislative clarity, the next company in this position may not have the financial resilience or institutional courage to hold the line. And the one after that will not bother drawing one in the first place.

That is how governance vacuums become governance catastrophes.
