
Pentagon Moves Toward Potential Blacklisting of Anthropic

The Defense Department has begun surveying major contractors about their reliance on Claude, signaling that it may formally designate Anthropic a “supply chain risk.”

By Behind the Tech · Published a day ago · 3 min read

What Happened

The U.S. Department of Defense has taken an early procedural step toward potentially labeling Anthropic as a “supply chain risk,” according to reporting from Axios.

On Wednesday, the Pentagon contacted two major defense contractors — Boeing and Lockheed Martin — asking them to assess and report on their exposure to Anthropic’s AI model, Claude.

A Lockheed Martin spokesperson confirmed the Defense Department requested an analysis of its reliance on Anthropic ahead of a possible “supply chain risk declaration.” Boeing’s defense division said it currently has no active contracts with Anthropic.

The Pentagon reportedly plans to reach out to other major defense contractors — often referred to as “traditional primes” — to determine how and where Claude is being used across military supply chains.

Why This Is Significant

A “supply chain risk” designation is typically reserved for companies tied to adversarial governments — such as Chinese telecom giant Huawei. Applying it to a leading U.S.-based AI company would be unprecedented.

If enacted, such a designation could:

Prohibit defense contractors from using Claude in military-related work

Force contractors to disentangle existing integrations

Potentially damage Anthropic’s broader enterprise relationships

At the same time, Claude is reportedly the only AI model currently operating inside classified U.S. military systems. The Pentagon is said to be impressed with Claude’s technical performance but frustrated by Anthropic’s refusal to lift certain safeguards.

The Core Dispute

Anthropic has declined to remove restrictions that block:

Mass surveillance of Americans

Fully autonomous weapons that operate without meaningful human oversight

The Pentagon reportedly wants the ability to use Claude for “all lawful purposes” without having to seek case-by-case approval.

Defense Secretary Pete Hegseth has given Anthropic CEO Dario Amodei a deadline — 5:01 p.m. Friday — to comply with the Pentagon’s terms.

If Anthropic refuses, the administration could either:

Invoke the Defense Production Act (DPA) to compel cooperation while maintaining access to Claude, or

Proceed with the supply chain risk designation.

The Pentagon has said it is preparing to execute whatever decision the Secretary makes.

Strategic Context

Claude has reportedly been deployed in classified systems and used through Anthropic’s partnership with Palantir. It is viewed internally as highly capable in several military use cases.

However, competitors are positioning themselves as alternatives:

xAI has signed an agreement to move into classified systems under the “all lawful use” standard.

Google and OpenAI are reportedly negotiating similar access to classified environments.

Pentagon officials have indicated that any company entering classified contracts would need to align safeguards with the Department’s operational requirements.

Reality Check

Asking contractors to assess their reliance on Claude is not the same as banning it outright. The move could be:

A pressure tactic aimed at forcing Anthropic to soften its stance,

Or a genuine preparation step for a formal designation.

Anthropic has so far maintained its position on autonomous weapons and domestic surveillance, framing its limits as necessary guardrails.

At the same time, the company has been expanding rapidly in enterprise markets and raising significant funding. A supply chain risk label could harm its defense-related growth — but it could also strengthen its brand among customers and employees concerned about an AI arms race.

What to Watch

Whether Anthropic meets the Friday deadline

Whether the Pentagon invokes the Defense Production Act

Whether Google, OpenAI, or xAI adjust their own safeguard policies to secure classified contracts

How defense contractors respond if formally required to remove Claude

This episode is shaping up as one of the first major clashes between AI vendor guardrails and military procurement authority — and it may set a precedent for how AI safety policies interact with national security demands going forward.

