The Anthropic Test: Why Every Organization Needs to Define Its AI Non-Negotiables Now

The dispute between Anthropic and the Department of Defense isn’t primarily a story about one company and one contract. It’s a live-fire exercise in AI governance—and every organization with an AI strategy should be watching carefully.
What Actually Happened
Here’s the compressed version of a fast-moving story.
In the summer of 2025, Anthropic secured a Pentagon contract reportedly worth around $200 million to support U.S. national security applications, including use of its Claude model on classified networks. That’s not unusual. AI is moving into sensitive institutional spaces fast, and defense agencies are no exception.
What happened next is where the story gets instructive.
In January 2026, Defense Secretary Pete Hegseth issued a memo pushing “any lawful use” language into all Defense Department AI contracts—a directive that directly collided with Anthropic’s existing guardrails. The company’s two core red lines were clear: no use of its systems for mass surveillance of American citizens, and no use for fully autonomous weapons—systems capable of firing without meaningful human control.
On February 24, Hegseth met Anthropic CEO Dario Amodei at the Pentagon and reportedly warned that non-compliance could result in the company being designated a “supply chain risk” and potentially subjected to a Defense Production Act order. The following day, the Pentagon sent what was described as a “best and final offer” demanding removal of model-level guardrails and agreement to unrestricted lawful use.
Anthropic refused.
By February 27, the Trump administration had ordered federal agencies and military contractors to cease doing business with Anthropic. Hegseth publicly applied the “supply chain risk” label—a designation historically reserved for foreign firms deemed national security threats, now applied to a domestic AI company over ethical constraints. Anthropic announced it would challenge the designation in court.
Meanwhile, OpenAI reportedly moved in, signing a deal to provide AI for Pentagon classified networks under the terms Anthropic had declined.
Why This Is a Trust Crisis, Not Just a Contract Dispute
Let’s be precise about what’s at stake, because the framing matters.
The Pentagon’s position—that a private company’s categorical restrictions on otherwise lawful uses are impractical and inappropriate—isn’t unreasonable on its face. Operational contexts are complex. Legal determinations are contextual. Powerful institutions are rightly skeptical of ceding decision authority to vendors.
But “any lawful use” is a floor, not a strategy. Legality tells you what you’re permitted to do. It says nothing about what you should do, or what consequences cascade through systems at scale when you do it.
Anthropic’s two AI non-negotiables—no mass surveillance of citizens, no autonomous weapons without human control—aren’t arbitrary brand positioning. They map directly onto longstanding debates in international humanitarian law about weapons that select and engage targets without human judgment, and onto human rights frameworks concerned with pervasive surveillance infrastructure. These aren’t fringe concerns. They’re the kinds of commitments that allow institutions, publics, and partners to extend trust.
The unprecedented nature of designating a domestic AI company a “supply chain risk” over ethical guardrails should register as a warning signal. Not a partisan one—a structural one. If the mechanism for enforcing government AI preferences is to label ethical constraints as national security liabilities, we’ve created a chilling effect that extends far beyond any single company’s contract decisions. Every vendor watching this story now knows what holding a red line might cost.
That’s the systemic risk. And it lands squarely in the lap of every organizational leader making AI governance decisions right now.
Worth noting, too: Claude had already been used inside the Pentagon for intelligence assessments and battlefield simulations before this dispute. Anthropic never objected to military use broadly. The dispute was specifically about whether guardrails on the most consequential uses—autonomous weapons, mass surveillance—would remain intact. That’s a meaningful distinction, and it’s one every organization should internalize: your AI non-negotiables don’t have to cover everything. They have to cover the right things.
The Leadership Lesson: Your AI Non-Negotiables Must Exist Before the Pressure Test
Here’s what I see in this story as a systems thinker: Anthropic’s guardrails held not because they were written on a slide somewhere, but because they were operationalized. They were built into the model, encoded into contracts, and articulated publicly—before a powerful stakeholder pushed back.
That sequence matters enormously.
Values defined in the moment of pressure are not values. They’re negotiations. And values lost in negotiation are not values. They’re collateral. The only commitments that survive contact with powerful institutions are the ones already embedded in your contracts, your product design, and your public communications before the phone rang.
Most organizations haven’t done this work. They have AI principles. They have usage policy documents. They have responsible AI task forces. What they often don’t have is a set of pre-committed, operationalized AI non-negotiables—with explicit rationales, embedded in licensing terms, tested through scenario planning, and communicated to every major partner and customer in advance.
This is the core of what I work on with organizations through KO Insights—the gap between values that live in decks and values that hold under pressure. Anthropic’s case demonstrates that closing that gap is possible. It also demonstrates what it costs. Every leader needs to decide, in advance and with clear eyes, which costs they’re willing to pay.
A Practical AI Non-Negotiables Checklist for Leaders
This isn’t theoretical. Here’s where to start:
1. Define your non-negotiables. Identify three to five uses your organization will not support, even if they’re legal and potentially lucrative. Think: mass biometric surveillance, autonomous lethal targeting, manipulative behavioral profiling. Tie each red line to an explicit legal, ethical, or safety rationale—not just reputational risk. Vague discomfort doesn’t survive a high-stakes negotiation.
2. Map who gets harmed. For each AI use case your organization might enable or encounter, map the affected parties: end users, employees, communities, democratic institutions. Make harm scenarios concrete. “Discrimination in high-stakes decisions” is too abstract. “A hiring model trained on biased historical data that systematically screens out qualified candidates from underrepresented groups” is actionable.
3. Publish your governance commitments. Translate values into public-facing commitments: usage policies, acceptable-use statements, contract clauses. Ensure your largest customers and partners have seen—and ideally signed—those commitments before you’re in a pressure situation. Governance that lives only in internal documents provides very little protection when the moment arrives.
4. Bake red lines into contracts and product design. Add usage restrictions directly into licensing terms. Where technically feasible, make certain harmful configurations difficult or impossible to implement by design (a minimal sketch of what that can look like in code follows this list). “Responsible by default” isn’t just a value statement—it’s an engineering and legal discipline.
5. Rehearse the pressure test. Run tabletop scenarios: What would we do if a major government agency, strategic partner, or regulator demanded “any lawful use” as a condition of a significant contract? Pre-commit your answers. Document who makes the decision to walk away, and how you’d communicate it to employees, investors, and the public. The worst moment to figure out your answer is when someone is waiting on the other end of the line with a deadline.
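To make item 4 concrete, here is a minimal, hypothetical sketch of what “responsible by default” can look like at the product layer: every request declares its use-case tags, and those tags are checked against a deny-list of non-negotiable categories before any work happens. The category names, the tagging scheme, and the function names below are illustrative assumptions, not any vendor’s actual implementation.

```python
# Hypothetical sketch: enforcing AI red lines in code, not just in policy docs.
# Category names and functions are illustrative, not any real vendor's API.

PROHIBITED_USE_CATEGORIES = {
    "mass_surveillance",            # pervasive monitoring of citizens
    "autonomous_lethal_targeting",  # weapons that fire without human control
    "manipulative_profiling",       # covert behavioral manipulation
}


class ProhibitedUseError(Exception):
    """Raised when a request is tagged with a non-negotiable use case."""


def enforce_red_lines(request_tags: set[str]) -> None:
    """Reject any request whose declared use case crosses a red line."""
    violations = request_tags & PROHIBITED_USE_CATEGORIES
    if violations:
        raise ProhibitedUseError(
            f"Request refused: prohibited use categories {sorted(violations)}"
        )


def handle_request(request_tags: set[str], payload: str) -> str:
    """Gate every request through the red-line check before any processing."""
    enforce_red_lines(request_tags)
    # ... normal processing would happen here ...
    return f"processed: {payload}"


if __name__ == "__main__":
    print(handle_request({"contract_analytics"}, "quarterly review"))
    try:
        handle_request({"mass_surveillance"}, "city-wide face matching")
    except ProhibitedUseError as err:
        print(err)
```

The value of a gate like this isn’t that it can never be removed; it’s that removing it becomes a deliberate, reviewable engineering and legal decision rather than something a sales negotiation can quietly waive.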
If you’re looking for a framework to structure this work inside your organization, the KO Insights advisory practice is designed exactly for this kind of AI governance leadership.
The Larger Stakes
Analysts across the legal and policy spectrum have noted that other governments and militaries are watching this episode closely. The question it surfaces is a significant one: will major powers signal that AI used in sensitive and consequential contexts will be subject to legally and ethically grounded limits—or that maximum operational freedom is the governing principle?
That question doesn’t get answered in boardrooms alone. But it gets shaped there.
Every organization that defines its non-negotiables clearly, encodes them into real governance structures, and holds them under pressure is contributing to an answer. Every organization that treats values as negotiable when the contract is large enough is contributing a different one.
The Anthropic–Pentagon clash is not a cautionary tale about choosing ethics over revenue. It’s a reminder that the choice gets made whether you’re ready for it or not.
Better to be ready.
Kate O’Neill is a Thinkers50-recognized strategist, author, and keynote speaker. She is the founder and CEO of KO Insights and the author of What Matters Next and five other books on technology, humanity, and the future of decision-making.

