The $200 Million Question

Some interesting food for thought...

See Me Geek Out About AI LIVE:

FIRST UP: I am so excited to be speaking at The Prosper Show this year. As an extra bonus for my audience, I wrangled a discount code. You can either get a FREE Expo-only pass or 15% off the All-Access pass. Use jlambadjieva at checkout.


And if you are in Europe, come and join the European Seller Conference, happening in Prague, March 18-21, 2026! 🚀

Here is a special coupon code - JOANNA50 - that gives a €50 discount.

The $200 Million Question

Okay, I need to talk about something that isn't about product listings or schema markup or how to make your AI-generated ad copy slightly less unhinged. Most weeks, this newsletter lives in the practical: optimise this, prepare for that, build systems that make your operations faster before your competitors do. But sometimes a story breaks that sits underneath all of that: the kind of thing that touches the infrastructure of trust your entire AI-powered business is quietly standing on.

This is one of those weeks. And honestly, I've been going back and forth about whether to write about it (my partner can confirm I've been muttering about Pentagon contracts while making dinner, which is apparently "not normal behaviour"). But I think not covering this would be doing you a disservice, because the questions it raises are ones every person spending money on AI tools will eventually have to answer. So let's get into it.

The Standoff

[Image: Dario Amodei. Source: Business Insider]

On Friday, February 27, 2026 (today, as I write this, which is either excellent timing or terrible timing depending on how you feel about deadlines), a deadline expires. The US Department of Defense has given Anthropic, the company behind Claude, until 5:01pm ET to agree to let the military use its AI model for "all lawful purposes." No conditions, no guardrails, no "well, actually."

If Anthropic refuses, the Pentagon has threatened to cancel the company's $200 million defence contract, label it a "supply chain risk" (a designation usually reserved for, you know, foreign adversaries), and potentially invoke the Defense Production Act to compel compliance. Which is the government equivalent of saying "nice AI company you've got there, shame if something happened to it."

Anthropic has refused.

CEO Dario Amodei said the company "cannot in good conscience" comply. Their two red lines: no mass domestic surveillance of American citizens, and no fully autonomous weapons systems, the kind that select and engage targets without a human in the loop.

Now, before anyone paints this as a pacifist tech company clutching its pearls at the mere mention of defence work, let's be clear about what Anthropic is not objecting to. They've actively pursued military contracts. They were the first frontier AI company to deploy models on classified government networks. Claude is already used across the Department of Defense for intelligence analysis, operational planning, cyber operations, and modelling. They agreed in December to expand into missile defence and cyber defence. They cut off access to firms linked to the Chinese Communist Party, walking away from several hundred million dollars in revenue.

This isn't a company that's squeamish about defence. This is a company that's drawn a line and said "this far, no further." Which, depending on your worldview, is either principled or commercially suicidal. (Possibly both. The Venn diagram of those two things has more overlap than people like to admit.)

Who Said Yes (And Why That Matters)

Here's where it gets instructive for anyone choosing between AI providers. And yes, I know that sounds like a weird pivot, but stay with me.

Of the four companies awarded Pentagon contracts worth up to $200 million each last July (Anthropic, OpenAI, Google DeepMind, and Elon Musk's xAI), Anthropic is the only one that has refused to accept the "all lawful purposes" standard as-is.

xAI signed a deal earlier this week to deploy Grok on classified military networks under exactly the terms Anthropic rejected. This happened days after Grok drew global criticism for generating sexualised deepfake images, including of minors. So when you're evaluating what "safety-conscious" means across different companies, maybe hold that particular detail in one hand while you weigh the options. (I'm not editorialising. I'm just presenting the timeline. The timeline is doing the editorialising for me.)

OpenAI agreed to join the Pentagon's unclassified AI network. According to Semafor, some OpenAI employees felt it was important to make ChatGPT available to the military partly to avoid ceding ground to xAI's Grok. OpenAI had also quietly removed language from its usage policy that explicitly banned weapons development and military warfare applications, a revision made back in 2024 that attracted about as much attention as the terms and conditions update on your favourite app. Which is to say: almost none.

Google is reportedly close to finalising its own agreement, though the internal situation is messier. Over 100 Google DeepMind employees signed a letter this week urging management to adopt Anthropic's red lines. This echoes 2018, when employee opposition forced Google to abandon an earlier Pentagon AI contract, though the company has since centralised its military contract decisions and, shall we say, streamlined certain dissent mechanisms. (Corporate speak for "we reorganised the org chart so that doesn't happen again.")

So the scoreboard reads: three companies that have agreed, or are about to agree, to let the military use their AI however existing law permits. And one company that's prepared to lose $200 million and potentially its entire government business over two specific objections.

Both Sides of the Table

The Pentagon's position has its own logic, and it's worth understanding before evaluating it.

Defence officials argue they're asking for nothing more than the right to use a licensed commercial product for any lawful purpose. They point out that mass domestic surveillance is already illegal, making Anthropic's objection redundant. Emil Michael, the Pentagon's chief technology officer, has framed this as a straightforward commercial question: if you want defence revenue, don't impose restrictions beyond what the law requires. Which, as arguments go, is clean and simple and only slightly undermined by the part where they threatened to invoke wartime production laws against a company that builds chatbots.

There's also a military readiness argument. The Pentagon doesn't want to negotiate with a private company about whether a particular use case falls within contractual guardrails during a time-sensitive operation. Defence officials raised hypothetical scenarios during negotiations, including how Claude's safeguards might affect response to an intercontinental ballistic missile launch. Which is a fair point, even if the image of Pentagon officials stress-testing a chatbot's ethical boundaries while a missile is inbound feels like the plot of a film I'd definitely watch but would also give me anxiety.

Anthropic's counter rests on two arguments: values and technical capability.

On values: Amodei argues that legal protections against mass surveillance haven't kept pace with what AI can actually do. Under current law, the government can purchase detailed records of Americans' movements, browsing habits, and associations from commercial sources without a warrant. AI makes it possible to assemble this data into comprehensive profiles automatically and at scale. Anthropic's position is that "currently legal" and "wise to build" are not the same sentence.

On technical reliability: Anthropic contends that frontier AI models, including Claude, simply are not dependable enough to power fully autonomous weapons. The models hallucinate. They make errors. In a context where errors mean unintended escalation or civilian casualties, removing human oversight isn't a policy choice. It's a technical recklessness problem. The company offered to work directly with the Pentagon on improving reliability. The Pentagon declined.

And there's a contradiction in the government's own position that Amodei highlighted: the Pentagon has simultaneously threatened to label Anthropic a security risk and invoke the Defense Production Act to force the company to keep providing its technology. Designating Claude as both a national security threat and essential to national security at the same time. (Pick a lane, guys.)

Why This Matters If You Sell Things Online

If you're reading this thinking "I sell phone cases on Amazon, why should I care about Pentagon contracts," I get it. You're not building weapons systems. You're not conducting surveillance. You're trying to rank products and write better copy and figure out why your ACOS went through the roof last Tuesday.

But here's the thing, and I promise this connects: the Anthropic-Pentagon standoff crystallises a question that's quietly embedded in every AI tool you're paying for. What are the values of the company whose technology you're building your business on? And do those values hold when the pressure gets real?

Every major AI model you might use, whether for product research, content generation, customer analysis, or competitive intelligence, is built by a company that is simultaneously negotiating with governments and militaries about what its technology can and cannot do. The guardrails on the tools you use today are not permanent features. They are policy decisions made by companies under varying degrees of commercial and political pressure. And as we've just seen, those policies can change quietly, quickly, and under circumstances that have nothing to do with your product listings.

OpenAI removed its military use prohibition in 2024. Google is preparing to agree to terms it once found objectionable enough to cancel a contract over. xAI signed up this week with a model that has already demonstrated significant safety failures. Anthropic held its position and may lose its largest government contract today.

None of this tells you which tool to use for your next batch of bullet points. But every subscription you pay, every API call you make, every tool you integrate is an economic vote.

I'm not telling you to boycott anyone. I'm telling you that the AI industry has reached a point where the commercial tools you use for your business and the military tools used in conflict zones are built by the same companies, on the same models, funded by the same revenue streams. Your purchasing decisions exist within that reality whether you engage with it or not.

Do You Love The AI For Ecommerce Sellers Newsletter?

You can help us!

Spread the word to your colleagues or friends who you think would benefit from our weekly insights 🙂 Simply forward this issue.

In addition, we are open to sponsorships. We have more than 65,000 subscribers with 75% of our readers based in the US. To get our rate card and more info, email us at [email protected]

The Quick Read:

The Tools List:

📊 Numerous AI - Stop spending time on spreadsheet busywork.

🤖 Forefront - Your AI assistant for work

👤 Neiro AI - Generate video AI avatars with human-like features, micro-expressions, and emotions.

🩻 Mockey - Generate high-quality product mockups with 1000+ templates.

🦾 Browse AI - Train a robot in 2 minutes to extract and monitor data from any website

💻 Brainner - Streamline talent acquisition by automating AI-driven resume analysis

About The Writer:

Jo Lambadjieva is an entrepreneur and AI expert in the e-commerce industry. She is the founder and CEO of Amazing Wave, an agency specializing in AI-driven solutions for e-commerce businesses. With over 13 years of experience in digital marketing, agency work, and e-commerce, Joanna has established herself as a thought leader in integrating AI technologies for business growth.

For Team and Agency AI training book an intro call here.

What did you think of today's email?

🙂 Good, not great

🙄 It sucked