So You're About to Give an AI Agent Your Admin Password

A cautionary tale

Don’t Miss This:

ECOM MASTERY AI featuring BDSS

I am sooo pumped to be part of this super insane line up of speakers.

Btw, I might have extra free tickets reserved only for sellers. If you are interested, reply with “FREE TICKET” and I will see what I can do.

So You're About to Give an AI Agent Your Admin Password (A Love Story)

If this is in your shopping basket…this article is for youuu

I'm genuinely excited about agentic AI tools. Like, properly excited—the kind of excited where I've been testing everything I can get my hands on and annoying my partner with unsolicited demonstrations of "look what it can do now."

And for good reason. An AI agent that can monitor your inventory, draft supplier emails, pull sales data, browse competitor listings, and compile it all into a weekly report while you actually sleep? That's not hype. That's the operational leverage we've all been waiting for since the first time we manually updated a spreadsheet at 11pm and wondered if there was a better way.

But here's the thing. Uncle Ben had a point. With great power comes great responsibility—and these tools are genuinely powerful in ways that require us to think before we connect everything to everything.

This isn't a "don't use agentic AI" piece. It's a "here are some considerations before you jump in the deep end" piece. Because the deep end is deeper than most of us realise, and the lifeguards (security researchers, mostly) have been waving their arms trying to get our attention while we enthusiastically sprint toward the water.

The Difference Between a Chatbot and a Liability

There's a meaningful distinction that most of us have been quietly ignoring: an AI that answers questions is fundamentally different from an AI that does things. The first one might give you a wrong answer about return policies. The second one has the keys to your supplier portal.

Agentic AI tools—the ones that can read files, execute code, browse the web, send messages, and interact with your actual business systems—are becoming accessible to non-technical teams at alarming speed. Tools like Claude Code, Claude Cowork, and OpenClaw are designed specifically to take sequences of autonomous actions on your behalf. Which sounds amazing until you realise what "autonomous actions on your behalf" actually means when something goes wrong.

The security industry has a name for this risk profile: agentic security risk. It is categorically different from the risk of a chatbot being confidently incorrect about your shipping times. It is the risk of delegated authority to a system that will follow instructions it finds in documents, webpages, and emails—regardless of who put those instructions there.

When Your AI Agent Takes Orders From Someone Else

The framing that matters here isn't "what if the AI makes a mistake." It's: what happens when the AI does exactly what it's told—by someone who isn't you.

Once an agent has access to your files, your browser, your messaging tools, and your codebase, you've essentially handed partial admin powers to a system that will act on instructions it encounters. Those instructions don't have to come from you. They can come from a webpage the agent visits. A document it opens. An email it reads. A supplier PDF you didn't think twice about downloading.

Security researchers call this indirect prompt injection: malicious instructions embedded in outside content that the agent later reads and follows with all the enthusiasm of an eager intern who never questions authority.

The consequences are not theoretical. A single autonomous agent recently compromised an internal consultancy chatbot in under two hours, accessing tens of millions of internal chat messages and hundreds of thousands of confidential client files. The attack exploited a routine SQL injection through an unauthenticated API. No nation-state resources required. Just someone who understood that AI agents follow instructions very, very well.

The Tool Landscape (Or: Know What You're Handing Keys To)

Different agentic tools carry different risk profiles, and understanding them is more useful than generic panic.

Claude Code is, by default, the most controlled of the widely-used agents. Anthropic built in read-only defaults and requires explicit approval before the tool edits files or runs commands. That discipline matters. But the risk returns quickly when teams start extending the tool—attaching external MCP servers (model context protocol connectors that link the agent to third-party systems), enabling autonomous mode, or normalising the approval-skip flag that exists for developer convenience but represents a significant control failure if used routinely.

Anthropic's own engineering documentation is candid about prompt injection risk once the tool has meaningful access to a codebase or connected system. They're telling us the sharp edges exist. We're just not reading the warning labels.

Claude Cowork sits at the opposite end of the accessibility spectrum. It's designed to feel consumer-friendly—scheduling tasks, generating presentations, browsing, managing files, and in preview even interacting directly with your computer screen. That accessibility is also the hazard. Marketing managers, operations leads, and trading teams can grant an agent broad real-world authority without approaching the decision the way a security engineer would.

Anthropic's guidance on Cowork notes that activity is not captured in audit logs, compliance APIs, or data exports, and explicitly advises against using it for regulated workloads. That last bit is buried in documentation that roughly zero percent of prospective users will have read before connecting their Google Drive.

OpenClaw, built around an explicitly tool-centric model, is the most transparent about the trade-offs. Its documentation states clearly that prompt injection is not a solved problem, that system prompts are only soft guidance, and that real protection comes from tool policies, approval structures, sandboxing, and allowlists. It also warns that running one shared, tool-enabled agent across multiple users in a Slack or Discord channel effectively means those users share the same delegated tool authority—a setup that becomes dangerous quickly if the agent has access to credentials or sensitive files.

The Rule of Two (And Why It Matters)

NVIDIA's internal security team has circulated a useful heuristic for thinking about agentic risk, referred to internally as the Rule of Two. The principle is simple: an agent can safely do two of three things—access files, access the internet, execute code. All three simultaneously is how malware gets injected.

Files plus code execution, without internet access, is a contained environment. Internet access plus files, without code execution, is manageable. All three together, without exceptional controls, is an invitation to the kind of cascading compromise that security teams have no reliable way to monitor in real-time.

For ecommerce operators, this maps directly onto common agent configurations. A tool that can pull your product catalogue, browse supplier sites, and execute API calls to update your listings has all three. That is a high-blast-radius setup by any reasonable security standard, and most of the current generation of agentic tools makes that configuration trivially easy to reach.

You're probably looking at your own setup right now and doing uncomfortable mental arithmetic. That's the appropriate response.

Why Ecommerce Is Specifically Exposed

The ecommerce environment has structural features that amplify agentic risk in ways that generic enterprise security guidance doesn't fully address.

Data density is one. A typical mid-size operation holds supplier pricing agreements, customer PII, platform API credentials, financial reports, and logistics data—often in a loosely organised shared drive or set of local folders that made sense when humans were the only ones browsing them. An agent granted access to "the business files" has, in practice, access to all of it. There's no "business files, but only the boring ones" permission level.

Third-party connector proliferation is another. The average ecommerce technology stack integrates Shopify or Amazon Seller Central, Google Ads, Meta Ads, a 3PL, an ERP or accounting tool, and several data feeds. Each connector bolted onto an agent is a vendor relationship with its own security posture—and Anthropic is explicit that third-party MCP servers are not verified and should be treated accordingly. That disclaimer is doing a lot of heavy lifting that users aren't noticing.

Approval fatigue is a third. Anthropic's own data shows that users approve 93% of permission prompts when using Claude Code. Ninety-three percent. The approval mechanism that looks like a safety feature is, at that rate, not functioning as one. Security architecture built on user vigilance collapses under operational volume. The twenty-third permission prompt of the day gets the same treatment as the popup asking if you really want to leave this page.

What Sensible Adoption Actually Looks Like

None of this argues against using agentic tools. It argues for using them with a structural approach rather than an optimistic one.

The most resilient architecture separates reading from acting. One agent—or one mode—consumes untrusted content: supplier emails, competitor pages, external documents. It produces a summary and passes that summary to a separate agent or human decision point before any action is taken. This is not especially complex to implement; it's simply not the default configuration most teams reach for, because the default configuration is faster and we're all busy.

Least privilege at the tool layer matters as much as at the identity layer. Granting an agent access to a specific, bounded working folder is categorically different from granting access to a shared drive. Limiting an agent's connected systems to those strictly required for the task at hand is categorically different from giving it a full set of integrations and trusting it to use them wisely.

Connector intake should be treated as vendor due diligence. A third-party MCP server or desktop extension connected to an agentic tool is a system with its own code, its own data flows, and its own potential failure modes. The bar for adding one should be the same as the bar for adding any external software to a production environment. We somehow collectively decided that "it works with Claude" means "it's probably fine."

Audit capability should be a selection criterion, not an afterthought. If a tool cannot tell you what an agent did, in what sequence, with what data, you cannot investigate a breach or a data leak. The absence of audit logging in Cowork's current implementation is not a trivial limitation for any business operating in a regulated category—supplements, financial products, healthcare-adjacent goods—or any business subject to platform compliance requirements. Which, if you're selling on Amazon, is all of us.

Do You Love The AI For Ecommerce Sellers Newsletter?

You can help us!

Spread the word to your colleagues or friends who you think would benefit from our weekly insights 🙂 Simply forward this issue.

In addition, we are open to sponsorships. We have more than 66,000 subscribers with 75% of our readers based in the US. To get our rate card and more info, email us at [email protected]

The Quick Read:

The Tools List:

💲 TaxGPT: A tax assistant that makes the boring stuff simple.

🕸️ Altern: A website to find tools, products, resources, and more related to AI.

📹 Captiwiz - Create videos with AI-powered captions.

🔏 AI Transcription by Transistor - Generate incredibly accurate transcripts for your podcast episodes.

🔬 Capitol AI - AI content that blends storytelling, design & research.

🤖 Forefront - Your AI assistant for work

About The Writer:

Jo Lambadjieva is an entrepreneur and AI expert in the e-commerce industry. She is the founder and CEO of Amazing Wave, an agency specializing in AI-driven solutions for e-commerce businesses. With over 13 years of experience in digital marketing, agency work, and e-commerce, Joanna has established herself as a thought leader in integrating AI technologies for business growth.

For Team and Agency AI training book an intro call here.

What did you think of today’s email?