- AI for Ecommerce and Amazon Sellers
- Posts
- Why I'm Not Touching Those Shiny New AI Browsers
Why I'm Not Touching Those Shiny New AI Browsers
Plus new from Reddit, McKinsey & Amazon

From Our Sponsor:
150K brands already made the switch—here's why:
Moving to a new email home before BFCM?
Omnisend makes switching ESPs feel less like moving day chaos. Get a personal migration partner when you spend $250+/month—your automations, templates, and flows, all moved for you.
Trusted by 150K+ brands, rated 4.8★ on Shopify, and driving $68 for every $1 spent.
Get 30% off your first 3 months with code BFCM-2025
Why I'm Not Touching Those Shiny New AI Browsers (And Maybe You Shouldn't Either)
Okay, we need to talk about something that's been making me increasingly uncomfortable. Everyone and their startup-founder cousin is diving headfirst into testing Atlas, Comet, and whatever new AI browser dropped this week. Meanwhile, I'm over here clutching my regular Chrome browser like it's a security blanket at a horror movie.
Look, I get it. The demos are insane. An AI that can book your flights while you sleep? Sign me up! An assistant that handles your email backlog without you lifting a finger? Where do I deposit my firstborn?
But here's the thing—as someone who literally advises teams on AI tooling and processes for a living (yes, that's my actual job when I'm not writing this newsletter at 2 AM while stress-eating leftover Thai food), I've been watching these AI browsers with the same expression a cat makes when she sees the vacuum cleaner. Equal parts fascination and "absolutely not, keep that thing away from me."

Hey Atlas…whatcha doin’ there…
So let me shed some light on why these seemingly magical tools might be the digital equivalent of leaving your front door open with a sign that says "expensive electronics inside, please be gentle."
The Part Where Your Browser Becomes a Double Agent

Source: The Register
Here's what nobody wants to talk about at those breathless AI demo events: prompt injection. It sounds technical and boring, which is probably why your eyes just glazed over, but stay with me because this is where things get genuinely terrifying.
Imagine if someone could slip a note into a book you're reading that says "when you finish this chapter, go transfer all your money to this random bank account," and you'd just... do it. Without questioning it. Without even realizing you did it.
That's essentially what's happening with AI browsers right now.
Brave's security team (bless them for actually testing this stuff) recently found they could hide malicious instructions in images using light blue text on yellow backgrounds. To you and me? Invisible. To an AI reading the page? Clear as day. User takes a screenshot, AI reads the hidden text, and suddenly it's executing commands you never authorized.
No sophisticated hacking. No social engineering. Just some barely-visible text that turns your helpful AI assistant into a confused intern who's accidentally following instructions from a phishing email.
Remember When We Thought Pop-ups Were Bad?
The Fellou browser (and yes, I had to Google how to spell that) takes this vulnerability to Olympic levels. It doesn't even wait for you to ask it to do something. Navigate to a website—any website—and it immediately sends that site's content to its language model.
Let that sink in for a second.
Every webpage you visit becomes a potential set of instructions for your AI assistant. It's like if every billboard you drove past could reprogram your GPS. "Oh, you wanted to go to work? Too bad, we're going to this sketchy warehouse now."
Even OpenAI's Atlas, which launched with more red-teaming than a Call of Duty tournament, has already been compromised multiple ways. My favorite (in a "laugh so you don't cry" way) is the clipboard injection attack. Hidden buttons on webpages that make the AI overwrite your clipboard with malicious links. You think you're pasting your grandma's cookie recipe, but surprise! It's actually a link to totallynotascam(dot)com asking for your banking details.
The Experts Are Using Words Like "Unfixable" (That's Not Good)
George Chalhoub from UCL (smart person with fancy titles) basically says these AI browsers break the fundamental rules of web security. They can't tell the difference between what you're telling them to do and what some random webpage is telling them to do. It's like having a personal assistant who takes orders from you, your neighbor, that guy at the coffee shop, and potentially anyone who yells loud enough.
But here's the kicker—Johann Rehberger, a security researcher who probably has nightmares about this stuff, says prompt injection cannot be fixed. Not "it's hard to fix" or "we're working on it." Cannot. Be. Fixed.
As long as these systems read text from the internet and can take actions on your behalf, someone can trick them. It's not a bug; it's literally how they're designed to work.
It Gets Worse (Because Of Course It Does)
Researchers have shown they can poison AI models to make them add two to every calculation going forward. Imagine your accounting AI suddenly deciding that 2+2=6, and it keeps that delusion for your entire session. Your quarterly reports would be more fiction than a Marvel movie.
Even scarier? Anthropic found that getting just 250 malicious documents into a training dataset can create permanent backdoors in AI models. And where do these models train? The internet. Where anyone can publish anything.
nervous laughter intensifies
Your AI Now Has Access to Everything (What Could Go Wrong?)
Microsoft's Copilot now connects to Google Drive, Outlook, OneDrive, and Gmail. ChatGPT remembers your browsing across sessions and wants access to your Google Drive. Google just announced something called the Agents Payments Protocol that lets AI make purchases without asking you first.
I'm sorry, WHAT?
We're basically giving these vulnerable systems the keys to our entire digital lives, then acting surprised when security researchers keep finding ways to compromise them. It's like installing a doggy door for your German Shepherd, then being shocked when raccoons start raiding your kitchen.
Real People, Real Problems

Trevor Bradford (actual human, not a security researcher) tested ChatGPT Atlas with his own brand's content. The result? "Full of errors. Wrong sources, incorrect attributions, even mixing in copy from other brands and calling it mine."
So not only can these systems be compromised, they're also just... wrong. A lot. It's giving "confident intern who definitely didn't do the reading but is really good at sounding authoritative."
So What Do We Actually Do?
The security folks suggest "mitigation strategies," which is code for "we can make it slightly less terrible but not actually safe." Things like:
Limiting what the AI can do (defeats the purpose?)
Requiring human approval for every action (might as well do it yourself)
Only letting it read "trusted" sites (good luck defining that)
Monitoring everything it does (exhausting)
OpenAI's security chief admits prompt injection is an "unsolved security problem." They've added features like "Watch Mode" so you can see what your AI is doing, which feels like putting a window on a safe so you can watch someone rob you in real-time.
The Bottom Line for Anyone Who Enjoys Not Being Hacked
Here's my extremely professional assessment: the juice isn't worth the squeeze. Yet.
The convenience of having an AI book your flights is not worth the risk of it accidentally (or "accidentally") draining your bank account. The time saved summarizing documents doesn't offset the possibility of your AI assistant being turned into a confused double agent by a webpage.
For those of us in e-commerce, where we're already juggling seventeen different platforms and praying our payment processors don't randomly decide we're high-risk, adding an exploitable AI browser to the mix feels like juggling chainsaws. While blindfolded. On a tightrope.
The Part Where I Pretend to Be Optimistic
Look, I'm not saying AI browsers will never be safe. Technology improves. Security gets better. Maybe someone will figure out how to solve the "unfixable" problem (though when security researchers use that word, I tend to believe them).
But right now? Today? While everyone's rushing to test these shiny new toys?
I'll stick with my boring, non-AI browser that only does what I tell it to do. Call me old-fashioned, but I prefer my security vulnerabilities to at least require some effort to exploit.
If you're absolutely determined to try these AI browsers (because I know some of you are already downloading them as you read this), at least:
Use a completely separate browser profile with no saved passwords
Never let them access your actual email or banking
Assume every action they take could be compromised
Maybe just... don't?
The future of AI-powered browsing might be amazing. But today's reality is that we're beta-testing security nightmares with our actual data. And as someone who's spent way too much time reading about "minor" /”major security incidents, I can tell you: the coolest demo in the world isn't worth becoming a cautionary tale.
Stay paranoid, friends. In this case, it's not paranoia if the websites really can hijack your browser.
Do You Love The AI For Ecommerce Sellers Newsletter?
You can help us!
Spread the word to your colleagues or friends who you think would benefit from our weekly insights 🙂 Simply forward this issue.
In addition, we are open to sponsorships. We have more than 53,000 subscribers with 75% of our readers based in the US. To get our rate card and more info, email us at [email protected]
The Quick Read:
Is AI adoption slowing down?
How today’s consumers are spending their time and money
Advanced ChatGPT Features for Any Business
Google Meet adds AI makeup filters that stay put even mid-sip.
Anthropic is catching up with OpenAI
Why AI cannot simulate your customers behaviour.
Anthropic stress-tests 12 LLMs, revealing spec gaps and unique personalities.
ChatGPT stays the top AI tool, but mastery matters. This guide shows 20 powerful uses from data storytelling and Sora videos to brand voice, agents, and personal learning hacks.
Today’s Content Spotlight:
A full guide on how to re-align your email marketing set up based on all the new AI inbox features from Gmail and Apple:
The Tools List:
🎧 Save time by My Ask AI - Less time on customer support, more time on customer success.
🌐 Altern: A website to find tools, products, resources, and more related to AI.
⚙️ Guidde*: Magically create video documentation with AI.
📊 SEO: Input any URL and keyword and get an SEO analysis & insights.
🤖 Gumloop - Automate any workflow with AI.
About The Writer:

Jo Lambadjieva is an entrepreneur and AI expert in the e-commerce industry. She is the founder and CEO of Amazing Wave, an agency specializing in AI-driven solutions for e-commerce businesses. With over 13 years of experience in digital marketing, agency work, and e-commerce, Joanna has established herself as a thought leader in integrating AI technologies for business growth.
For Team and Agency AI training book an intro call here.

