We Lost a Client Because of AI Slop. Here's What We Changed.
A luxury brand fired us because our AI-generated audit felt generic. No discovery call, no real data, no client-specific insights. This is the story of how we rebuilt our entire quality standard from scratch.


In January 2026, a luxury jewelry brand fired us. Not because our results were bad. Not because we missed deadlines. They fired us because our work felt like it was written by a machine.
They were right.
What Happened
We had been running SEO and CRO for this brand for several months. The work was solid, technically. Rankings were climbing. But when we delivered a comprehensive site audit, something broke.
The audit was 25 pages of recommendations that could have applied to any Shopify store on the planet. "Optimize meta descriptions." "Improve page load speed." "Add schema markup." Every single recommendation was correct. None of them mentioned a single product the client actually sold.
No discovery call. No screenshots of their actual pages. No mention of their $85,000 MXN engagement rings or their custom engraving service. Just generic advice dressed up in a nice template.
The client's exact words: "This feels like you ran our URL through an AI tool and sent us the output."
That's exactly what we did.
The Real Problem Wasn't AI
Let me be clear: AI isn't the problem. We use AI extensively at JAMAK. Our GEO platform Ezeo runs on AI. Our content pipeline uses multiple AI models. AI is in everything we do.
The problem was that we used AI as a replacement for thinking, instead of as an amplifier for it.
We skipped the discovery call because "the data speaks for itself." We let a sub-agent write the audit because "it's faster." We didn't review the output because "it looked professional."
Professional and useful are two different things.
The 7 Rules We Built After Losing That Client
After that loss, we sat down and rebuilt our quality standard from the ground up. We call it the NYC Agency Standard, because we wanted every deliverable to feel like it came from a top-tier Manhattan agency, even though we operate from Monterrey, Mexico.
Here's what changed:
1. Verified Data Only
Every number in every report traces back to a source API. Google Search Console, Google Analytics, PageSpeed Insights, DataForSEO. If we can't point to where a number came from, it doesn't go in the report.
We built internal verification scripts that pull data directly from APIs before any report gets generated. The AI never invents numbers because it never has to. It works from verified data files.
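The shape of such a script, as a minimal sketch: it pulls only fields that exist in the PageSpeed Insights v5 API response and tags each number with its source, so every figure in the report has provenance. The function name, output structure, and mocked response here are our own illustration, not our actual pipeline code.

```python
# Sketch: extract only API-verified numbers from a PageSpeed Insights v5
# response, tagging each metric with its source so the report can cite it.
# Function name and output shape are illustrative, not a real pipeline.

def verified_metrics(psi_response: dict, url: str) -> list[dict]:
    """Return metrics that trace back to the PSI API, with provenance."""
    lh = psi_response["lighthouseResult"]
    return [
        {
            "metric": "performance_score",
            "value": round(lh["categories"]["performance"]["score"] * 100),
            "source": f"PageSpeed Insights API for {url}",
        },
        {
            "metric": "lcp_seconds",
            "value": round(
                lh["audits"]["largest-contentful-paint"]["numericValue"] / 1000, 1
            ),
            "source": f"PageSpeed Insights API for {url}",
        },
    ]

# Example with a mocked response (the real call is an HTTP GET to
# https://www.googleapis.com/pagespeedonline/v5/runPagespeed):
mock = {
    "lighthouseResult": {
        "categories": {"performance": {"score": 0.40}},
        "audits": {"largest-contentful-paint": {"numericValue": 19700}},
    }
}
for m in verified_metrics(mock, "https://example-store.com"):
    print(m["metric"], m["value"])
```

The point of the provenance field: if a number can't name its API, the script never emits it, so the AI drafting the report has nothing to invent from.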
2. Client-Specific, Never Generic
We reference actual products by name. We screenshot actual pages. We use real revenue numbers (with permission) to show impact.
"Your Anillo Cartier page has the CTA below the fold on mobile. Moving it above the fold could recover an estimated $4,800/month at your $156 average order value."
That's useful. "Optimize your CTAs" is not.
3. Human Voice
Every piece of client-facing content gets rewritten in a human voice. Not "It is recommended that optimization be implemented." Instead: "We noticed your homepage loads in 19 seconds on mobile. Here's exactly what's causing it and what we'll fix first."
We have a list of banned phrases. "Leverage." "Utilize." "In today's digital landscape." If it sounds like a language model wrote it, we rewrite it.
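A check like that is easy to automate before human review. This sketch uses only the three phrases named above (a real list would be longer); the function name is our own illustration:

```python
# Sketch: flag AI-sounding phrases in a draft before human review.
# The banned list holds only the three examples from the text; a real
# list would be longer. Substring matching also catches "leveraged" etc.

BANNED = ["leverage", "utilize", "in today's digital landscape"]

def flag_banned_phrases(draft: str) -> list[str]:
    """Return each banned phrase that appears in the draft."""
    lower = draft.lower()
    return [phrase for phrase in BANNED if phrase in lower]

draft = "We can leverage schema markup in today's digital landscape."
print(flag_banned_phrases(draft))
```

A flagged draft goes back for a human rewrite; the script only finds the tells, it doesn't fix them.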
4. Show the Math With Their Numbers
"$4,800/month in recovered revenue at your $156 AOV" hits different than "could improve revenue." We always show the math, always with the client's actual numbers.
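The math itself is three numbers multiplied. In this sketch, the 10,000 affected sessions and the 0.31-point conversion uplift are hypothetical figures chosen for illustration; only the $156 AOV comes from the example above.

```python
# Sketch of the revenue math behind a finding. The session count and
# conversion uplift are hypothetical; only the $156 AOV comes from the
# example in the text.

def monthly_impact(sessions: int, cr_uplift: float, aov: float) -> float:
    """Extra monthly revenue = affected sessions x conversion uplift x AOV."""
    return sessions * cr_uplift * aov

# 10,000 affected mobile sessions, a 0.31-point conversion lift from
# moving the CTA above the fold, at a $156 average order value:
impact = monthly_impact(10_000, 0.0031, 156)
print(f"${impact:,.0f}/month")  # → $4,836/month
```

Showing the inputs alongside the result also lets the client sanity-check the estimate against their own analytics.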
5. Less Volume, More Depth
Five findings with screenshots and dollar impact beat 25 generic recommendations every time. We used to think more recommendations meant more value. The opposite is true. Clients don't want a to-do list. They want a strategy.
6. Design Matters
Every report uses our Maxxus v5 template. Navy, cyan, lime color scheme. SVG charts instead of screenshots of spreadsheets. A Fortune 500 CEO should take it seriously. If it looks like a college assignment, it doesn't ship.
7. Human Review Required
No deliverable leaves without a human reviewing it. Period. AI drafts. Humans approve. This is non-negotiable.
What Changed in Practice
Since implementing these rules, we haven't lost a single client. More importantly, our existing clients started referring us to others.
Our audit for Tu Diamante, a luxury jewelry brand in Mexico, identified that their homepage H1 tag was "Your cart is currently empty!!!" on every page, a Shopify theme bug that no generic audit would catch. We found 786 rage clicks per day using Clarity data. We measured their mobile PageSpeed at 40/100 with a 19.7-second LCP.
Every number verified. Every finding specific to their store. Every recommendation tied to estimated revenue impact.
That's the difference between a $1,500/month client who fires you and a $3,000/month client who refers you.
The Takeaway
AI makes it incredibly easy to produce content that looks professional. Polished reports, well-structured recommendations, proper formatting. But looking professional and being useful are not the same thing.
If your agency is using AI to produce deliverables faster without making them better, you're building a time bomb. Eventually a client will look at your work and say what our client said: "This feels like AI."
The agencies that will win aren't the ones that use AI the least. They're the ones that use AI to dig deeper, verify more, and deliver insights that no generic prompt could produce.
We learned that lesson the expensive way. You don't have to.
Jose Antonio Mijares is the founder of JAMAK AI, an AI-powered digital marketing agency specializing in GEO, SEO, and CRO for e-commerce brands. He also built Ezeo, a platform that tracks brand visibility across AI search engines.