The Step Everyone Skips in GEO

April 13, 2026

Josh Grant, partner at HyperGrowth Partners, published a piece last week called "3 AEO Automations. Steal Them." It's one of the better tactical posts I've seen on AI visibility this year. Worth reading if you haven't.

The three automations he lays out are a content refresh pipeline, an FAQ and schema engine, and a competitive displacement agent. Each one is real. Each one works. The Webflow and Docebo results he cites are credible. And the core argument is correct: AI visibility is becoming an operations discipline.

I agree with all of that. But the conversation needs to go one level deeper.

The automations start in the middle

All three of Grant's workflows assume the hard part is already done.

The content refresh pipeline assumes you know which pages to rewrite. The FAQ engine assumes you know which questions buyers are actually asking. The competitive displacement agent assumes you can infer why a competitor is getting cited and you aren't.

Those assumptions break down fast if you don't already have a clear map of your category. 

Here's what I mean. Grant's refresh pipeline pulls pages from Google Search Console, filters for high-impression and low-CTR URLs, and runs a prompt that scores the opening 150 words. That's useful, but it only touches pages Google is already surfacing. It tells you nothing about the queries where AI systems are building buyer shortlists before your site ever loads. The queries your analytics will never see.
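
To be clear, the mechanics here are simple, and that's part of the point. Here's a rough sketch of the filtering step, mine rather than Grant's, assuming you've exported a Search Console performance report as CSV; the column names and thresholds are illustrative:

```python
import csv

# Illustrative thresholds; tune to your own traffic profile.
MIN_IMPRESSIONS = 1000   # pages Google is already surfacing widely
MAX_CTR = 0.02           # 2% CTR or lower flags a weak snippet or intro

def refresh_candidates(gsc_export_path):
    """Return (page, impressions, ctr) rows worth rewriting.

    Assumes a CSV with 'Page', 'Impressions', and 'CTR' columns;
    real GSC exports may name or format these differently.
    """
    candidates = []
    with open(gsc_export_path, newline="") as f:
        for row in csv.DictReader(f):
            impressions = int(row["Impressions"].replace(",", ""))
            raw_ctr = row["CTR"].strip()
            ctr = float(raw_ctr.rstrip("%")) / 100 if raw_ctr.endswith("%") else float(raw_ctr)
            if impressions >= MIN_IMPRESSIONS and ctr <= MAX_CTR:
                candidates.append((row["Page"], impressions, ctr))
    # Highest-impression pages first: biggest upside per rewrite.
    return sorted(candidates, key=lambda c: -c[1])
```

Everything this loop surfaces is, by construction, already on Google's radar.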

His FAQ engine pulls questions from Reddit and People Also Ask. Also useful. But Reddit threads and PAA boxes reflect how people search Google. They don't reflect how buyers frame questions inside ChatGPT or Claude when they're evaluating a purchase. Those are different behaviors. The phrasing is different. The intent structure is different. The way the model synthesizes an answer is different.
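
The mining step is just as easy to sketch. Here's the Reddit half, using Reddit's public JSON search endpoint; People Also Ask needs a SERP scraping service, so I've left it out, and the question filter is my own crude heuristic:

```python
import requests

def reddit_questions(topic, limit=25):
    """Pull question-style post titles from Reddit's public search endpoint.

    Reddit throttles generic user agents, so send a descriptive one.
    """
    resp = requests.get(
        "https://www.reddit.com/search.json",
        params={"q": topic, "limit": limit, "sort": "relevance"},
        headers={"User-Agent": "faq-research-sketch/0.1"},
        timeout=10,
    )
    resp.raise_for_status()
    titles = [post["data"]["title"] for post in resp.json()["data"]["children"]]
    # Crude filter: keep titles that look like questions.
    return [
        t for t in titles
        if t.endswith("?") or t.lower().startswith(("how ", "what ", "why ", "which "))
    ]
```

Every title that comes back reflects forum phrasing, not the way a buyer frames the same question inside a chat model.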

His displacement agent fires when citation share drops week over week. Smart. But by the time you're reacting to a drop, you've already lost the position. The question that matters more: why did the model select your competitor in the first place? What is the model's current representation of your category? Where does your brand's narrative break inside that representation?
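
The trigger itself is trivial, which is worth seeing. A sketch, assuming your monitoring tool can export per-query citation share by week; the data shape here is hypothetical:

```python
# Alert when our week-over-week citation share drops past a threshold.
DROP_THRESHOLD = 0.10  # fire on a 10-point share loss; illustrative

def displacement_alerts(share_by_week):
    """share_by_week: {query: [(week, our_share, competitor_share), ...]},
    ordered oldest to newest. Returns queries that just lost ground."""
    alerts = []
    for query, weeks in share_by_week.items():
        if len(weeks) < 2:
            continue
        (_, prev_ours, _), (_, curr_ours, curr_theirs) = weeks[-2], weeks[-1]
        if prev_ours - curr_ours >= DROP_THRESHOLD:
            alerts.append((query, prev_ours, curr_ours, curr_theirs))
    return alerts
```

Nothing in that function knows why the share moved, only that it did.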

Those are diagnostic questions. A prompt chain doesn't answer them.

The GEO market is splitting into three layers

I've been watching this space compress over the past six months. The pattern is becoming clear.

Monitoring is getting commoditized. Semrush has an AI Visibility Toolkit with daily prompt tracking and competitor research. Profound is pushing agents and answer-engine insights. AirOps is connecting citation data, prompt data, and analytics into automated workflows. Within a year, monitoring will be a feature inside every major marketing platform. It won't be a business.

Execution is where Grant's automations live. Content refreshes, schema injection, displacement response. This is valuable work. It's also increasingly automatable. The playbooks are getting published. The prompts are getting shared. The platforms are building the connectors. The ceiling on differentiation here is low and getting lower.

Diagnosis is the layer almost nobody is building. Figuring out which buyer conversations actually drive shortlist formation. Understanding how AI systems currently represent your category, your competitors, and your brand. Identifying where the narrative breaks and what interventions will shift selection outcomes.

Most of the energy in this space is going into monitoring and execution. Diagnosis is where the leverage actually lives.

Why diagnosis is the hard part

Grant's article cites a striking stat: ChatGPT only cites 15% of the pages it retrieves. The other 85% get found and skipped. That's a retrieval-versus-selection problem. And it's a problem that content structure alone won't solve.

Selection depends on how the model understands the query, the category, and the entities involved. If the model's internal representation of your brand is fragmented or inconsistent, no amount of answer-first intros will fix it. If your content answers questions the model isn't prioritizing for the buyer's actual intent, a perfect FAQ schema won't help. If your competitive narrative doesn't address the specific dimensions the model uses to compare options, a displacement response loop just produces faster iterations on the wrong fix.

Here's what that looks like in practice. A B2B platform rewrites its top ten pages with answer-first intros, adds FAQ schema sourced from Reddit and PAA, and builds a competitive comparison page. All the right moves from the automation playbook. Citations don't move. The reason: the buyer prompts driving shortlist formation in their category center on implementation risk, switching cost, and time-to-value for mid-market teams. None of those dimensions appear anywhere in the optimization set. The team executed well on the wrong map.

The map has to be right before the loop can work.

Building that map starts with how B2B buyers actually buy. You study the query landscape through the lens of real purchase behavior. You run queries across multiple AI systems in isolated sessions and analyze what gets selected, what gets skipped, and why. You construct a knowledge graph of how the model represents your category and validate it against reality.
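
The probing step can be partly scripted. A minimal sketch, sending one buyer prompt to two systems through their APIs; the model names are placeholders, and API answers won't exactly mirror the consumer products since retrieval behavior differs, but they give you a reproducible baseline:

```python
from openai import OpenAI
import anthropic

# A hypothetical buyer prompt; in practice you'd run a structured set.
BUYER_PROMPT = (
    "We're a mid-market team evaluating onboarding platforms. "
    "Which vendors should be on our shortlist, and why?"
)

def probe(prompt):
    """Ask the same question in fresh, stateless sessions across systems."""
    answers = {}

    openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
    answers["openai"] = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever is current
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    anthropic_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    answers["anthropic"] = anthropic_client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever is current
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    ).content[0].text

    return answers
```

The script is the easy part; reading the outputs against real buyer behavior is the rest.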

That's slow, rigorous, unglamorous work. It doesn't fit in a prompt template. It doesn't automate into a four-step workflow. And it's the thing that makes every downstream automation actually effective.

The part that matters

Grant's workflows are genuinely useful. Teams that build them will outperform teams that don't.

But automation without diagnosis produces confident irrelevance at scale. A fast loop built on the wrong queries means you're optimizing for conversations that don't drive revenue. A displacement response triggered by the wrong signal means you're chasing share in a frame that doesn't matter to buyers.

The questions every B2B brand should be asking right now are simple: Do we actually know which AI-driven buyer conversations matter in our category? Do we know how the models currently represent us? Do we know where the narrative breaks?

If the answer is no, the automations can wait. Start with the map.

Every team in your category will eventually build the loop. The ones who win will be the ones who built the map first.

Tags: GEO, AI Visibility, B2B Content Strategy, AI Citations, Competitive Intelligence
