
Retina Media



The Information Gap Doesn't Stay Empty

March 29, 2026

An SEO consultant was shopping for a PEO provider earlier this year. He was already a Gusto payroll customer, so he did something millions of B2B buyers now do reflexively: he asked Google's AI Mode whether Gusto could handle PEO services.

The AI told him Gusto doesn't offer PEO. Then it surfaced Rippling as an alternative.

Rippling's entire contribution to this moment was a factually accurate FAQ that answered a question Gusto's own site left unaddressed. No ad. No outreach. Just a page with a specific answer, sitting there waiting for the model to find it. Google's AI used it and injected a competitor into what had started as a customer retention moment. Gusto lost a potential upsell before anyone from either company had said a word to the buyer.

No one gamed anything, at least not in an unethical sense. The AI simply did what AI does: it assembled the best available answer from whatever it could find. One brand had the answer. The other had a gap.

Why specificity wins

This wasn't a fluke. It's a consequence of how large language models construct responses.

When an LLM is building an answer, it's looking for material that's useful to the response it's trying to assemble. Specific claims (numbers, dates, named features, concrete comparisons) give the model something to work with. Vague claims don't. A page that says "contact us for pricing" gives the model nothing extractable. A third-party blog that says "Company X starts at $49/month" gives it exactly what it needs.

The model evaluates sources differently than a human researcher would. Your official corporate page carries no automatic weight advantage just because it's yours. The model weights whichever source provides the most complete, specific answer to the question being asked. Authority matters, but specificity matters more, because specificity is what allows the model to construct the response.
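The selection dynamic described above can be sketched as a toy heuristic: given several candidate passages, prefer the one with the most extractable specifics. This is an illustration only, built on a crude "count the concrete details" scorer I'm assuming for the sake of the example; real LLM retrieval and ranking are far more sophisticated, and nothing here reflects any actual model's internals.

```python
import re

def specificity_score(passage: str) -> int:
    """Toy heuristic: count extractable specifics (prices, counts, dates).

    Purely illustrative. The point is only that "$49/month" gives an
    answer-builder something to quote, while "contact us" does not.
    """
    # Match prices and bare numbers: "$49", "89", "2026", "1,000"
    numbers = re.findall(r"\$?\d[\d,.]*", passage)
    return len(numbers)

official = "Contact us for pricing."
third_party = "Company X starts at $49/month; Company Y starts at $89/month."

# The vague official page scores 0; the third-party blog scores higher,
# so a naive answer-builder would quote the blog's numbers.
best_source = max([official, third_party], key=specificity_score)
```

Even this crude scorer reproduces the pattern: the official page contributes nothing it can use, so the third-party source wins by default.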

This is a structural feature of how LLMs process information, not a bug that'll get patched next quarter.

The Ahrefs experiment

In December 2025, Ahrefs ran a controlled experiment that isolated this mechanism precisely. They set up a scenario where an official brand FAQ stated "we don't publish unit counts." Then they introduced unofficial sources with fabricated but specific numbers.

Five of eight major AI models chose the fabricated numbers over the official vagueness. Given a choice between "we don't disclose that" and a made-up customer count, the majority went with the fabrication, because the fabrication was more useful for constructing a complete answer.

A caveat worth noting: Search Engine Journal's methodological critique pointed out that the test used leading questions and the "official" brand site lacked the authority signals a real established brand would carry. The real-world vulnerability for well-known brands may be lower than what the experiment's controlled conditions suggest. 

But this only calibrates the risk; it doesn't eliminate it. And for the thousands of B2B companies that aren't household names, the experiment's conditions are closer to the rule than the exception.

The finding reinforces the Rippling-Gusto pattern. In one case, a competitor filled the gap with accurate information. In the other, researchers filled it with fabricated information. The models treated both the same way: specific answer wins, regardless of source.

The pricing page problem

You can see this playing out at scale in one of the most common B2B content patterns: the "contact sales" pricing page.

Your pricing page says "request a demo." A comparison blog (one you've never heard of, written by someone with no inside knowledge) says "Company X starts at $49/month, Company Y starts at $89/month." The model uses the blog's numbers. It has no choice. Those are the only specific numbers available.

Whether those numbers are accurate is almost beside the point. The model needed a number, and your site didn't provide one. Someone else did.

Multiply this across every buyer query that touches pricing, implementation timelines, customer counts, integration lists, and competitive comparisons. Each gap is an opening. Each opening gets filled by whatever source provides the most specific answer.

The business opportunity (and the compounding risk)

This mechanism creates two dynamics that run in opposite directions.

The opportunity: every narrative gap a competitor leaves unfilled is a gap you can walk into for free. Rippling's total distribution cost for that PEO moment was zero. One FAQ page. The model handled the rest. If your competitor's site says "contact sales" and yours publishes transparent pricing, the model will use your numbers when a buyer asks about the category. You've just entered a buying conversation you weren't invited to, at the exact moment the buyer is making decisions, with zero ad spend.

The risk runs the other direction, and it takes three forms. A competitor publishes a factually accurate answer to a question your site leaves open (the Rippling playbook). A third-party comparison site publishes numbers that are plausible but unverified. Or someone fabricates claims entirely, and the model treats them as credible because they're specific. Each is a different threat with different remediation costs, and all three are happening simultaneously across every B2B category with information gaps.

The immediate risk is what the model surfaces right now, at query time. That's the retrieval problem: the model assembles its answer from whatever specific content is available today, and if your official content isn't specific enough, it loses.

The longer-term risk is compounding. Buyers encounter the model's answer during their research. Some of them repeat it: in an internal Slack summary, a LinkedIn post about their vendor evaluation, a podcast mention. Each repetition creates a new signal that can reinforce the original framing in future model training data. The gap widens. And it gets harder to close the longer it circulates.

The cost of prevention is a fraction of the cost of remediation. A brand that publishes specific, official answers to buyer questions now can shape what the models say before someone else does it for them. A brand that discovers six months from now that AI is citing fabricated pricing or outdated limitations is fighting on two fronts: correcting what the model retrieves today and displacing what's already been absorbed into its training.

The inversion

For years, B2B marketing teams have been trained to withhold specifics as a sales tactic. Don't publish pricing; force the conversation. Don't disclose customer counts; avoid competitive intelligence. Don't discuss limitations publicly; control the narrative in sales calls.

That playbook made sense when you controlled the discovery surface. Your website was the destination. Withholding information was leverage.

In the AI discovery layer, the logic inverts. Withholding information creates a vacuum. And the vacuum gets filled by whoever publishes a specific answer first, whether they're a competitor, a comparison site, or someone making numbers up entirely.

The information gap on your site right now is being filled by someone. You just don't know who, and you don't know what they're saying.

This post draws on research from Cited: How B2B Brands Win in the Age of AI-Generated Answers.

Tags B2B Content Strategy, Generative Engine Optimization, GEO, Brand Visibility, Competitive Intelligence


© 2026  Retina Media LLC