The Discovery Wars: Why AI Answers Are Replacing Search

October 21, 2025

Six months ago, one fintech company’s “Pricing & Packaging” page drove 38% of organic pipeline. Today, traffic is down 40%. Branded and direct demos are up 22%, and SDR notes keep saying, “Found you in ChatGPT.” Analytics shows nothing between query and demo. The research moved off-page, into the answer.

As of October 2025, ChatGPT has over 800 million weekly users and handles 2.5 billion messages a day. While organic search still drives most referrals, multiple industry snapshots show AI search rising quickly, and publishers are already tracking CTR impact from AI Overviews.

Shortlists now form inside AI platforms. Visibility depends on being mentioned, quoted, and compared before the click, if the click happens at all.

Buyers are delegating shortlist formation to models because models synthesize faster than they can. If your content isn’t structured for extraction, you’re invisible at the exact moment intent crystallizes.

What actually changed (behavior, not tech)

Old flow: Query → ten links → four tabs → skim → shortlist.
New flow: Query → synthesized answer → shortlist → quick validation.

Traditional search rewarded pages. AI answers reward facts, clarity, and structure a model can extract and compare. The result is a faster path to a confident shortlist and less patience for descriptors like “best-in-class” or “industry-leading” without proof.

Why buyers prefer AI search (a live, evolving example)

A realistic mid-market scenario:

Initial ask: “Best SOC 2 monitoring for about 2,000 employees.”
Constraint 1: “Must integrate with Slack for real-time alerts.”
Constraint 2: “EU data residency required.”
Constraint 3: “Keep TCV under $50k; implementation under 30 days.”

Across one thread, AI search delivers:

  1. One clear answer: A tight shortlist with two or three proof points per vendor rather than ten links to sift.

  2. Contextual conversation: Constraints reshape the list in seconds.

  3. Less fluff, more meat: Specifics like integrations, SLAs, and regions float to the top; slogans fall away.

  4. Side-by-side: A comparison table appears inside the answer.

  5. Advisor tone: Trade-offs are stated plainly, so the buyer exits with a confident shortlist.

Answers can cite stale pricing, miss smaller vendors, or hallucinate edge-case features. Your content must be structured, current, and easy to quote so the model prefers your facts when it composes the answer. Qualitative research shows users lean on AI chat for fast synthesis, then spot-check details, which means clear sources and dates matter.

The invisible funnel

Discovery, evaluation, and early trust now happen in model space. Your analytics don’t see the most influential stage. You feel it indirectly: branded/direct upticks, different first-call questions, faster movement to security review. The click that used to start the journey often never happens.

How AI actually reads your content

Models extract entities, claims, numbers, comparisons, and definitions. They favor concise, canonical statements over long paragraphs. Repeatable “atoms” win:

  • FAQ atom: Question → 2–4 sentence factual answer → one citeable fact (metric or binary) → source link.

  • Comparison table atom: Rows = vendors; columns = 6–8 decisive features, a pricing signal, integrations, and a visible “last updated” date.

  • Data card atom: Metric name → plain-English definition → formula (if relevant) → one sentence of context → source link.
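
To make the atoms concrete, here is a minimal sketch in TypeScript; the field names and example URL are illustrative assumptions, not a published schema.

    // Illustrative types for two of the atoms above. Field names are
    // assumptions for this sketch, not a standard.
    interface FaqAtom {
      question: string;
      answer: string;        // 2–4 factual sentences
      citeableFact: string;  // one metric or binary claim
      sourceUrl: string;
    }

    interface DataCardAtom {
      metric: string;
      definition: string;    // plain-English definition
      formula?: string;      // only when relevant
      context: string;       // one sentence of context
      sourceUrl: string;
    }

    // Hypothetical example instance (URL is a placeholder).
    const residencyFaq: FaqAtom = {
      question: "Does the platform support EU data residency?",
      answer: "Yes. Customer data can be pinned to EU regions.",
      citeableFact: "EU data residency: Frankfurt, Dublin",
      sourceUrl: "https://example.com/docs/data-residency",
    };

The point of typing the atoms is consistency: a model extracting your pages sees the same shape on every page.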

The new rules of discoverability (with a bad-to-good transformation)

Fast checklist: will a model quote this page?

  1. Is the key claim stated in one sentence near the top?

  2. Are there concrete numbers or binary facts?

  3. Is “who it’s for” and “what it replaces” stated clearly?

  4. Is pricing signaled (banded is fine) with a last-updated date (include next review date)?

  5. Is there a comparison table with consistent column names?

  6. Are primary docs linked (security, pricing, integrations)?

  7. Are visible date stamps on tables or data cards in place?

  8. Are FAQs tight Q→A with citeable sentences?

  9. Is the page free of filler adjectives?

  10. Can a human skim and understand it in 15 seconds?
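
If you want to run this checklist at scale, here is a minimal audit sketch in TypeScript; the PageSummary shape and the thresholds are assumptions for illustration.

    // Score a page summary against the checklist above. The shape and
    // thresholds are assumptions, not a standard.
    interface PageSummary {
      keyClaimInFirstSentence: boolean;
      concreteFacts: number;        // count of metrics/binary facts
      pricingLastUpdated?: string;  // ISO date, e.g. "2025-09-30"
      hasComparisonTable: boolean;
      primaryDocLinks: number;
      fillerAdjectives: number;     // "best-in-class", "industry-leading", ...
    }

    function quotabilityScore(p: PageSummary): number {
      let score = 0;
      if (p.keyClaimInFirstSentence) score++;
      if (p.concreteFacts >= 3) score++;
      if (p.pricingLastUpdated) score++;
      if (p.hasComparisonTable) score++;
      if (p.primaryDocLinks >= 2) score++;
      if (p.fillerAdjectives === 0) score++;
      return score; // 0–6; in this sketch, treat 5+ as "likely quotable"
    }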

Bad (ignored):

  • Opens with a slogan.

  • Long vision paragraph; no facts above the fold.

  • Pricing behind a form.

  • No table, no dates, no sources.

Good (quoted):

  • Header: what it is, who it’s for, what it replaces.

  • Key facts (50–80 chars each):

    • SOC 2 Type II; automated evidence export

    • Slack + Jira integrations (native)

    • EU data residency: Frankfurt, Dublin

    • Typical go-live: 21–28 days

    • Mid-market TCV: $28k–$44k (last updated: 2025-09-30)

  • Comparison table columns: Residency, SLAs, Integrations, Time-to-Value, Pricing signal, Proof/Certs (each cell links to a primary doc).

  • FAQ atoms that answer specific buyer questions (“Does it support DLP with Gmail?”).
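
Rendered as data, one row of that table might look like the TypeScript sketch below; the vendor name, SLA value, and URLs are hypothetical, but the columns, facts, and date stamp mirror the list above.

    // One comparison-table row. Every cell links to a primary doc, and the
    // row carries a visible last-updated date.
    const exampleRow = {
      vendor: "ExampleVendor", // hypothetical
      residency:     { value: "EU (Frankfurt, Dublin)", sourceUrl: "https://example.com/residency" },
      slas:          { value: "99.9% uptime",           sourceUrl: "https://example.com/sla" },
      integrations:  { value: "Slack, Jira (native)",   sourceUrl: "https://example.com/integrations" },
      timeToValue:   { value: "21–28 days",             sourceUrl: "https://example.com/onboarding" },
      pricingSignal: { value: "$28k–$44k TCV",          sourceUrl: "https://example.com/pricing" },
      proofCerts:    { value: "SOC 2 Type II",          sourceUrl: "https://example.com/trust" },
      lastUpdated:   "2025-09-30",
    };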

Search-industry reporting shows AI surfaces are shifting click patterns and referral share even as organic remains the largest channel. These basics help you qualify for the answer while keeping validation paths intact.

Execution plan (90 days, fast lane)

Over the next three months, convert your highest-leverage pages into extractable formats and prove inclusion.

  • Month 1: Identify ten target pages and ship a first wave (glossary, FAQ hub, two comparisons, five data cards, pricing explainer) with visible “last updated” stamps and sources.

  • Month 2: Expand comparisons to adjacent categories, publish persona-specific FAQs, and add JSON-LD for products/FAQs/how-tos (a minimal sketch follows this list).

  • Month 3: Stand up a claims registry, publish a quarterly “what changed in our category” brief designed for excerpting, and turn on AI-influence reporting in your CRM/analytics stack.
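
For the JSON-LD step, a minimal sketch using schema.org’s FAQPage type; the question reuses the earlier data-residency example, and the serialized object would be emitted inside a <script type="application/ld+json"> tag.

    // Minimal FAQPage JSON-LD, per schema.org. One Question shown; a real
    // FAQ hub would list every FAQ atom on the page.
    const faqJsonLd = {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      mainEntity: [
        {
          "@type": "Question",
          name: "Does the platform support EU data residency?",
          acceptedAnswer: {
            "@type": "Answer",
            text: "Yes. Customer data can be pinned to Frankfurt or Dublin.",
          },
        },
      ],
    };

    console.log(JSON.stringify(faqJsonLd, null, 2));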

Sidebar: Ship list (first 30 days)

  • 1 glossary

  • 1 FAQ hub

  • 2 comparison pages

  • 5 data cards

  • 1 pricing explainer

  • Date + source on every table

Measuring what you can’t see (concrete instrumentation)

You won’t capture the model’s answer, but you can measure the assists.

Salesforce fields

  • Discovery source (picklist): Website (untracked), Partner, Event, AI tool - ChatGPT, AI tool - Claude, AI tool - Perplexity, Social, Referral, Other.

  • First-mentioned vendor list (long text): vendors the buyer saw in the AI answer.
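
As a guard against typos in downstream reporting, the picklist can be mirrored as a typed union; a sketch, with assumed field names:

    // The discovery-source picklist values, mirrored as a TypeScript union.
    type DiscoverySource =
      | "Website (untracked)"
      | "Partner"
      | "Event"
      | "AI tool - ChatGPT"
      | "AI tool - Claude"
      | "AI tool - Perplexity"
      | "Social"
      | "Referral"
      | "Other";

    interface OpportunityCapture {
      discoverySource: DiscoverySource;
      firstMentionedVendors?: string; // long text: vendors seen in the AI answer
    }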

Required capture

  • SDR/AE sets both fields on lead/opportunity creation.

  • Booking form checkbox: “Where did you first research solutions?” (Google, Vendor site, Analyst, AI chat, Community, Other).

Process

  • Call notes macro (Gong/Chorus): “AI discovery? Y/N. If yes, paste the prompt and vendors returned.”

  • GA4/Tag Manager: set a custom dimension ai_discovery=true when the booking checkbox is selected.
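
A minimal sketch of that GA4 capture; the event name and handler are assumptions, and the ai_discovery parameter must be registered as a custom dimension in GA4 admin before it appears in reports.

    // Fire when the booking form is submitted with "AI chat" selected.
    declare function gtag(...args: unknown[]): void; // provided by the GA4 snippet

    function onBookingSubmit(firstResearchChannel: string): void {
      if (firstResearchChannel === "AI chat") {
        gtag("event", "booking_submitted", { ai_discovery: true });
      }
    }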

Reporting

  • Cohort: Discovery Source contains "AI tool".

  • Compare win rate, cycle length, and opportunity size vs. non-AI cohort.

  • Compare ±14-day windows around content/table updates to spot downstream lifts.
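
A sketch of the cohort comparison, assuming an exported list of opportunities with the fields below:

    interface Opportunity {
      discoverySource: string; // the Salesforce picklist value
      won: boolean;
      cycleDays: number;
    }

    // Split on whether the source contains "AI tool", then compare
    // win rate and average cycle length across the two cohorts.
    function cohortComparison(opps: Opportunity[]) {
      const stats = (cohort: Opportunity[]) => ({
        n: cohort.length,
        winRate: cohort.filter((o) => o.won).length / Math.max(cohort.length, 1),
        avgCycleDays:
          cohort.reduce((sum, o) => sum + o.cycleDays, 0) / Math.max(cohort.length, 1),
      });
      const ai = opps.filter((o) => o.discoverySource.includes("AI tool"));
      const nonAi = opps.filter((o) => !o.discoverySource.includes("AI tool"));
      return { ai: stats(ai), nonAi: stats(nonAi) };
    }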

What good looks like: AI-discovery cohorts show higher SQO rates and shorter cycles. Expect more comparison and integration questions on first calls, consistent with AI-mediated shortlisting.

Failure modes (and fixes)

  • Over-claiming without sources → distrusted or excluded. Fix: a claims registry (a structured record of your company’s key factual assertions) with links to primary docs.

  • Stale data → ignored by models. Fix: visible last-updated stamps and a quarterly review.

  • Wall of prose → hard to excerpt. Fix: tables, FAQs, data cards.

  • No comparisons → someone else frames the market. Fix: publish the comparison first.

A short case vignette (fictionalized, realistic)

What shipped in 45 days: a glossary, three comparison pages, twelve data cards, and a pricing explainer; all tables dated and sourced.

90-day results: inclusion in AI answers for eight core queries; a 19% lift in branded demos.

What changed in sales calls:

  • Buyers arrived with pre-formed shortlists that matched the published tables.

  • First calls skipped “what do you do?” and moved straight to integration gaps and time to value.

  • Cycle compression: security review started in week two because certs and residency were surfaced up front.

  • New objection pattern: “You’re $6–10k above Vendor B. Justify the delta on SLAs and DLP scope.” The proof blocks handled it.

The Discovery Wars

AI answers are where shortlists form. If you want the buyer, win the mention. The Discovery Wars will be won by those the models know exist when the buyer asks.

Because if they don't, you will be vanquished.

Sources

  1. Ofcom (2025). “User Experiences of Generative AI Search: Qualitative Research Report.” Research report, PDF. Retrieved from https://www.ofcom.org.uk/siteassets/resources/documents/research-and-data/online-research/other/generative-ai-search-qualitative-research-report.pdf?v=403429

  2. BrightEdge (2025). “BrightEdge Data Finds AI Accounts for Less Than 1 Percent of Search; Organic Traffic Continues.” Press release. Retrieved from https://www.brightedge.com/news/press-releases/brightedge-data-finds-ai-accounts-less-1-search-organic-traffic-continues

  3. BrightEdge (2025). “AI Search Visits Are Surging 2025.” Research overview. Retrieved from https://www.brightedge.com/resources/research-reports/ai-search-visits-in-surging-2025

  4. Search Engine Journal (2025). “Impact of AI Overviews: How Publishers Need to Adapt.” Article. Retrieved from https://www.searchenginejournal.com/impact-of-ai-overviews-how-publishers-need-to-adapt/556843/

  5. Search Engine Journal (2025). “Google CTRs Drop 32 Percent for Top Result After AI Overview Rollout.” Article. Retrieved from https://www.searchenginejournal.com/google-ctrs-drop-32-for-top-result-after-ai-overview-rollout/551730/

  6. Search Engine Land (2025). “AI Search Traffic Referrals and Organic Search: Data and Context.” News coverage. Retrieved from https://searchengineland.com/ai-search-traffic-referrals-organic-search-data-461935

  7. TechCrunch (2025). “Sam Altman says ChatGPT has hit 800M weekly active users.” Article. Retrieved from https://techcrunch.com/2025/10/06/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users/

  8. TechCrunch (2025). “ChatGPT users send 2.5 billion prompts a day.” Article. Retrieved from https://techcrunch.com/2025/07/21/chatgpt-users-send-2-5-billion-prompts-a-day/

Tags AI Search, Brand Discovery, GEO, ChatGPT, Branded Search