The Blind Automation Problem

February 9, 2026

A CMO approves $400K in AI tooling for 2026. The board wants to see pipeline impact by Q3. Her team spins up a lead scoring model, an AI-assisted outbound sequence, a content generation workflow, and an account intelligence dashboard. By March, all four are live. The team posts about it on LinkedIn. The CEO mentions it on an earnings call.

Three months later, she can't answer a basic question: which of these is actually working?

This is the state of AI in go-to-market right now. Adoption is nearly universal. Impact is not.

The numbers are brutal

McKinsey's 2025 State of AI survey found that 88% of organizations use AI in at least one business function, up from 78% the year before. But only 39% attribute any measurable impact on EBIT to those investments. Among those who do report impact, most say AI accounts for less than 5% of their total EBIT. Nearly two-thirds of organizations haven't begun scaling AI across the enterprise. They're still piloting.

BCG's Build for the Future 2025 study of 1,250 senior executives tells a similar story from a different angle. 60% of companies generate no material value from AI despite active investment. Only 5% qualify as what BCG calls "future-built," meaning they've put in place the capabilities to create substantial, measurable value at scale. The other 95% are somewhere between experimenting and stalling.

In GTM specifically, the picture is even sharper. A survey of 195 B2B companies by GTM Strategist found that 91% of GTM teams use general AI tools like ChatGPT. But only 24% report a "big impact" from AI adoption. 53% report little to no impact at all.

Read that again. More than half of GTM teams using AI see no meaningful results from it.

Everyone has the tools. Almost nobody has the instrumentation.

The standard explanation for this gap is some version of "we need better tools" or "we need more training" or "we need cleaner data." All real, none sufficient.

The deeper issue is that most GTM teams have no reliable way to know what's happening inside their own AI workflows. They can tell you which tools are deployed. They can show you the Slack thread where someone said the new lead scoring model "feels more accurate." They cannot tell you, with any confidence, whether any of it is producing durable pipeline.

They are, in a very literal sense, automating blind.

Default surveyed 300+ RevOps leaders and found that fewer than 10% can demonstrate ROI from their AI implementations. The finding that jumped out to me: teams running one or two focused AI workflows often reported stronger results than teams running seven or more. Breadth without instrumentation produces fuzzy outcomes.

BCG's research reinforces this from the top. More than 60% of their "future-built" 5% rigorously track AI value, while among everyone else, measurement is ad hoc at best. The companies generating real returns are measuring every cycle. The ones generating nothing are measuring almost nothing, and that's not a coincidence.

The pilot treadmill

There's a pattern that's become disturbingly common: A GTM team identifies a promising AI use case, runs a pilot, gets encouraging results in a controlled setting, and presents them to leadership. Leadership says "scale it." Then nothing happens.

McKinsey calls this the "aggregation problem": scattered use case wins that never translate to material enterprise impact. The team tried 15 things, three kind of worked, nobody documented what happened, and now the budget is gone.

The root cause is structural. Most organizations treat each AI workflow as an isolated experiment, with no standard for evaluation and no shared definition of what "working" even means. Gartner found that only 36% of CFOs express confidence in their ability to drive enterprise AI impact, even as investments keep climbing. The money is flowing. The instrumentation to justify it largely doesn't exist.

Why AI impact fails to compound

The instrumentation gap has a second-order effect that most teams haven't reckoned with yet. It changes whether AI returns accumulate or evaporate.

BCG's 5% is interesting for what it proves by its existence: some companies are generating real, measurable returns from AI. That part isn't in dispute.

What's worth sitting with is the shape of the distribution. It's bimodal: a small group pulling away, everyone else flat, and almost no middle ground.

That doesn't look like a skills gap or a tooling gap. If the problem were "we need better models" or "we need more data engineers," you'd expect a gradient, a steady improvement curve as teams invest more. You'd see proportional returns. Instead, the returns are binary. You're either compounding or you're not.

The companies in BCG's top 5% plan to spend 26% more on IT and to allocate up to 64% more budget to AI than everyone else. They can do this because they have line-of-sight into what each dollar produces, enough to prove causality and allocate budget with confidence. So they invest more, which gets them further ahead, which gives them better data to invest with. The flywheel turns.

For the other 95%, the dynamic runs in reverse. Without the ability to connect AI activity to business outcomes, each new workflow is a bet placed in the dark. Some pay off, probably, but nobody can say which ones, so the portfolio of bets never gets smarter. Next quarter, the team makes roughly the same bets again with a different tool. Each cycle resets to zero because nothing learned in the previous one is captured in a form that informs the next.

The CFO is watching

There's a timeline pressure that makes this more urgent than a standard "get your act together" argument.

Gartner's 2026 budget data tells a clear story. 64% of CFOs are constraining SG&A growth below revenue growth, and 42% expect AI-driven headcount reductions in support functions. The message to GTM leaders is unambiguous: show that your AI investments are generating measurable operating leverage, or lose the budget. If you can't show returns, you don't get to keep spending.

The GTM Strategist survey captured this shift in a single finding: the stated priority for 2026 across B2B teams was "ruthless scaling." The obvious follow-up question, which the survey itself acknowledged, is scaling what, exactly? If you can't answer that, you're scaling your costs.

What most teams are still missing

The organizations pulling ahead have a closed loop between what they can see and what they do about it. They can defend ROI with enough confidence to fund the next cycle, and that confidence feeds forward. The organizations falling behind have tools, plenty of them. Productiv reports that the average SaaS portfolio sits at 342 applications. What they don't have is anything connecting those tools into a system that learns from its own output.

A quick diagnostic: Can you name which AI workflow contributed to a specific deal that closed last quarter? Can you tell your CFO, with data, which of your AI investments to double down on and which to cut? If your lead scoring model disappeared tomorrow, would you know within a week? If the answer to any of these is no, you’re in the 95%. Most teams are.
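
It's worth noticing how little machinery the first question actually requires. Here's a minimal sketch in Python of the kind of touch log that answers it. Everything in it is illustrative: the workflow names, the schema, and the naive full-credit attribution rule (every workflow that touched a deal gets full credit) are assumptions for the sake of the example, not any vendor's model or a recommended attribution methodology.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Touch:
    """One AI workflow acting on one opportunity (hypothetical schema)."""
    workflow: str        # e.g. "lead_scoring_v2", "ai_outbound_seq"
    opportunity_id: str
    stage_at_touch: str

@dataclass
class Opportunity:
    opportunity_id: str
    amount: float
    closed_won: bool

def pipeline_by_workflow(touches, opportunities):
    """Closed-won dollars each workflow touched. Deliberately naive:
    a deal counts in full for every workflow that touched it."""
    won = {o.opportunity_id: o.amount for o in opportunities if o.closed_won}
    totals = defaultdict(float)
    for t in touches:
        if t.opportunity_id in won:
            totals[t.workflow] += won[t.opportunity_id]
    return dict(totals)

# Toy data: two workflows, two deals, one closed-won.
touches = [
    Touch("lead_scoring_v2", "opp-001", "discovery"),
    Touch("ai_outbound_seq", "opp-001", "prospecting"),
    Touch("ai_outbound_seq", "opp-002", "prospecting"),
]
opportunities = [
    Opportunity("opp-001", 48_000.0, closed_won=True),
    Opportunity("opp-002", 30_000.0, closed_won=False),
]

print(pipeline_by_workflow(touches, opportunities))
# {'lead_scoring_v2': 48000.0, 'ai_outbound_seq': 48000.0}
```

Even a crude table like this answers the first diagnostic question, and running it on a schedule makes the third one trivial: if the lead scoring rows go flat, you see it in days, not quarters. The hard part was never the code. It's deciding to log the touches at all.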

Adding another dashboard, another agent, or another workflow builder doesn't close this gap. More surface area to automate, absent the ability to evaluate what that automation produces, is just more surface area to automate blindly.

Every CMO and CRO heading into the second half of 2026 can say they're using AI. The harder question is whether they know if it's working, and for most teams, the honest answer is still no. 

That gap is where the next wave of competitive advantage gets built.

Tags: AI GTM automation, B2B GTM, Revenue attribution, Blind automation, AI ROI