11 Predictions for AI-Enabled GTM in 2026

December 20, 2025

If 2025 was the year of "agents everywhere," 2026 will be the year those agents become accountable. Marketers already know agents are plenty capable of generating output. The hard part now is deciding what an agent can touch, what it's allowed to do, how you prove it helped, and how you shut it off when it doesn't.

As many orgs learned the hard way this year, the winners won't be the teams with the most extensive AI implementations. Go-to-market success will be measured by the ability to ship workflows into production, measure lift, and limit the damage when things go wrong. (Because we're all still learning, and things will inevitably go wrong.)

These are the bets I'd make for 2026, based on what accelerated (and what broke) in 2025.

1) Agents graduate from "helpers" to "process owners"

Major platforms shifted from copilots to agent platforms with management layers in 2025. Salesforce launched Agentforce with command center capabilities; Microsoft rolled out agent control plane concepts at Ignite. The message was clear: agents are operators, and operators need permissions, logs, and a kill switch.

Next year, the most successful GTM teams will run agent-owned processes, not just agent-assisted tasks:

  • Inbound triage → enrichment → routing → first-touch → meeting prep, orchestrated end-to-end

  • Renewal risk monitoring → outreach triggers → exec briefings

  • Account research packs generated nightly for top accounts

Multi-agent handoffs become common, and the error propagation problem gets serious attention. Once outputs chain (Agent A → Agent B), quality becomes a systems problem, not a prompt problem: three steps that are each 95% reliable yield only about 86% end-to-end reliability (0.95³).

Agent eval becomes a standard GTM discipline. By 2026, it looks like QA: test cases, scorecards, escalation paths, and postmortems. Vendors that can't provide role permissions, sandboxing, and performance telemetry get stuck in pilot hell.
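
A minimal sketch of what that QA discipline could look like, assuming a placeholder run_agent call and hand-written judge functions; the pass threshold and test cases here are illustrative, not any vendor's standard.

```python
# Hypothetical agent-eval harness: test cases, a scorecard, and an escalation path.
# run_agent() and the pass criteria are placeholders; adapt to your own stack.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    name: str
    task: dict
    passes: Callable[[str], bool]   # judge function over the agent's output

def run_agent(task: dict) -> str:
    """Placeholder for your actual agent call (API, framework, etc.)."""
    return f"Drafted reply for {task.get('account', 'unknown')}"

def evaluate(cases: list[TestCase], min_pass_rate: float = 0.9) -> None:
    results = [(c.name, c.passes(run_agent(c.task))) for c in cases]
    pass_rate = sum(ok for _, ok in results) / len(results)
    for name, ok in results:
        print(f"{'PASS' if ok else 'FAIL'}  {name}")
    print(f"Scorecard: {pass_rate:.0%} pass rate")
    if pass_rate < min_pass_rate:
        # Escalation path: block promotion to production, open a postmortem.
        print("Below threshold -- escalate to human review before rollout.")

evaluate([
    TestCase("mentions the right account", {"account": "Acme"},
             lambda out: "Acme" in out),
    TestCase("no pricing promises in first touch", {"account": "Acme"},
             lambda out: "discount" not in out.lower()),
])
```

The point isn't the tooling; it's that every production agent ships with a test set, a scorecard, and a documented path to a human when the scorecard slips.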

The buyer conversation shifts from "marketing ops likes it" to "RevOps + Security will approve it."

(For the organizational implications of this shift, see #9 on sales roles.)


2) Governance becomes a GTM requirement (with a regulatory forcing function)

For organizations operating in the EU, or anyone selling into it, the AI Act's major enforcement phase begins August 2, 2026. Treat it as a forcing function to ship minimum viable governance for EU-exposed workflows.

Minimum viable governance means (see the configuration sketch after this list):

  • Agent permissions: what data can it access, what actions can it take?

  • Escalation rules: when does a human get pulled in?

  • Model routing policies: what information goes to which models, and what stays on-prem?

  • Incident response: what happens when an agent does something unexpected at 2:13am?
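
One way to operationalize the list above is to encode it as reviewable configuration rather than tribal knowledge. The sketch below is hypothetical; the field names, permission scopes, and thresholds are assumptions, not any vendor's schema.

```python
# A minimal, hypothetical governance policy for one agent, expressed as plain data.
# The point is that permissions, escalation, routing, and incident response are
# explicit and reviewable by Security and Legal, not implied by a prompt.
AGENT_POLICY = {
    "agent": "inbound-triage",
    "permissions": {
        "data_access": ["crm.leads:read", "enrichment.firmographics:read"],
        "actions": ["crm.leads:update_stage", "email:draft"],   # draft only, no send
        "forbidden": ["crm.opportunities:*", "email:send"],
    },
    "escalation": {
        "require_human_when": ["deal_value_usd > 50000", "sentiment == 'negative'"],
        "on_call": "revops-oncall@example.com",
    },
    "model_routing": {
        "pii_fields": "on_prem_model",          # never leaves your infrastructure
        "default": "approved_hosted_model",
    },
    "incident_response": {
        "kill_switch": "disable agent and pause queued actions",
        "page": "security-oncall",
        "postmortem_within_hours": 72,
    },
}
```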

Even outside regulated industries, internal governance is hardening, driven as much by security and legal teams as by compliance requirements. UK regulators are already flagging risks from autonomous agents in financial services, signaling broader scrutiny ahead.

One operational reality: 2026 will also be the year of Shadow AI cleanup. Employees used personal ChatGPT accounts and unsanctioned tools throughout 2024-2025. IT and security teams will spend significant energy in 2026 auditing, de-provisioning, and consolidating onto approved systems. Governance necessarily covers both new deployments and what's already loose in the wild.

Which leads me to my next point: governance becomes competitive differentiation. "Our agents log everything" is table stakes. The premium tier is agents that produce audit-ready justifications: inputs, constraints, policy checks, and decision traces.

Expect vendor bake-offs to include logging, human oversight, and policy controls as first-class evaluation criteria.

3) Data quality becomes the bottleneck, and the budget follows

GTM teams will learn painfully that agent effectiveness is capped by data integrity. The problems are familiar:

  • Duplicates and stale contact data

  • Broken account hierarchies

  • Garbage lifecycle stages

  • Missing product usage signals

  • No semantic consistency in how metrics are defined

In 2026, a mediocre model with a clean identity graph will beat a frontier model running on dirty data.

This becomes the year RevOps gets funding to do the unsexy work: identity resolution (both account hierarchy and person-level matching), deduping, enrichment strategy consolidation, event instrumentation, and building semantic layers for GTM metrics.
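
To make the unsexy work concrete, here is a deliberately simplified sketch of person-level matching; real identity resolution adds fuzzy matching, account hierarchies, and richer survivorship rules, and the normalization choices below are assumptions.

```python
# Simplified person-level matching: normalize emails and collapse duplicates.
from collections import defaultdict

def normalize_email(email: str) -> str:
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]          # drop +tags like jane+promo@
    return f"{local}@{domain}"

def dedupe(contacts: list[dict]) -> list[dict]:
    buckets: dict[str, list[dict]] = defaultdict(list)
    for c in contacts:
        buckets[normalize_email(c["email"])].append(c)
    merged = []
    for key, dupes in buckets.items():
        # Survivorship rule (assumption): keep the most recently updated record.
        winner = max(dupes, key=lambda c: c.get("updated_at", ""))
        merged.append({**winner, "email": key, "duplicate_count": len(dupes)})
    return merged

records = [
    {"email": "Jane.Doe+promo@Acme.com", "updated_at": "2025-11-01"},
    {"email": "jane.doe@acme.com", "updated_at": "2025-06-15"},
]
print(dedupe(records))   # one merged record, duplicate_count == 2
```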

Two caveats:

First, this assumes RevOps has the political capital to enforce data standards. In practice, sales VPs often overrule data hygiene mandates to protect deal velocity. The top organizations will have enforcement mechanisms with teeth: exec mandate, deal desk gates, or comp plan hooks tied to data quality.

Second, data quality is as much a cultural problem as a technical one. The best enrichment tools in the world can't fix a sales team that refuses to update records or an org that treats CRM as a reporting burden rather than an operational asset. Orgs must address both the systems and the behaviors.

With content generation now expected, the next wave of compelling GTM AI products will focus on making the underlying data usable for autonomous action.

4) ROI pressure forces a shift from "use cases" to "use-case factories"

The ROI hangover is real. Reuters reported that business leaders broadly agree AI is the future; they just wish it worked right now. Only a minority of companies are seeing margin improvement from AI investments. BCG's research highlights how few organizations are extracting value at scale.

Yet CEO surveys show most plan to increase AI spend next year.

So budgets will get conditional. Teams will be required to demonstrate:

  • Baseline → lift (with numbers)

  • Time-to-value

  • Failure modes and mitigation

  • Human override points

The organizations that win will be hyperfocused in their AI implementations. They'll build a use-case factory: a repeatable pipeline from workflow idea → production → measured lift, without reinventing governance each time.

The lifecycle (a simple state-machine sketch follows the list):

  1. Intake: Is this workflow a good candidate?

  2. Design: Success metric, guardrails, failure modes

  3. Test: Sandbox validation before production

  4. Ship: Controlled rollout with monitoring

  5. Monitor: Performance telemetry and escalation triggers

  6. Retire: Kill switch protocol when it's not working
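
One way to keep that pipeline repeatable is to treat it as an explicit state machine with gates rather than an ad hoc checklist. The sketch below is illustrative; the transition rules are assumptions about how a team might enforce the gates.

```python
# The use-case factory as a simple state machine. Stage names mirror the list
# above; gate criteria are placeholders for whatever your org actually requires.
from enum import Enum, auto

class Stage(Enum):
    INTAKE = auto()
    DESIGN = auto()
    TEST = auto()
    SHIP = auto()
    MONITOR = auto()
    RETIRE = auto()

# Allowed transitions: RETIRE is reachable from MONITOR (the kill switch),
# and TEST can send a workflow back to DESIGN instead of straight to SHIP.
TRANSITIONS = {
    Stage.INTAKE: {Stage.DESIGN},
    Stage.DESIGN: {Stage.TEST},
    Stage.TEST: {Stage.SHIP, Stage.DESIGN},
    Stage.SHIP: {Stage.MONITOR},
    Stage.MONITOR: {Stage.RETIRE, Stage.DESIGN},
    Stage.RETIRE: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition {current.name} -> {target.name}")
    return target

stage = Stage.INTAKE
for nxt in (Stage.DESIGN, Stage.TEST, Stage.SHIP, Stage.MONITOR):
    stage = advance(stage, nxt)
print(stage)   # Stage.MONITOR
```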

The most successful GTM teams will aggressively retire manual workflows and measure success by the reduction of human hours in low-leverage loops. "AI strategy" stops being a slide deck and becomes an operating system.

5) GTM engineering becomes a standard line item

Forward-deployed GTM engineering becomes a line item, because someone has to turn demos into production.

Platform disappointment plus data unreadiness plus agent complexity creates a services opportunity. Next year rewards teams who can wire data sources, implement guardrails, and ship working automations quickly.

Mid-market and enterprise GTM orgs increasingly operate like software teams:

  • A small internal automation pod or an external partner with engineers embedded in the business unit (not IT)

  • A backlog of "agentable" workflows, prioritized by impact and feasibility

  • A quarterly release cadence for GTM operations

The competitive edge is shipping GTM workflows like products.

The role boundary matters: GTM engineers own workflow implementation and observability; RevOps owns process design and metrics; IT owns infrastructure and security. First-90-days deliverables typically include data wiring, one production workflow, and observability with rollback capability.

One constraint: these roles barely exist at scale yet. The talent pipeline for "people who understand both GTM and agent orchestration" is thin. The more likely near-term pattern is that this work gets awkwardly squeezed into existing RevOps roles (often poorly) rather than cleanly stood up as a new function. Expect upskilling programs, internal mobility from RevOps, and heavy competition for the few people who already have this hybrid skillset.

6) The battle for the "Agent Control Plane" begins

CRMs embedded agents aggressively in 2025 while point solutions offered agentic automation across ad ops, content, and prospecting. In 2026, that tension escalates, though how it resolves remains genuinely uncertain.

What's actually at stake: the control plane, the layer that handles permissions, logging, and audit (governance) plus routing, execution, and orchestration (operations). Whoever owns the control plane owns the customer relationship.

CRMs are great at records and terrible at orchestration, so they'll try to own orchestration anyway. Salesforce is already pushing Agentforce as the command center; HubSpot will undoubtedly follow. The prediction is that both will start restricting third-party agent API access in the name of "security," using governance requirements as justification for lock-in.

But convenience often beats architecture. If CRMs make orchestration easy enough, even if it's technically inferior, enterprise buyers may not care enough to fight for best-of-breed alternatives. The alternative scenario: a new orchestration layer emerges on top of the CRM, treating the CRM as a dumb database while a separate system handles agent logic, permissions, and execution.

Procurement starts asking harder questions: Where does this agent live? Who can see what it did? How do we shut it off?

Interoperability standards like MCP, A2A and Agent Payments Protocol are still competing. Expect fragmented agent interop alongside consolidation pressure, not resolution.

The honest framing: 2026 is the year this tension becomes visible and contested. By 2027, we'll know who won.

7) AI-native discovery becomes a first-class GTM surface

One important nuance upfront: in many cases, owned web properties make up only a minority of the sources AI search pulls from. Visibility depends on the broader ecosystem: earned media, third-party reviews, community discussions, and partner content. Companies must seed structured, parseable content across the sources AI actually references, not just optimize their own sites.

McKinsey data from 2025: 44% of AI-powered search users say it's their primary source of insight, topping traditional search at 31%. This isn't a niche behavior anymore.

The shift is toward treating AI answer engines as a channel with budgets, owners, KPIs, and experimentation cadence. These tools affect brand preference formation, shortlist inclusion, and sales cycle framing. Your narrative shows up before your sales team does.

As I’ve already argued, the "SEO vs GEO" debate is a distraction. The real shift is from ranking (being seen) to citation (being the answer). The goal moves from driving traffic to your site to embedding your answer in the model's output.

The question thus becomes: do we have an AI-discovery instrumentation and improvement loop, or are we guessing?

That loop means: measure citations and mentions (inclusion rate, query share, sentiment framing) → identify narrative gaps → publish structured formats across high-authority sources → re-measure. One caution: category variance is high. Teams should benchmark against their specific competitive set rather than assume universal patterns.
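
A minimal sketch of the measurement half of that loop, assuming a hypothetical brand name, a hand-maintained query set, and a placeholder function for sampling answer engines (manually, via a vendor, or through whatever API access you have).

```python
# Measuring the first step of the loop: inclusion rate across a tracked query set.
BRAND = "ExampleCo"   # hypothetical brand name

TRACKED_QUERIES = [
    "best revenue intelligence platforms",
    "alternatives to <competitor>",
    "how to automate inbound lead routing",
]

def get_ai_answer(query: str) -> str:
    """Placeholder: return the answer-engine text for a query."""
    return "Popular options include ExampleCo and others..."

def inclusion_rate(queries: list[str]) -> float:
    hits = sum(BRAND.lower() in get_ai_answer(q).lower() for q in queries)
    return hits / len(queries)

print(f"Inclusion rate this cycle: {inclusion_rate(TRACKED_QUERIES):.0%}")
# Queries where the brand is absent are the narrative gaps to publish against,
# then re-measure on the next cycle.
```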

8) Buyer-side AI agents become operationally real

This is the prediction many 2026 forecasts are missing.

The conversation around AI in GTM focuses almost entirely on the seller side: how your team uses agents to prospect, personalize, and execute. But the buyer side is moving faster than most realize.

AI-mediated discovery increasingly carries through to execution. Agents compare options, assemble baskets, and complete checkout. While marketers obsess over website traffic, the "visitor" is changing. Adobe noted a 4,700% year-over-year spike in traffic from GenAI browsers, meaning the entity viewing your pricing page might be a bot, not a human. McKinsey estimates agentic commerce could influence $3-5 trillion in global retail sales by 2030.

The infrastructure is being built now: Mastercard's Agent Pay for agent-initiated transactions, OpenAI's Agentic Commerce Protocol developed with Stripe, Amazon testing "Buy for Me" to let AI purchase from other retailers.

For B2B, the timeline is longer but the pattern is emerging. Early signs:

  • Procurement teams using AI to draft intake summaries and vendor shortlists

  • Buying committees asking AI to generate comparison matrices

  • Security questionnaires pre-filled by AI before human review

  • RFP responses evaluated by AI against rubrics before human scoring

What this means:

  • "Optimize for purchasing agents" becomes a distinct discipline different from optimizing for AI search citations. Early research shows buyer agents exhibit "choice homogeneity," concentrating demand on a few products while ignoring others entirely.

  • Sellers need structured, parseable content that agents can process: tight comparisons, explicit proof points, clear constraints, transparent pricing.

  • Competitive intelligence expands to include: "What do buyer agents see when they compare us to alternatives?"

  • The attribution challenge deepens. Connecting AI influence to pipeline becomes harder when the "visitor" may be an agent, not a human. Traditional analytics break when the journey happens inside a model's context window. New attribution architectures measuring citation presence, recommendation inclusion, and downstream conversion correlation will need to emerge.

One caveat for B2B specifically: the assumption that buyers want agentic commerce may be premature. B2B purchases often serve a "cover your ass" function; buyers choose vendors partly to have someone to blame if things go wrong. An agent making the decision doesn't offer the same career-risk mitigation as a human relationship with a vendor's sales team. This dynamic may slow adoption in high-stakes enterprise purchases even as it accelerates in transactional ones.

The content formats and data structures that work for agent parsing are the same ones that work for AI search citation. Optimizing for one prepares you for the other.
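
As one illustration of structured, parseable content, here is a hypothetical machine-readable product profile; the schema is an assumption rather than an established standard, but fields this explicit are easier for both buyer agents and answer engines to consume than marketing prose.

```python
# A hypothetical machine-readable product profile: explicit claims, constraints,
# and pricing an agent can compare, instead of prose it has to interpret.
import json

PRODUCT_PROFILE = {
    "product": "ExampleCo Routing",          # hypothetical product
    "category": "lead routing & triage",
    "proof_points": [
        {"claim": "median routing time", "value": "90 seconds",
         "source": "2025 customer benchmark"},
    ],
    "constraints": ["requires Salesforce or HubSpot CRM", "English-only in v1"],
    "pricing": {"model": "per seat", "starts_at_usd_month": 40, "public": True},
    "comparison_axes": ["routing latency", "CRM coverage", "governance controls"],
}

print(json.dumps(PRODUCT_PROFILE, indent=2))
```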

9) Sales roles mutate: workflow orchestration replaces activity volume

Entry-level revenue roles tilt toward workflow and signal orchestration:

  • Monitoring intent and citation signals across platforms

  • Triggering plays at the right moments

  • Running targeted one-to-many motions

  • Keeping CRM data aligned with reality

Meanwhile, AEs get more leverage: better account briefs generated automatically, synthesized follow-up recommendations, faster multi-threading support.

The ideal AE profile shifts from "activity hunter" to "systems architect." The interview question becomes: "How do you debug the agent that generates your pipeline?"

Hiring signals change accordingly. Systems thinking, tooling fluency, and comfort supervising AI outputs matter more than raw activity volume.

What gets automated first: research, first-draft outreach, meeting prep, CRM updates, follow-up sequencing. What stays human: judgment calls on deal strategy, relationship navigation, negotiation, and exception handling.

Most current sales teams aren't equipped for this shift yet. Expect a painful skills gap before the talent pipeline catches up and potential friction from reps whose comp plans reward activity rather than system improvement. The orgs that move first on comp model alignment will have an advantage in both retention and results.

10) Pricing bifurcates: enterprise gets predictability, SMB gets metered

Salesforce's signaling around seat-based pricing for agents is a tell: CFOs want predictability, not metered uncertainty. But vendors, especially those paying high inference costs, are incentivized to protect margins through usage-based models. The result is segmentation.

Expect:

  • Enterprise: Flat or hybrid constructs to get budget approval. Value-per-workflow becomes the sales conversation: time saved, conversion lift, reduced churn risk, not cost-per-token.

  • SMB: Usage-based pricing persists because deal sizes don't justify custom packaging.

What gets sold as seats: governed agents, admin controls, premium support. What stays usage-based: execution volume, API calls, compute-intensive tasks.

2026 will be the peak of pricing model confusion, not the resolution.

11) The "agent spam" crisis accelerates a return to human-verified channels

This might be the hottest take of the bunch: if every GTM team deploys outbound agents, inboxes become unusable.

In 2025, we saw early signs: LinkedIn DMs flooded with AI-generated “personalization” that wasn't personal, email sequences that felt like they were written by the same model (because they were). In 2026, this gets worse before it gets better.

The counter-response: verified human channels gain value. Expect renewed investment in:

  • In-person events and field marketing

  • Gated communities with identity verification

  • Direct mail and physical touchpoints

  • Referral and warm intro programs

  • Executive engagement through trusted networks

As AI makes digital outreach cheaper, it makes human presence scarcer, and therefore more valuable. The teams that maintain authentic human touchpoints will have a structural advantage in cutting through the noise.

How to measure whether this is happening:

  • Cold outbound reply rates decline year-over-year

  • Domain reputation and deliverability become harder to maintain

  • Event CPL rises but conversion-to-pipeline improves

  • Warm intro conversion rates outperform cold by a widening margin

Position this as a counter-trend, not an inevitability. Some categories and buyer personas will be more affected than others. But if your pipeline depends on cold outbound, 2026 is the year to diversify.

One wildcard: if email service providers and platforms deploy effective agent-detection filters, the inbox might become usable again without requiring the shift to human channels. But betting on that feels optimistic given the arms-race dynamics.

The practical playbook

If you want to be ahead by the end of 2026:

  1. Pick five workflows where speed matters and errors are survivable. Start there.

  2. Define one metric per workflow that ties to revenue outcomes, not engagement proxies.

  3. Decide your agent control plane: where agents live, how they're observed, how they're shut off.

  4. Fund the data hygiene layer: deduplication, enrichment, lifecycle accuracy. Agents amplify garbage.

  5. Build your AI-discovery loop: measure citations, identify narrative gaps, ship content formats that get cited.

  6. Prepare for buyer-side agents: structure your product data for algorithmic parsing, not just human skimming.

  7. Train your people: the differentiator is operators who can supervise agents and improve systems.

  8. Governance by August: treat the EU AI Act deadline as your forcing function, even if you're not certain it applies to you.

  9. Audit your Shadow AI: get visibility into what unsanctioned tools are already in use before you can govern them.

  10. Protect your human channels: don't let AI outbound completely replace the touchpoints that cut through noise.

2026 watchlist

The signals that indicate the market is actually moving:

  • Agent-in-the-loop penetration: percentage of revenue workflows where agents draft, recommend, or execute (not just "AI tool adoption"). Track the levels separately: drafting assistance vs. recommendation with approval vs. autonomous execution.

  • Governance as procurement criterion: audit logs, permissioning, and observability required in vendor evaluations—not just nice-to-have.

  • AI discovery impact: changes in inbound quality, conversion rates, and narrative alignment, not just traffic or vanity metrics.

  • Buyer-agent readiness: structured data availability, comparison content depth, pricing transparency, attribution architecture experiments.

  • Control plane wars: which vendors successfully lock in agent orchestration, and which get bypassed by emerging orchestration layers.

  • Human channel premium: rising costs and attendance at in-person events, higher response rates on verified outreach, warm intro conversion widening vs. cold.

What would prove these predictions wrong:

  • If most deployments remain copilot-only and autonomous execution stalls → #1 and #4 are early

  • If procurement doesn't start requiring logs/permissions/audit artifacts → #2 is overstated

  • If strong outcomes happen without meaningful identity/lifecycle fixes → #3 is overstated

  • If orchestration innovation happens inside CRMs without lock-in games, or if CRMs simply acquire the orchestration leaders → #6 is overstated

  • If outbound response rates hold steady and verified channels don't outperform → #11 is vibes

  • If LLM reasoning capabilities plateau and complex agent handoffs remain error-prone → #1 and #4 timelines slip; "process owner" agents remain aspiration rather than operational reality

2025 was proof-of-possibility. 2026 is proof-of-competence.

Tags: AI in sales, AI GTM automation, B2B GTM, B2B AI trends, GTM engineering