The investor leaned back in his tufted leather chair and chuckled softly.
“Look, I like what you're building. But I have to be honest. I'm seeing a lot of AI pitches right now, and most of them are solutions looking for problems. The infrastructure spend is massive, but I'm not seeing the productivity gains show up in the numbers yet.”
On the screen between us: a PowerPoint slide telling a very different story.
Support teams cutting resolution times from double-digit minutes to low single digits. Engineering teams shipping code in hours instead of days. Marketers going from blank page to usable draft in a fraction of the time. All real, all live, all already inside companies that are just as messy and imperfect as any other.
The same technology, the same moment in history. Two completely different realities.
That’s the tension everyone is feeling right now. At the macro level, AI still looks like a giant question mark. But at the micro level, inside the bellies of actual businesses, at this very moment, it’s changing how work gets done.
You don’t resolve that tension by picking a side. You resolve it by noticing that there are two very different curves playing out on different timelines, and most of the conversation is stuck on the wrong one.
The Two Curves Nobody Names Out Loud
When people talk about an “AI bubble,” they’re usually staring at the first curve:
1. The infrastructure curve
This is the loud one: the GPU arms race, foundation models, data centers (in space!), platform bets. The numbers are mind-melting and very public.
You can watch a handful of companies commit hundreds of billions of dollars to chips and compute. You can watch governments scramble to write regulatory frameworks. You can watch benchmark charts leap upward every few months.
If you’re only looking at that curve, “hey, that’s a bubble” is an understandable reaction. The spend is immediate and enormous. The payoff? As foggy as a San Francisco morning in August.
The second curve is quieter and harder to see from far away.
2. The operational curve
This one takes shape inside teams:
A support org moves from “every ticket is written from scratch” to “AI drafts responses and humans approve or adjust.”
A product pod has an assistant summarizing research, rewriting specs, and generating copy options before anything hits design.
An engineering team uses an AI pair-programmer for boilerplate and glue code, keeping human focus on architecture and edge cases.
The net result is less drudgery, faster cycles, and fewer hours spent on work humans don’t need to do line-by-line anymore.
The infrastructure curve moves at the speed of capital allocation and model training. The operational curve moves at the speed of process change, internal politics, training, security reviews, and habit. (Read: slowly, unevenly, and mostly out of sight.)
So when someone says “AI isn’t paying off,” they’re pointing at the first curve. When someone else says “our team ships twice as fast now,” they’re living on the second.
They’re not talking about the same thing.
Inside a Company Where Both Stories Are True
Picture a mid-size company, big enough to have real complexity, small enough that nobody feels like they “own” AI.
On the board slide, there’s a bullet about “exploring AI use cases.” A few pilot projects, maybe a vendor logo or two. The CEO gets the same questions every quarter: Are we behind? Where’s the upside? What’s our AI strategy?
The conversation lives in that infrastructure frame: spend, risk, position, potential.
Inside the company, the reality is much messier:
A handful of engineers quietly turned on an AI coding assistant. They don’t talk about it much, but their time-to-complete on repetitive tasks has basically been cut in half.
Someone in support built a simple internal tool that drafts responses based on similar historical tickets. Agents edit instead of starting from a blank box.
A marketer started using a model to generate first-pass landing pages and sales emails, then rewrites them instead of staring at a blinking cursor.
None of this shows up as a capital project. These are local workflow tweaks. Nobody logs “AI impact” in a central place. There’s no “AI” line item on a dashboard.
From the top, the narrative feels like, “We’ve spent a bit, we’ve experimented, but it’s hard to point to real ROI.”
From the ground, people are thinking, “I’m never going back to how we did this last year.”
Both are telling the truth based on what they can see.
The Measurement Problem Masquerading as a Bubble
There’s a straightforward reason the AI story feels underwhelming at the macro level: we’re good at valuing the infrastructure side. We know how to talk about:
Market caps
Capital expenditure
Data center buildouts
Headcount shifts
GPU orders
We’re much worse at seeing the operational side:
How many hours did AI shave off this quarter’s product launch?
How much did faster support resolution actually move retention?
How often did an AI-assisted draft turn a task from a half-day to 20 minutes?
Those questions are answerable, but only if you instrument the workflows themselves. Most companies don’t. They track top-level outputs, not the micro-frictions that roll up into those outputs.
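To make that concrete, here is a minimal sketch of what instrumenting a workflow could look like. Everything in it is an assumption for illustration (the event fields, the task types, the baseline numbers); the point is only that “hours saved” becomes a query instead of a hunch once each unit of work is logged with an AI-assisted flag and a duration.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical task-level event: log one of these per unit of work,
# whether or not an assistant was involved.
@dataclass
class TaskEvent:
    team: str             # e.g. "support", "engineering", "marketing"
    task_type: str        # e.g. "ticket_response", "landing_page_draft"
    ai_assisted: bool     # did an assistant draft or accelerate this task?
    minutes_spent: float  # wall-clock time the human spent on it
    completed_at: datetime

def hours_saved(events: list[TaskEvent], baseline_minutes: dict[str, float]) -> float:
    """Roll up time saved on AI-assisted tasks against a pre-AI baseline per task type."""
    saved = 0.0
    for e in events:
        baseline = baseline_minutes.get(e.task_type)
        if e.ai_assisted and baseline is not None:
            saved += max(baseline - e.minutes_spent, 0.0)
    return saved / 60.0

# Usage: baselines come from historical averages before the assistant was turned on.
baseline = {"ticket_response": 12.0, "landing_page_draft": 240.0}
# hours_saved(this_quarters_events, baseline) -> a number a board deck can actually hold.
```

The specific schema matters far less than the habit: if the micro-frictions are never logged, they can never roll up.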
So early AI wins get treated like good vibes:
“It feels like we’re moving faster.”
“People say they spend less time on grunt work.”
“The team seems less buried.”
None of that makes it into a board deck. Economists don’t see it in quarterly productivity numbers. Investors can’t plug hunches or gut feelings into a model.
This is an attribution problem.
AI is already influencing purchasing decisions, sales cycles, and customer behavior in ways that don’t show up in the tools we use to track value. People are asking questions in ChatGPT and Perplexity instead of Google. They’re reading AI-generated summaries instead of your website. They’re copying answers from an assistant into internal docs and slide decks that shape buying decisions.
We can’t quantify what’s happening, not because AI isn’t working, but because we’re measuring with instruments built for a pre-AI web.
From far away, that looks like “no clear ROI.”
Up close, it looks like, “We got a lot more done this quarter, and we’re not totally sure why.”
What the Second Curve Looks Like Up Close
Take a support team that decides to stop dabbling and runs a focused redesign of one workflow around AI.
Before:
Human agents handle every ticket manually.
Resolution times hover around 10–15 minutes for common issues.
Backlogs spike at predictable times.
They run a narrow experiment, roughly the loop sketched in code after this list:
Identify the most repetitive, low-risk tickets (password resets, simple how-tos, basic status checks, etc.).
Train an assistant on historical tickets, policies, and docs.
Route those tickets to the assistant first, with humans reviewing and approving responses.
Track a few simple metrics: time to first response, time to resolution, escalation rate, satisfaction.
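Here is a rough sketch of that loop, assuming the team already has some way to classify tickets, draft with an assistant, and keep a human in the approval seat. Every function name below is a stand-in, not a real helpdesk API.

```python
import time

# Hypothetical low-risk categories routed to the assistant first.
LOW_RISK = {"password_reset", "how_to", "status_check"}

def handle_ticket(ticket, classify, draft_with_ai, human_review, metrics):
    """One pass through the experiment: the assistant drafts low-risk tickets,
    a human approves, edits, or escalates, and the timings get recorded."""
    start = time.monotonic()
    category = classify(ticket)
    ai_first = category in LOW_RISK

    draft = draft_with_ai(ticket) if ai_first else None
    reply, escalated = human_review(ticket, draft)  # human always has the final say

    metrics.append({
        "category": category,
        "ai_first": ai_first,
        "minutes_to_resolution": (time.monotonic() - start) / 60.0,
        "escalated": escalated,
    })
    return reply
```

Time to first response and satisfaction would slot into the same record; what matters is that the metrics are captured per ticket, inside the workflow, rather than reconstructed from memory later.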
After a few weeks of tuning prompts, fixing edge cases, and tightening guardrails:
That slice of tickets now resolves in three minutes instead of twelve.
People spend more time on weird, complex problems instead of copy-pasting answers.
The backlog chart stops looking like a heart monitor.
Zoom out and it’s still just one workflow in one team.
Now imagine the same pattern in engineering:
Boilerplate and tests get drafted by an assistant.
Engineers spend more time on architecture, debugging, and design.
Cycle times quietly compress, even if nobody puts “thanks, AI” in the release notes.
And in sales:
First drafts of proposals, follow-ups, and ROI summaries are generated from prior deals and internal knowledge.
Reps spend more time on discovery, qualification, and actual conversations.
Response times drop. Win rates nudge up in places that don’t have a clean, one-to-one explanation.
None of these changes announce themselves as “AI transformation.” They show up as a steady drip of “this is easier than it used to be.”
The second curve rarely arrives as a spectacular jump. It seeps through workflows as a thousand small, compounding tweaks.
If your measurement habits are built for giant, discrete changes, you’re going to miss it.
Why Leaders and Operators Talk Past Each Other
Once you see the two curves, a lot of the current confusion starts to feel less mysterious.
Leaders looking at top-level numbers, national stats, and high-level P&L say, “I don’t see a step-change in productivity. This feels overhyped.”
Operators inside teams say, “We just took work that used to eat half our week and turned it into an afternoon.”
Both are right in the frame they’re using.
The breakdown happens in the ugly middle, where nobody is connecting small, local time savings to big, system-level outcomes.
Attribution in this environment is inherently messy:
A sales cycle that shrinks by five days doesn’t leave a tag saying “this part belonged to AI.”
A product shipped one sprint earlier doesn’t split the credit between “better planning,” “fewer meetings,” and “the assistant that wrote half the boilerplate.”
An employee who doesn’t burn out because their job is less soul-sucking doesn’t send HR a note that says, “I stayed because the chatbot took the worst tasks off my plate.”
All of that gets tossed into the generic bin of “how things went this quarter.”
From a distance, the story becomes: “Huge spend, ambiguous payoff.”
From the ground: “My job is still stressful, but at least I’m not doing the dumbest parts of it manually anymore.”
The Curve That Actually Matters
The safest take right now is: “Give it time. The small wins will add up and eventually the overall data will catch up.”
Maybe, but that’s not really interesting.
The more interesting question is: who is even capable of seeing their own second curve?
Most organizations have dashboards built for a previous era:
Analytics that track web sessions and channel attribution, not AI-generated journeys.
CRM views that show opportunities and stages, but not the invisible influence of AI answers on which vendors make the shortlist.
Productivity metrics that treat all time as equivalent, regardless of whether it’s spent on thinking or formatting.
In that context, arguing about an “AI bubble” is a distraction. It’s like trying to assess the value of metal tools by counting the stone axes in your clan’s arsenal.
Or debating whether the combustion engine is overhyped while building a road system for horse-drawn carriages.
Or trying to judge the value of the internet using only the metrics from your mailroom and fax machines.
The organizations that will look unrecognizable in a few years won’t just be the ones that adopted AI tools. Plenty of companies will do that.
The real gap will be between:
Teams that treat AI as a vague “efficiency booster” and hope it shows up somewhere in the numbers, and
Teams that build the instrumentation to see, understand, and double down on their own second curve, where AI is already influencing workflows, decisions, and revenue in ways their old tools were never designed to catch.
Leaders asking “is this an AI bubble?” are missing the point.
The better questions are: “Inside our company, where are AI wins already happening? And why don’t we have the instrumentation to see them as clearly as we see our cloud bill?”
And when the macro story finally looks obvious in hindsight, the companies that pulled away will be the ones that realized early that the second curve was the main event (and treated it as such).