Anil Dash said something that's been rattling around in my head. Dash is the tech entrepreneur who ran Glitch, advised the Obama White House on digital strategy, and has written about the intersection of technology and culture for over two decades. In a recent interview with the New York Times, he explained why software developers are so much more enthusiastic about generative AI than everyone else:
"In the creative disciplines, LLMs take away the most soulful human parts of the work and leave the drudgery to you. And in coding, LLMs take away the drudgery and leave the human, soulful parts to you."
That's a slick line, but it's also incomplete.
Dash is describing two poles: the coder's paradise and the creative's nightmare. Spend any time on LinkedIn, X, or the conference circuit and you'll hear endless variations of this binary. AI either replaces you or empowers you. Pick a side!
But there's a third experience that doesn't get much airtime, and it's the one I live every day. What happens when your work depends on both the mechanical efficiency AI can provide and the human craft it can’t replicate? What happens when the drudgery and the soul are tangled together in the same deliverable?
That's the reality for anyone doing serious content strategy, brand narrative, or (in my case) optimizing how brands show up in AI-generated answers. The work has layers. There's a mechanical layer: crawling data, running queries at scale, mapping competitive landscapes, structuring audits. AI is extraordinary at this.
There's an analytical layer: interpreting what the data means, spotting narrative gaps, writing strategic recommendations. AI is surprisingly competent here too, though it still needs a human reviewer who knows the brand and the market.
And then there's the execution layer. The part where you actually produce the content. And this is where things fall apart.
Because when it comes time to write the content that fills those gaps, answers those queries, and earns citation in AI-generated results, the output can't be synthetic. It can't read like a language model wrote it. It has to be substantive, specific, and written with the kind of editorial judgment that makes a reader (or a model evaluating source quality) trust it.
The content itself has to have soul, while also doing the technical job of being visible to the systems that are increasingly deciding what gets cited and what gets ignored.
The Ouroboros of Slop
I was on a call recently with a marketing leader who'd just sat through a demo of a competing vendor's fully automated content platform. The pitch was seductive: feed in your URL, the system generates content, publishes it, measures it, and iterates. No humans required after setup.
His exact word for the experience was "PTSD."
The demo wasn't bad. It worked. The system did exactly what it promised. And that was the problem. He watched synthetic content get generated, published, measured by synthetic analysis, and then used to generate more synthetic content. A closed loop with no human in it. An ouroboros eating its own tail, each cycle a little more divorced from anything a real person would actually find valuable.
He wasn't afraid of AI. He was afraid of that: the logical endpoint of treating content as a manufacturing problem.
And his instinct was dead on.
Where the Handoff Happens
I've spent the better part of a year building AI-assisted content workflows from scratch. The technology changes the math on what's possible, but only if you're ruthlessly intentional about where the human handoff happens.
AI tools can do more than most people give them credit for. They deliver real efficiency gains, and anyone who ignores them out of principle is going to get outrun by someone who doesn't. But there's a boundary (and it's not always where you'd expect) where efficiency stops being the point. That boundary is the content itself.
Call it taste. Call it judgment. Call it experience. I call it soul, and I'm not being precious about it.
Soul is the difference between a blog post that technically addresses a buyer's question and one that makes them feel like the brand actually understands their situation. Soul is the voice, the specificity, the editorial choices that make a piece worth citing, worth trusting, worth coming back to. It's what a procurement manager senses, even subconsciously, when they're reading a vendor's content and deciding whether this company gets it.
No model gives you that. No automation loop generates it. It comes from years of writing, years of sitting in rooms with CMOs who are trying to articulate something they can feel but can't quite name, years of understanding that the gap between what a brand says and what a buyer hears is where all the interesting work lives.
The Craft Argument
The discourse wants this to be a technology argument. I think it's a craft argument.
The best cabinetmaker in the world uses power tools. Nobody serious argues they should hand-plane every board to prove their artistry. But nobody confuses the power tools with the cabinetmaker, either. The tool doesn't know what the piece is supposed to become. It doesn't understand proportion, or function, or the particular way this client lives in their kitchen. The tool removes drudgery. The craft remains.
I spent years working on films where a mistimed cut could kill the emotional beat of a scene. I spent years in ad agencies where the difference between a headline that sold and one that didn't came down to two words and an instinct you couldn't explain to the client. That's the part of the work no model has access to.
What I see happening in content (and especially in the emerging discipline of optimizing for AI-generated search) is a rapid sorting. On one side, you've got the fully automated platforms building ouroboros machines, promising scale and delivering slop. On the other, you've got traditionalists refusing to touch AI at all, falling behind on speed and scope while congratulating themselves on purity.
The interesting space is in the middle. People who have figured out how to run AI at full speed on the mechanical layer and then, at a very specific and very intentional point, take the wheel. People who treat AI as the most powerful research assistant ever built and the strategist as irreplaceable.
My own arc with this was fast. As a creative, my first reaction to LLMs was amazement. My second was a brief, visceral fear. And my third, which came surprisingly quickly, was recognition. An opening. A way to do the work I was already doing at a scale and speed that wasn't possible before, without giving up the part that actually matters.
That's the opportunity most people are missing while they argue about whether AI is good or bad. It's both. It's neither. The tools are here. They're fast, they're cheap, and they're getting better. The part they can't do is the part that was always hard. That hasn't changed.