Axios CTO Dan Cox just published a piece that every product and engineering leader should read carefully, but not for the reasons you think.
Cox cut his product and tech team from 63 to 43 people, doubled output in January, and expects to double again this month. Their 12-month backlog will be gone in months. His takeaway: tech debt is "gone, not because organizations solved it, but because AI made it irrelevant."
That's the headline everyone will run with. Here's the line that actually matters: competitive advantage is no longer velocity. It's narrative coherence.
Cox isn't celebrating speed. He's warning you that speed is now table stakes. And if speed is table stakes, then the companies still optimizing for it are already dead. They just don't know which kind of dead they are.
Two Ways to Die
The AI coding conversation has produced two distinct failure modes, and almost no one is cleanly separating them.
Technically dead is the failure mode everyone talks about. Your code doesn't work. Your AI-generated codebase is riddled with security vulnerabilities, your architecture can't scale, and your startup needs a six-figure engineering rescue engagement to untangle the spaghetti. According to a TechStartups analysis, roughly 10,000 startups built production apps with AI coding assistants. More than 8,000 now need rebuilds, at $50K to $500K each. The total cleanup bill: $400 million to $4 billion.
This is the vibe coding hangover. It's real, it's expensive, and I've written about the security debt dimension of this problem before. But it's also the failure mode that gets all the coverage because it's visible. Broken code announces itself. The other failure mode is quieter, and worse.
Directionally dead is when your code works fine but your strategy doesn't. You built the right thing technically and the wrong thing strategically. Your product ships, your demos impress, your metrics look healthy. But you're solving a problem that doesn't matter enough, in a market that won't sustain you, against competitors with structural advantages you can't overcome.
The technically dead know they're in trouble. The directionally dead think they're winning.
The Perplexity Parable
Want to see directional death in real time? At the Cerebral Valley Summit, more than 300 AI founders and investors voted Perplexity the AI unicorn most likely to fail. Not because the product is bad. By most accounts, it's excellent. The problem is structural: Perplexity doesn't own its models, its distribution challenge can't be solved by better product quality, and its revenue model requires outrunning free alternatives from the very companies it depends on for infrastructure.
This is what directional death looks like. Brilliant execution aimed at an impossible position. The engineers are shipping. The product is polished. The trajectory is a wall.
The Data That Should Worry You
The Bain Technology Report 2025 found that AI coding assistants deliver only 10-15% productivity gains for most teams. The reason is structural: writing and testing code accounts for just 25-35% of the time from initial idea to product launch. Speeding up code generation while leaving everything else unchanged barely moves the needle.
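A quick back-of-the-envelope calculation shows why, using Amdahl's-law-style logic: only the coding slice of the lifecycle gets faster. The specific split below (coding as 30% of idea-to-launch time, AI making that slice 1.5x faster) is an illustrative assumption within Bain's 25-35% range, not Bain's own model.

```python
def overall_speedup(coding_share: float, coding_speedup: float) -> float:
    """Amdahl's-law-style estimate: only the coding share of the lifecycle speeds up."""
    return 1 / ((1 - coding_share) + coding_share / coding_speedup)

# Illustrative assumption: coding is 30% of idea-to-launch time, AI makes it 1.5x faster.
gain_pct = (overall_speedup(0.30, 1.5) - 1) * 100
print(round(gain_pct, 1), "% faster overall")  # ~11.1% faster overall
```

Even a dramatic speedup on the coding slice lands squarely in the 10-15% range once the rest of the lifecycle stays the same speed.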
But here's where it gets interesting. Faros AI's analysis of the same data reveals something that should alarm anyone conflating code velocity with business velocity. Developers using AI completed 21% more tasks and merged 98% more pull requests. But company-wide delivery metrics showed no improvement in throughput or quality. PR review time increased 91%. Bugs per developer increased 9%.
Read that again: 98% more code merged. Zero improvement in what actually shipped to customers.
This is the speed trap. You can measure engineering velocity going up while business velocity stays flat, or even declines. The code is moving faster. The company isn't.
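Here's a minimal sketch of what separating those two signals in a quarterly review might look like. The numbers, field names, and threshold are invented for illustration; they're not drawn from the Faros AI or Bain reports, and "features that moved a customer metric" stands in for whatever outcome measure your team already tracks.

```python
from dataclasses import dataclass

@dataclass
class QuarterMetrics:
    prs_merged: int            # engineering-velocity signal
    features_moving_kpis: int  # business-velocity signal: shipped features that moved a customer metric

def pct_change(before: float, after: float) -> float:
    """Percent change from one quarter to the next."""
    return (after - before) / before * 100

def velocity_gap(prev: QuarterMetrics, curr: QuarterMetrics) -> dict:
    """Flag the 'speed trap': engineering velocity up, business velocity flat."""
    eng_delta = pct_change(prev.prs_merged, curr.prs_merged)
    biz_delta = pct_change(max(prev.features_moving_kpis, 1), curr.features_moving_kpis)
    return {
        "engineering_velocity_change_pct": round(eng_delta, 1),
        "business_velocity_change_pct": round(biz_delta, 1),
        "speed_trap": eng_delta > 50 and biz_delta <= 0,  # illustrative threshold
    }

# Hypothetical quarters echoing the pattern above: PRs merged nearly double,
# but the count of features that moved a customer metric stays flat.
before = QuarterMetrics(prs_merged=400, features_moving_kpis=6)
after = QuarterMetrics(prs_merged=790, features_moving_kpis=6)
print(velocity_gap(before, after))
# {'engineering_velocity_change_pct': 97.5, 'business_velocity_change_pct': 0.0, 'speed_trap': True}
```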
The 20,000-Line Parable
Developer Joe Masilotti wrote about spending two weeks using Claude to build a complete newsletter platform: 20,000+ lines of code with authentication, subscriptions, analytics, and email integrations. It worked perfectly. Then he deleted all of it.
His realization: "I didn't have a code problem. I had a decision problem." An afternoon of strategic thinking revealed the answer didn't require any code at all.
Before AI, the friction of building forced natural pause points. When implementing a feature took weeks, you thought hard about whether it was worth building. That friction was a feature, not a bug. It created space for judgment. AI removes that friction, and with it, the forcing function that made teams stop and ask: should we?
This connects to something I explored in the AI capability gap piece: the real crisis isn't that organizations can't adopt AI. It's that they're adopting it without the organizational discipline to aim it. The 95% pilot failure rate from the MIT study isn't a technology problem. It's a judgment problem operating at scale.
Decision Debt: The Debt Nobody's Tracking
The tech industry has spent two decades talking about technical debt: the accumulated cost of taking shortcuts in code. We've built entire methodologies around measuring it, managing it, and paying it down.
But there's another form of debt that accumulates faster and compounds harder, especially in the age of AI: decision debt. The cost of shipping without validating whether you're solving the right problem.
When building was expensive, decision debt accumulated slowly. You could only make so many wrong bets per quarter because each bet took weeks or months to execute. AI removed that governor. Now a team can build and ship a wrong bet in a day. The decision debt compounds at the speed of code generation.
This is what I think Cox at Axios is actually pointing to with "narrative coherence." It's not a branding concept. It's a strategic discipline: the ability to articulate why you're building what you're building, and to have every feature, every sprint, every shipped artifact reinforce a coherent story about the value you deliver. Companies without that coherence are generating decision debt with every AI-accelerated sprint.
I saw this pattern at Capital One, long before AI coding tools existed. The most dangerous projects weren't the ones that ran into technical problems. Those got caught in code review, in testing, in deployment. The dangerous projects were the ones that executed flawlessly on a thesis that was slightly wrong. They shipped on time, hit their metrics, and the team celebrated. Six months later, the feature was deprecated because the customer need had been misread from the start.
AI makes that failure mode both faster and cheaper to execute, which sounds like it reduces the stakes. It doesn't. It increases the volume. Instead of one well-resourced wrong bet per quarter, you get ten AI-accelerated wrong bets per month.
The EOD Parallel
In Navy EOD, speed was never the metric that mattered. Precision was. The most dangerous technicians weren't the slow ones; they were the fast, confident ones aimed at the wrong device. Speed amplifies whatever direction you're already pointed, and if that direction is wrong, you don't get a second attempt.
The AI coding boom has created this exact condition at organizational scale. Teams are moving faster than ever. The question nobody's asking often enough: faster toward what?
What This Actually Means for Leaders
If you're a CTO reading the Axios piece and thinking "we need to cut our team and ship faster," you're drawing the wrong lesson. Cox didn't just cut headcount and add AI. He restructured around narrative coherence, a strategic filter for what gets built.
The companies getting 25-30% productivity gains (versus the 10-15% average that Bain found) aren't the ones that adopted AI coding tools the fastest. They're the ones that applied AI across the entire development lifecycle: prioritization, validation, delivery, and measurement. They treated AI as a judgment amplifier, not just a code accelerator.
Here's what I'd tell any product or engineering leader navigating this:
Stop measuring engineering velocity in isolation. If your PRs merged went up 98% and your customer outcomes didn't change, you have an alignment problem, not a productivity win.
Invest in validation before acceleration. The cheapest wrong bet is the one you don't build. Before AI-accelerating a roadmap item, invest an afternoon (like Masilotti did) asking whether it should exist at all.
Track decision debt explicitly. How many features that shipped last quarter failed to move a customer metric? That number is your decision debt balance; a rough sketch of tracking it follows below. AI will make it grow faster unless you build the discipline to counteract it.
Use speed for learning, not just shipping. The organizations that thrive in this environment are the ones that use AI-accelerated building to run more experiments, not ship more features. Learning velocity beats shipping velocity.
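For the decision-debt point above, here's one way a team might make the balance visible. This is a sketch under assumptions, not a standard metric: the feature names and the moved_customer_metric flag are hypothetical placeholders for whatever outcome data your analytics already capture.

```python
from dataclasses import dataclass

@dataclass
class ShippedFeature:
    name: str
    moved_customer_metric: bool  # did any tracked customer KPI change attributably?

def decision_debt_balance(features: list[ShippedFeature]) -> dict:
    """Count shipped work that produced no measurable customer outcome."""
    dead_weight = [f.name for f in features if not f.moved_customer_metric]
    return {
        "features_shipped": len(features),
        "decision_debt": len(dead_weight),
        "decision_debt_ratio": round(len(dead_weight) / max(len(features), 1), 2),
        "dead_weight": dead_weight,
    }

# Hypothetical quarter: five features shipped, two changed a customer metric.
last_quarter = [
    ShippedFeature("saved-searches", moved_customer_metric=True),
    ShippedFeature("dark-mode", moved_customer_metric=False),
    ShippedFeature("csv-export", moved_customer_metric=False),
    ShippedFeature("onboarding-rework", moved_customer_metric=True),
    ShippedFeature("ai-summary-widget", moved_customer_metric=False),
]
print(decision_debt_balance(last_quarter))
# {'features_shipped': 5, 'decision_debt': 3, 'decision_debt_ratio': 0.6,
#  'dead_weight': ['dark-mode', 'csv-export', 'ai-summary-widget']}
```

The ratio trending up quarter over quarter is the signal that AI-accelerated shipping is outrunning the judgment aimed at it.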
The Bottleneck Moved
The AI coding debate is stuck on the wrong question. Optimists say AI makes engineers 10x more productive. Skeptics say the gains are unremarkable. Both are arguing about velocity when the real constraint has shifted to direction.
I wrote about this dynamic from a different angle in The AI Software Selloff: Wall Street was pricing the demo, not the deployment. The same pattern applies here. The industry is celebrating the demo (look how fast we can build!) while ignoring the deployment reality (but are we building things that matter?).
The constraint was never code. It was always judgment. AI didn't eliminate the bottleneck. It moved it from the engineers' keyboards to the product leaders' decisions. The companies that recognize this shift will thrive. The ones still optimizing for speed will find themselves exactly where Perplexity is: building brilliantly toward a wall.
The technically dead at least know they need to rebuild. The directionally dead are still sprinting.