
The Capacity Problem

Ninety-five percent of enterprise AI pilots never reach production. The failure isn't technical. It's that most organizations lack the thinking infrastructure AI actually requires.

10 min read

There is a number that should be keeping more executives up at night than it apparently is: ninety-five percent of enterprise AI pilots fail to reach production. Not because the technology doesn’t work. Not because the data isn’t ready. Not because the vendor oversold and underdelivered (though that happens too). They fail because the organization itself doesn’t have the capacity to absorb what AI actually demands.

I wrote recently about the tool problem: the reflex to treat AI like another software purchase, another platform to evaluate and roll out. But there’s a deeper issue underneath that one, and it’s less comfortable to talk about. Even organizations that understand AI isn’t a tool-shaped problem, that get past the vendor evaluation stage and start doing real work, still stall. The pilots look promising. A few people do remarkable things. And then nothing scales.

This is the capacity problem. And it has very little to do with technology.

The trickle-down myth

The most common assumption I encounter is some version of this: get the tools in front of people, let the high performers figure it out, and the rest of the organization will follow. It sounds reasonable. It follows the pattern of every previous technology adoption. Someone becomes the Excel wizard, others watch over their shoulder, and eventually everyone learns the pivot table.

AI does not work this way. There is no trickle-down effect here that you can bank on.

Using AI well requires something fundamentally different from learning a new interface or memorizing a set of features. It forces you to think about the nature of your work, its component parts, its subtasks and dependencies, instead of just doing the thing you’ve been doing since you started. It requires you to articulate clearly what good looks like, to set goals and standards and reshape them along the way. This is not using a tool. This is interrogating your own process at a level most knowledge workers have never been asked to operate at.

The high performers who take to AI quickly aren’t succeeding because they’re more technically adept. They’re succeeding because they already think this way. They were already decomposing problems, already questioning workflows, already comfortable with ambiguity and iteration. AI gave them a lever for a muscle they’d already built. But that muscle is not common, and watching someone else use it does not build it in you.

The adoption gap
1. Learn an interface, like every other tool
2. Memorize features, share tips at stand-ups
3. Watch the power users, then copy their moves
4. Skill transfers by observation and repetition

What this actually requires

There is a term for what AI demands of its users, and it isn’t “prompt engineering” or “AI literacy” or whatever the current training-catalog language is. It’s metacognition. Thinking about thinking. Applying systems thinking to your own systems of work, and to thought itself.

Unless you have a natural predisposition for this kind of reflection, for being genuinely interested in how you do what you do and why, it is a near-complete non-starter. The vast majority of knowledge workers have no experience with this mode of thinking and, frankly, no particular desire to develop it. They want to show up, do the work they were hired for, and go home. That is not a character flaw. It is a perfectly reasonable orientation toward employment. But it is fundamentally incompatible with the idea that AI adoption will happen organically if you just provide the tools.

This is the part most enterprise AI strategies refuse to say plainly: the capacity for this kind of thinking is not evenly distributed, and no amount of corporate training will change that. You can teach someone to write a better prompt. You cannot teach someone to see their own work as a system of decomposable parts if they have never thought that way before and have no interest in starting.

The metacognitive ceiling
All employees: 100%
Willing to try AI: 60%
Can sustain usage: 25%
Systems thinkers: 8%
Already doing it: 3%

What pilot purgatory actually looks like

The failure mode isn’t dramatic. Nobody sends an angry email. No project gets formally cancelled. What happens is quieter than that.

You get a spike of usage in the first few weeks. People try the tools on the one or two tasks they truly dread doing. Some find it helpful for drafting emails or summarizing long documents. Usage settles into a low plateau. The high performers are humming along, doing genuinely impressive work. Everyone else has found a narrow, comfortable lane that saves them maybe a few hours a week. Mostly quality-of-life improvements with no translation to the bottom line.

And so you sit with an expensive set of licenses and subscriptions generating marginal benefit. Leadership reviews the adoption dashboards, sees that people are logging in, and calls it progress. But the arrows are not going up and to the right. The transformative workflows never materialize. The organization is spending real money to make some people’s Tuesdays slightly less tedious.

(This, by the way, is the state that most enterprise AI programs are in right now. The ones that will admit it, anyway.)

BCG captured the math neatly: AI success is 10% algorithms, 20% data and technology, 70% people, processes, and cultural transformation. That 70% is the capacity problem. And most organizations are spending almost nothing on it.

Where AI success actually comes from (BCG)
Algorithms: 10%
Data & tech: 20%
People & process: 70%

The quiet resistance

There is another dimension to this that rarely makes it into the analyst reports. Some people will not adopt AI under any circumstances. Not because they haven’t been trained, not because they lack access, but because they have decided it is fundamentally untrustworthy. The “it’s a scam, LLMs hallucinate everything” position. They would rather be fired on principle than integrate a tool they believe is intellectually dishonest.

I am not dismissing this perspective. Some of the skepticism around AI is well-earned. But in organizational terms, this is passive resistance, and it can persist for a very long time before leadership even recognizes it as a pattern rather than a collection of individual reluctances. You might surface a few moderately useful adopters through some level of AI mandate, but the quiet refusal underneath will cost you in ways that don’t show up on any dashboard.

So if you can’t train your way out of this, and you can’t mandate your way through it, and you can’t rely on trickle-down from your best people, what’s left?

The skunk works move

You find the people who are already doing it. The ones who adopted AI before you bought the enterprise license. The ones who think about their work as systems, who are already experimenting, who have opinions about what they need to go further. They exist in every organization of meaningful size: usually scattered, usually unsupported, often working around IT policy rather than within it.

Then you do something that most enterprises find deeply uncomfortable: you invest in them disproportionately.

Not a lunch-and-learn. Not a center of excellence. Actual skunk works operations. Small, empowered teams with real problems to solve, time and space to experiment, and a direct line to share what they’re learning. Several of them, across departments, because the way AI reshapes legal research looks nothing like how it reshapes market analysis or content operations. You need people embedded in the actual work, not abstracted away from it.

And here is the critical part: you listen to what those people say they need. Not what the vendor says they need. Not what the consultant’s framework says they need. The people doing the work will tell you where the friction is, what’s missing, what would let them take things to the next level. Rapid experimentation, accommodated safely (whatever “safely” means for your particular organization), is the engine. Everything else is theater.

Then you have to make the harder decisions. The people who can be coached into this mode of thinking, you coach. The people who cannot or will not, you plan for. Whether that means slowly reshaping their roles or making faster structural changes depends on your organization’s risk tolerance and unique realities. But pretending the distribution doesn’t exist is how you end up in pilot purgatory permanently.

Building AI capacity: what actually works
1. Identify: find the people already using AI to think differently about their work, not just to automate tasks.
2. Invest: give them time, space, and resources for skunk works experimentation against real problems.
3. Listen: let them tell you what they need to go further. They know things the vendor doesn't.
4. Restructure: build outward from what works. Prepare to reimagine roles and workflows around demonstrated capability.

The compounding window

It is hard to say what the future of AI will bring. The landscape shifts every quarter in ways that are genuinely difficult to predict. But assuming our current trajectory holds, the skills, institutional infrastructure, and knowledge you build now will compound as AI itself grows and changes. This is the part that makes the urgency real: not the urgency to buy something, but the urgency to build something.

Organizations that are developing metacognitive capacity across their teams right now, that are running skunk works and learning from them, that are honestly assessing who can make this transition and who cannot, are not just ahead today. They are accelerating. The gap between them and everyone else is compounding in exactly the way that compound interest compounds: invisibly at first, then undeniably.

Waiting three months or six months will not feel consequential in the moment. It will feel consequential eighteen months from now, when the organizations that started earlier have institutional knowledge you cannot shortcut your way to, workflows you cannot replicate by purchasing a platform, and people who have been thinking about their work as decomposable systems for long enough that it’s become second nature.

No amount of corporate training will close that gap once it opens. The capacity to use AI well is not a skill you acquire in a workshop. It is an organizational muscle built through sustained, honest, uncomfortable work: looking at every corner of how your people operate and reimagining its shape to fit a fundamentally different kind of tool.

The tool problem was about resisting the urge to buy your way to AI transformation. The capacity problem is about what comes after: the realization that your organization’s ability to think about its own work is the actual bottleneck. And that bottleneck doesn’t resolve itself.

The compounding gap: the trajectories of organizations investing now versus waiting, from now through +18 months.
