
The Tool Problem

Knowledge organizations have spent two decades solving problems by buying software. AI doesn't work that way.


For the better part of two decades, solving an organizational problem has meant buying software. You need project management, you buy a project management tool. You need content management, you buy a CMS. HR, cloud capacity, customer data: the pattern is always the same. Identify the problem, evaluate the vendors, pick the tool, roll it out. It worked. Organizations developed a kind of muscle memory around this, and it served them well.

So when AI arrived on the executive agenda at research firms, consultancies, and other knowledge-intensive organizations, the instinct was the same. Which platform should we buy? Should we go with Claude or GPT? Do we need an enterprise license? The questions feel natural because they follow the playbook that’s worked for everything else.

But AI is not a tool-shaped problem. Not yet, and maybe not for some time.

The SaaS playbook vs. what AI actually requires:

- Identify the problem → Understand how work gets done
- Evaluate vendors → Map where AI fits (and where it doesn't)
- Buy the tool → Design a bespoke approach
- Roll it out → Iterate against real workflows

What you’re buying when you purchase an off-the-shelf AI solution is a harness. And a harness is useful, but it is simultaneously opinionated and agnostic. Opinionated in tooling: how its agents do work, what systems it connects with, how much you’re allowed to shape it. Agnostic in the ways that matter most: it will almost certainly not understand your specific organization’s knowledge stack, your intent, or the particular texture of how your analysts, researchers, and subject-matter experts actually work.

So if the tool isn’t the hard part, what is?

The work before the work

Consider something as seemingly straightforward as content. Say you’re in a highly technical industry and you want to produce whitepapers, blog posts, and marketing collateral drawn from your technical documentation. You want it to speak to your specific audience, in your specific voice, and read like it came from your organization rather than from the same machine as everyone else’s. This sounds like an AI problem. In some sense, it is. But before you can even begin to address it, you have to ask a series of questions that no tool will ask for you. Are your tech docs up to date? What form are they in? How do you set up a pipeline like this given your organizational constraints? Do other things need to change first before you can even think about this? (These are not optional prerequisites. They are the actual work.)

The failure mode is quiet. It’s not dramatic. A knowledge organization spends six figures on a platform and adoption simply… doesn’t happen. The analysts keep doing research the old way. The reviewers still read every document cover to cover. It becomes just another widget that a few people use to write emails or draft the occasional PowerPoint. Not because the tool is bad, but because the tool was never the problem.

The real question behind AI integration
- Deterministic: What must produce the exact same result every time, and how do you guarantee that?
- Probabilistic: What can vary within defined constraints, and what are those constraints?
- Human-in-the-loop: Where is human judgment irreplaceable, and how do you design for it?

It’s a technical problem and an organizational one

What I find myself explaining most often is this: AI, and agents more specifically, represent both a technical and an organizational problem. You cannot integrate them with a GUI; that’s just a surface to interact with. Underneath, what you’re actually confronting is a fundamental change in understanding how knowledge work gets done. What must be deterministic, and how do you achieve that? What should be probabilistic, and with what constraints? Where are the humans-in-the-loop? These are not questions a vendor can answer for you, because the answers look different for every single company.

Without a clear idea of what matters to you and why, the gap between stated and revealed preferences surfaces very quickly. A research firm says it wants AI-driven content. What it actually wants, once you start asking, might be cleaner documentation, or a faster review process, or simply for someone to tell it which of the twelve things on its list it should do first and which it should skip entirely. This is the nature of knowledge work: the problems are deeply contextual, the workflows are often invisible, and the highest-value interventions are rarely where leadership expects them to be.

The tool problem is almost never about the tool. It’s about the clarity that should have come before the purchase order.
