Managing AI Is Managing
For the past few months, I've been building things with Claude Code. Not just overseeing engineers building things, but building things myself: writing specifications, evaluating outputs, iterating when the result wasn't what I meant.
Somewhere around month two, I realized I wasn't learning to code. I was learning to manage.
The specification problem
When you work with an AI agent, you write a specification. The agent produces the work. You evaluate the result. If it's wrong, you don't fix the code. You fix the specification.
This loop should sound familiar to anyone who has ever managed people.
When your team delivers something you didn't want, the instinct is to think they got it wrong. But most of the time, the brief was ambiguous. You knew what you meant. They didn't.
AI makes this brutally clear. Unlike a human, an AI agent rarely asks "did you mean X or Y?" It just builds what you described. Every ambiguity gets silently resolved with machine assumptions instead of human intuition. You don't find out your spec was unclear until you see the result.
Specification, it turns out, is the same discipline that separates good managers from bad ones.
The flight simulator
When you manage humans, the feedback loops are slow. Direction on Monday, results on Friday, feedback, wait. Maybe twenty substantive cycles per person per year.
With AI, I get dozens of loops per day. Specify, see the result, realize the spec was ambiguous, rewrite, see the improvement. In an afternoon, I go through more iterations on communicating intent clearly than most managers get in a month.
It's management in a flight simulator. Same core skill, orders of magnitude more reps.
The skills that transfer
The parallels are structural, not superficial:
Letting go. The hardest part of management is trusting someone else to do work you could do yourself. Every engineer who resists AI because "I could write it better myself" is the same as a manager who can't stop micromanaging. AI forces you to confront this faster.
Evaluating output, not process. Good managers don't watch their team type. They evaluate the result against the intent. With AI, you can't meaningfully watch the process. You specify and you judge.
Owning the brief. When the AI produces something wrong, the problem is almost always my specification. I've learned to ask "what did I fail to specify?" before "what did the agent get wrong?" Great managers develop this reflex over years. AI develops it in weeks.
Anticipating ambiguity. After thousands of rounds of "my spec was unclear and the AI filled the gap wrong," you develop an instinct for spotting ambiguity before you hit send. You start reading your own writing from the receiver's perspective. This is the hardest management skill, and AI gives you more practice at it than any management course ever could.
The emotional side
The first time an AI agent rewrites something you spent hours designing, you feel it. Defensiveness, loss of ownership, a whisper of irrelevance. "If the machine can do this, what am I for?"
New managers feel the exact same thing. Watching someone solve a problem differently than you would have. The impulse to intervene. Learning to sit with that discomfort is the emotional core of delegation.
AI compresses this journey. In traditional management, you confront these feelings gradually over months. With AI, you hit them in week one. I've watched engineers go through what looks like the five stages of grief, from "this can't produce anything good" to "my job is to specify, not to implement."
The engineers who adapt fastest aren't the most technically skilled. They're the ones who can separate their identity from their output. That's not a technical trait. It's the same trait that makes someone a good manager.
AI doesn't train empathy or the ability to motivate someone through a bad week. But the resilience to delegate, to accept imperfect output, to own the brief instead of controlling the execution? That it trains at speed.
What this website is
This whole site is a set of specifications. Not code specifications. Transformation specifications. Documents precise enough that a team of humans can read them and know what to build, how to work, and what success looks like.
I wrote them the same way I write specifications for Claude Code: by iterating until the output matched my intent. The difference is that here, the "agents" are humans undertaking an organizational transformation, and the feedback loop is measured in months, not minutes.
Building with AI didn't teach me to code. It taught me to specify. That turns out to be the most important thing a CEO does.
The uncomfortable implication
If managing AI develops management skills at an accelerated rate, then the traditional career ladder (years as an IC, then team lead, then manager) is no longer the only path to management readiness.
A junior engineer who spends a year working with AI agents may develop stronger specification and delegation instincts than a manager with three years of experience, simply through sheer repetition.
This doesn't mean AI management replaces human management. But the cognitive core (clear communication, delegation, evaluation) and the emotional resilience it requires (letting go, accepting imperfection, owning the brief) are trainable at AI speed.
The IC track and the management track are starting to blur. That's probably a good thing.
The reframe
Engineers are uncomfortable with AI because it's unreliable. Same input, different output. That feels broken to someone trained on deterministic systems.
But managers have always operated this way. You give the same brief to two people and get different results. You learn to write better briefs, set clearer expectations, evaluate more carefully, and sit with the discomfort of not being in control.
Engineers working with AI aren't learning to tolerate unreliability. They're learning to manage. They just don't know it yet.
This post was written by François Lane after several months of building with Claude Code and realizing that the skills it developed had nothing to do with coding.
