The framework said AI transformation is an elevation. The research says it's also exhausting.
When I wrote this framework, I argued that AI transformation is an elevation. The shift from executing to directing. From producing to specifying. Higher-value work, more interesting, more human. I still think that's right.
What I missed — until the research caught up — is that "more interesting" and "more exhausting" aren't mutually exclusive.
In the last few weeks, three studies landed that I couldn't ignore:
- BCG and UC Riverside documented what they called "AI brain fry": cognitive fatigue from AI-oversight demands that exceed people's capacity. 14% of heavy AI users showed measurable symptoms: 33% higher decision fatigue, 39% more major errors, and 34% higher intent to quit.
- HBR published an eight-month study of AI adoption at a tech company. AI didn't reduce workload — it intensified it. People took on work they wouldn't have attempted before. Boundaries blurred. Multitasking increased.
- A large-scale analysis by ActivTrak found that AI users spent 27–346% more time on daily tasks. Email time doubled. Deep focus work fell.
None of this contradicts the framework. But it names a mechanism the framework wasn't talking about.
What I noticed looking at my own team
I'm not claiming burnout. I'm not claiming a crisis. What I am noticing are early signals worth paying attention to:
- Our most engaged T1.5 people — the ones actively building AI workflows — sometimes look more drained than the people further behind. That matches the research.
- When I ask "what are you spending your mental energy on?", the answers include a lot of "evaluating AI output" and "deciding which version to keep." That's the vigilance and decision fatigue the research describes.
- People are experimenting with more tools than I'd have predicted. I don't know yet whether that crosses the three-tool line the BCG study flagged as the tipping point.
None of this is proof. It's a set of observations that made me re-read my own framework and find it too optimistic.
What I did with that
I updated the framework. I added a page — The Cognitive Cost of AI Transformation — that names the eight challenges the research identifies and what to do about them. I revised Leading the Transformation to describe the cognitive J-curve alongside the productivity one. I added workload inflation to the pitfalls. I acknowledged in Vision that the transition has a real cognitive cost, not just reskilling friction.
I didn't add a playbook. Because I don't have one yet — not for my own team.
What I have is a hypothesis, a research-backed vocabulary, and an intention to pay closer attention. I'm flagging this out loud because I think the framework is more honest with it than without it — and because I'd rather be wrong about what my team is experiencing than be right too late.
What I'm watching for
- The T1.5 burnout pattern. The most engaged people doing the most cognitive work with the least established routines.
- Workload inflation. The temptation to raise output expectations proportionally to AI-enabled speed.
- "Stuck" people who might actually be overwhelmed, not resistant.
- Signs of learned helplessness — people deferring to AI without pushback. That's the dangerous one, because it looks like compliance.
I'll write a follow-up when I have more than observations. If the framework update turns out to be wrong, I'll revise it again.
What this is not
- Not a reversal. The transformation direction hasn't changed.
- Not a confession. Nobody on my team has burned out.
- Not a playbook. I don't have one yet.
It's just me noticing something I'd missed, and updating the framework before I had to.
