Assessing Your Organization
Turn the maturity model into an actionable team-by-team diagnostic.
Why Assess Before Transforming
Most organizations overestimate their AI maturity: 40% self-assess as AI-mature, but only 22% actually qualify when evaluated objectively (JumpCloud/IT Brew, 2025). The gap is predictable: leaders without deep technical knowledge rely on enthusiastic early adopters who present overly optimistic views (California Management Review, 2024).
An honest assessment prevents two expensive mistakes: investing in transformation when the foundation isn't ready, and delaying transformation because you think you're further along than you are.
This page turns the maturity model into a diagnostic you can apply team by team. The output is a maturity map of your organization — the starting point for Leading the Transformation.
The Assessment Method
Step 1: Apply the disappearance test per team
The Reference Framework provides the core diagnostic:
"If AI disappeared tomorrow, what would change for this team?"
- Nothing structural → Level 1 (AI-Assisted)
- Some workflows break → Level 2 (AI-Integrated)
- The team can't function → Level 3 (AI-Native)
Run this for every team independently. A company at Level 2 overall might have engineering at Level 2, marketing at Level 1, and customer service already approaching Level 3. The point of the assessment is to see the map, not the average.
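The per-team scoring above is mechanical enough to sketch in code. This is a minimal illustration, not a prescribed tool: the answer phrasings and team names are hypothetical, and a real assessment would capture free-form answers and map them by judgment.

```python
# Map each team's disappearance-test answer to a maturity level.
# The answer keys and team names below are illustrative placeholders.
DISAPPEARANCE_LEVELS = {
    "nothing structural": 1,    # Level 1: AI-Assisted
    "some workflows break": 2,  # Level 2: AI-Integrated
    "team can't function": 3,   # Level 3: AI-Native
}

def assess(answers: dict[str, str]) -> dict[str, int]:
    """Return a per-team level from disappearance-test answers."""
    return {team: DISAPPEARANCE_LEVELS[answer] for team, answer in answers.items()}

levels = assess({
    "engineering": "some workflows break",
    "marketing": "nothing structural",
    "customer service": "team can't function",
})
print(levels)  # {'engineering': 2, 'marketing': 1, 'customer service': 3}
```

Note that the output is deliberately a map of teams to levels, not a single averaged score, which matches the point above: see the map, not the average.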
Step 2: Validate with observable behaviors
The disappearance test gives you a starting hypothesis. Validate it by checking what people actually do, not what they say they do.
Level 1 signals — AI is a tool individuals choose to use:
- AI usage is optional and uneven across the team
- No shared prompts, templates, or documented workflows
- AI outputs are manually copied into work products
- If you ask team members to describe their AI workflows, the answers vary wildly or are vague
- The team's processes would look identical without AI
Level 2 signals — AI is embedded in workflows:
- Saved prompts, templates, or prompt libraries exist and are shared
- AI is used across multiple steps of a task, not just one
- Some processes have been redesigned around what AI can do
- New team members are onboarded into AI-integrated workflows
- Removing AI would break specific, identified workflows
- Legacy work patterns are being recognized and addressed
Level 3 signals — Humans direct, systems execute:
- Roles are defined by judgment and direction, not execution
- The team starts from "what should be automated?" not "should we use AI?"
- AI agents, pipelines, or decision systems are built and maintained
- Impact is measured: time saved, costs reduced, quality improved
- AI literacy is a condition of participation, not a bonus
Step 3: Score the gap between theoretical and actual usage
A more rigorous diagnostic borrows Anthropic's "observed exposure" methodology (Anthropic Economic Index, 2025): instead of asking what AI could automate, measure what it actually does automate.
For each team, ask:
- What percentage of tasks could AI handle? (theoretical coverage)
- What percentage of tasks does AI actually handle? (observed usage)
- What's the gap?
The gap is your transformation opportunity. Across the economy, AI has 94% theoretical task coverage in technical roles but only 33% actual usage. Your team-level gaps will vary, but the pattern is consistent: most organizations use a fraction of what's available.
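The gap calculation itself is simple subtraction with a sanity check. A minimal sketch, using the economy-wide figures for technical roles cited above; the function name and validation are illustrative, not part of Anthropic's methodology:

```python
def coverage_gap(theoretical_pct: float, observed_pct: float) -> float:
    """Percentage-point gap between what AI could handle and what it does handle."""
    # Observed usage can't exceed theoretical coverage, and both are percentages.
    if not 0 <= observed_pct <= theoretical_pct <= 100:
        raise ValueError("expect 0 <= observed <= theoretical <= 100")
    return theoretical_pct - observed_pct

# Economy-wide figures for technical roles: 94% theoretical, 33% observed.
print(coverage_gap(94, 33))  # 61
```

Run this per team, not just once for the organization: the team-level gaps are what tell you where workflow redesign will pay off first.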
What Each Level Looks Like by Department
For detailed descriptions of what Level 1, 2, and 3 look like for specific role families — Engineering, Marketing, Customer Service, Sales, and Design — see the Skill Progression Map. It provides concrete behaviors, self-assessment questions, and external benchmarks for each level by department.
When assessing your teams, use the Skill Progression Map as your reference for what the observable behaviors should look like at each level.
Common Assessment Pitfalls
1. Confusing tool adoption with workflow integration
The most common error. "We use ChatGPT" is not Level 2. Level 2 means workflows have been redesigned around AI. The test: if you removed the AI tool, would the process break, or would people just go back to doing it manually?
2. Overestimating because one person is advanced
One power user does not make a team Level 2. The assessment is about the team's operating mode, not its best performer. Ask what the median team member's AI usage looks like, not what the maximum is.
3. Confusing enthusiasm with capability
70% of organizations place AI at the heart of their strategy, but most can't demonstrate tangible value (Wavestone, 2025). Strategy documents don't move the needle — redesigned workflows do.
4. Assessing once and assuming stability
AI capabilities change every few months. A team assessed at Level 1 last quarter may already have the tools and skills for Level 2 and simply hasn't been pushed to redesign its workflows. Reassess quarterly, not annually.
5. Using metrics that can be gamed
When assessment metrics become targets, they stop being reliable measures (Nature, 2022). "Number of AI prompts per day" or "percentage of tasks using AI" can be inflated without real workflow change. Focus on outcomes: what has been redesigned, what broke when AI was temporarily unavailable, what measurable improvements have been documented.
Building Your Maturity Map
The output of this assessment is a map, not a score. Each team gets a level. The map shows where you are and — more importantly — where the gaps are.
| Team | Current level | Key signal | Biggest gap |
|---|---|---|---|
| Engineering | Level 2 | Shared prompt templates, AI in code review | Not yet spec-driven; humans still write most code |
| Marketing | Level 1 | Individual AI use for drafts | No shared workflows, no systematic integration |
| Customer Service | Level 2 | AI handles 40% of tickets | Agents not yet retrained for AI-trainer role |
| Sales | Level 1 | Email drafts only | 70% of time still on non-selling tasks |
| Design | Level 1 | Mood boards and ideation | No production workflow integration |
This is an example. Your map will look different.
What to do with the map
- Identify the natural first-mover. Which team is closest to Level 2 (or already there)? That's where transformation will compound fastest. The research consistently points to customer service as the default candidate.
- Identify the blockers. Which teams are stuck at Level 1 with no readiness signals? What's missing — tools, training, management support, or willingness?
- Design the sequence. Use the map as input to Leading the Transformation, which provides the operational framework for moving teams through the levels.
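The maturity map and the triage steps above can be sketched as a small data structure. The rows reuse the example table; the `first_movers` and `blockers` helpers are hypothetical names for the two triage questions, not a standard tool:

```python
from dataclasses import dataclass

@dataclass
class TeamAssessment:
    team: str
    level: int  # 1-3, from the disappearance test
    key_signal: str
    biggest_gap: str

# The illustrative map from the table above.
maturity_map = [
    TeamAssessment("Engineering", 2, "Shared prompt templates, AI in code review",
                   "Not yet spec-driven; humans still write most code"),
    TeamAssessment("Marketing", 1, "Individual AI use for drafts",
                   "No shared workflows, no systematic integration"),
    TeamAssessment("Customer Service", 2, "AI handles 40% of tickets",
                   "Agents not yet retrained for AI-trainer role"),
    TeamAssessment("Sales", 1, "Email drafts only",
                   "70% of time still on non-selling tasks"),
    TeamAssessment("Design", 1, "Mood boards and ideation",
                   "No production workflow integration"),
]

def first_movers(teams: list[TeamAssessment]) -> list[str]:
    """Teams at or past Level 2 -- where transformation compounds fastest."""
    return [t.team for t in teams if t.level >= 2]

def blockers(teams: list[TeamAssessment]) -> list[str]:
    """Teams stuck at Level 1 -- check for missing tools, training, or support."""
    return [t.team for t in teams if t.level == 1]

print(first_movers(maturity_map))  # ['Engineering', 'Customer Service']
print(blockers(maturity_map))      # ['Marketing', 'Sales', 'Design']
```

Even this toy version makes the sequencing question concrete: the first-mover list tells you where to start, and each blocker's `biggest_gap` field tells you what to fix before that team can move.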
The most repeated finding across all the research: workflow redesign — your Level 1 to Level 2 transition — is the #1 predictor of financial value capture. That transition is the one to prioritize (McKinsey, 2025).
Sources
- MIT CISR (2024). "Building Enterprise AI Maturity." cisr.mit.edu
- Anthropic (2025). "Labor Market Impacts of AI." anthropic.com
- BCG (2025). "From Potential to Profit." bcg.com
- Worklytics (2025). "2025 AI Adoption Benchmarks." worklytics.co
