Glossary

Definitions of concepts used in the AI transformation.


AI Maturity

AI-Assisted — AI is a personal tool; nothing structural changes if it disappears. See the reference framework.

AI-Integrated — AI is embedded in workflows; roles shift from doing to directing. See the reference framework.

AI-Native — Work design assumes AI as a first-class resource; roles defined by judgment, not execution. See the reference framework.

AI-Supportive — Leadership endorses AI personally without pushing organizational adoption. See the reference framework.

AI-Operational — Leadership sets role-based expectations and funds automation before hiring. See the reference framework.

AI-Strategic — Leadership redesigns the organization around AI and makes AI literacy a condition of leadership. See the reference framework.

AI-Aware — Individual uses AI ad hoc without changing workflows. See the reference framework.

AI-Augmented — Individual integrates AI into recurring workflows systematically. See the reference framework.


AI Engineering

Autonomous production (Rung 5)

Engineering model in which a spec goes in and working software comes out, with no human intervention in the code itself. The human defines architecture, constraints, and scenarios; AI produces, tests, and iterates on the code. Also known as a dark factory. See the AI Lab.

Assisted coding (Rung 0)

Development mode where the human codes and AI suggests completions. The lowest level of AI assistance in software engineering.

Non-interactive development

Working mode where specifications and scenarios drive autonomous agents. The human doesn't code and doesn't converse with the agent during execution. See the AI Lab.

Scenarios

End-to-end user journeys that describe expected behavior from the user's perspective. Favored over unit tests because they are harder for agents to circumvent. See the AI Lab.
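
As an illustration (the field names and the journey are invented, not the AI Lab's format), a scenario can be written as data: a whole journey with its expected outcome stated from the user's perspective rather than as a single code-level assertion.

```python
# Hypothetical encoding of a scenario: an end-to-end journey whose
# expectation is an observable outcome, not an internal code detail.
scenario = {
    "name": "first order",
    "journey": [
        "user adds an item to the cart",
        "user enters a shipping address",
        "user pays with a saved card",
    ],
    "expected_outcome": "user sees a confirmation with a tracking number",
}
```

Because the expectation covers the whole journey, an agent cannot satisfy it by gaming one assertion, which is the property that makes scenarios harder to circumvent than unit tests.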

Satisfaction metric

Evaluation approach that measures the fraction of trajectories across all scenarios that satisfy the user, rather than a binary green/red test result. See the AI Lab.
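
A minimal sketch of the computation, assuming each trajectory is simply labeled satisfied or not (the `Trajectory` type and the example runs are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    scenario: str    # which scenario this run exercised
    satisfied: bool  # did the outcome satisfy the user?

def satisfaction(trajectories: list[Trajectory]) -> float:
    """Fraction of all trajectories, across scenarios, that satisfied the user."""
    if not trajectories:
        return 0.0
    return sum(t.satisfied for t in trajectories) / len(trajectories)

runs = [
    Trajectory("checkout", True),
    Trajectory("checkout", False),
    Trajectory("signup", True),
    Trajectory("signup", True),
]
print(satisfaction(runs))  # 0.75, a fraction rather than a binary green/red
```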

Deliberate naivety

The stance of removing traditional development conventions and systematically asking: "Why am I doing this? The model should be doing it instead." See the AI Lab.

Greenfield

A project started from scratch, with no existing code. The most natural terrain for non-interactive development. See the AI Lab.

Brownfield

A project with existing code and habits, transitioned to the autonomous production model. Harder than greenfield, but more impactful. See the AI Lab.


AI Skills

AI literacy — Structured use of AI tools and the ability to distinguish ad hoc usage from workflow integration. See the employee guide.

Prompt craft — Clear instructions, specified format, examples, resolved ambiguity. See the execution standards.

Context engineering — Structured context file loaded before AI tasks. See the execution standards.

Intent engineering — Defined objective hierarchy, tradeoff rules, and escalation conditions. See the execution standards.

Specification engineering — Every non-trivial task has a complete written specification built from five primitives. See the execution standards and the Specification Engineering Guide for practical examples.

Specification — A document defining a problem precisely enough for an agent to solve it autonomously. See the execution standards and the Specification Engineering Guide.

Self-contained problem statements — Problem stated with enough context to be solvable without additional information. See the execution standards.

Acceptance criteria — What done looks like, verifiable by an independent observer. See the execution standards.

Constraint architecture — Four categories per task: Must, Must not, Prefer, Escalate. See the execution standards.
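
One way the four categories might be written down for a single task (the `TaskConstraints` type and the rules themselves are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class TaskConstraints:
    must: list[str] = field(default_factory=list)      # hard requirements
    must_not: list[str] = field(default_factory=list)  # hard prohibitions
    prefer: list[str] = field(default_factory=list)    # soft preferences
    escalate: list[str] = field(default_factory=list)  # stop and ask a human

migration = TaskConstraints(
    must=["keep the public API backward compatible"],
    must_not=["delete existing columns"],
    prefer=["batched writes over row-by-row updates"],
    escalate=["any change that could lose data"],
)
```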

Decomposition — Tasks broken into independently executable, testable, and integrable components. See the execution standards.

Evaluation design — Test cases with known-good outputs to validate and catch regressions. See the execution standards.
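
A minimal shape for this idea (the helper, the stub, and the cases are illustrative): pair inputs with known-good outputs, then report every case whose current output has drifted.

```python
# Illustrative golden set: inputs paired with known-good outputs.
golden = [
    ("FR", "France"),
    ("DE", "Germany"),
]

def regressions(run, cases):
    """Return (input, expected, got) for every case the current system gets wrong."""
    failures = []
    for prompt, expected in cases:
        got = run(prompt)
        if got != expected:
            failures.append((prompt, expected, got))
    return failures

# A stub standing in for the system under evaluation.
lookup = {"FR": "France", "DE": "Allemagne"}.get
print(regressions(lookup, golden))  # [('DE', 'Germany', 'Allemagne')]
```

An empty result means no regressions; anything else pinpoints exactly which known-good output changed.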

Seam design

The practice of structuring work so that transitions between human and agent phases are clean, verifiable, and recoverable. A good seam defines the handoff artifact, allows checking agent output at the transition point, and enables intervention without starting over. The seams shift as capabilities evolve. See the employee guide.


Transformation Economics

Value migration

Technology reassigns value to the scarcest layer. In the AI transformation, value leaves execution (commodity) and concentrates on judgment, framing, and risk ownership (premium). See the vision.

The 5 human functions

Direction, Judgment, Taste, Relationship, Accountability. The functions that remain irreplaceable in an AI-native organization. See the vision.


Role Evolution

Convergence — Multiple roles merge because AI removes the coordination overhead that justified separating them. The converged role retains the combined judgment surface. See Role Evolution.

Specialization — A role narrows to its irreducible human core as AI absorbs the routine layer. The role becomes sharper, not smaller. See Role Evolution.

Elevation — Humans shift from producing artifacts to specifying and evaluating them. Maps to the Universal Translation Rule. See Role Evolution.

Absorption — A role's responsibilities get absorbed into adjacent roles or systems. The responsibilities redistribute; the role contracts or disappears. See Role Evolution.

Emergence — Structurally new roles arise from the AI-native organizational structure. Named for their responsibility, not the technology. See Role Evolution.

Role Decision Matrix — A structured tool mapping observable conditions to the most likely evolution pattern and recommended action. See Role Evolution.


Adoption and Transition

Adoption J-curve

The predictable productivity dip during AI adoption. Productivity drops before it rises. Organizations that climb out are the ones that redesign their workflows around AI capabilities. See the manager guide.

Transition brief

A structured document delivered by an employee that describes their current role, AI-first vision, gap, systems to build, metrics, and 30/60/90 plan. See the employee guide.

AI clinics

Regular sessions (weekly or biweekly) where the team shares discoveries, blockers, and workflows. Short format (30 min). The goal is peer learning. See the manager guide.

Six-month wall

Failure pattern where AI-driven projects without strong human involvement (specs, scenarios, architecture) accumulate structural debt that explodes after roughly six months. Scenarios are the primary defense. See the AI Lab.

Calibration decay

AI skills expire as capabilities evolve. A person who calibrated their sense of the human-agent boundary six months ago is now either over-trusting current models or under-using them. The antidote is feedback density: frequent delegate-evaluate-adjust cycles with current models, not one-time training. See the manager guide.


← Back to home · The reference framework · AI Execution Standards