Coaching the 'Definition of Done': Why Output Completion Is Not Enough
In most agile teams, 'done' means the same thing: the code is written, the tests pass, the pull request is reviewed and merged, and the feature is deployed to the staging environment. This is a reasonable definition of technical completion. It is not a sufficient definition of value delivery. A feature can be technically done — fully functional, thoroughly tested, cleanly deployed — and still fail to change a single user behavior. The Definition of Done that most teams operate with tells them when they have finished building something. It says nothing about whether what they built accomplished the purpose for which it was built.
Expanding the Definition of Done to include outcome validation is one of the most structurally significant interventions available to an agile coach. It does not require new ceremonies or new roles. It requires the team to answer one additional question before a story is closed: 'Do we have evidence that this change moved the user behavior we hypothesized it would move?' Embedding this question in the DoD changes what the team treats as finished work — and in doing so, changes what the team builds next.
A DoD that includes measurement criteria changes what teams treat as finished work.
What Is Actually in Your Definition of Done?
Start with a DoD audit. Gather your team and ask everyone to independently write down the criteria they personally apply before marking a story 'done'. Then compare lists. In most teams, you will find significant variation — some team members apply a six-point checklist, others apply two criteria they have internalized. This variation is itself diagnostic: if the team does not share a common understanding of what 'done' means, they are not applying a consistent quality standard, which means 'done' is functionally meaningless as a shared commitment.
Once you have surfaced the variation, build a canonical list together. Most teams end up with a DoD that includes technical criteria (code reviewed, tests written and passing, feature flagged if appropriate, deployed to staging), documentation criteria (changelog updated, API docs current, product spec linked), and accessibility or design criteria (WCAG compliance checked, design reviewed against implementation). What is almost universally missing is any measurement criterion: 'Instrumentation is live and the behavioral metric associated with this story is being tracked.' This is the addition that begins to connect the DoD to the Lean UX outcomes framework.
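To make the canonical list concrete, here is a minimal sketch of a DoD expressed as a shared checklist, with the measurement criterion included alongside the usual categories. It is written in Python purely for precision; the category and item names are taken from the examples above and are illustrative, not a prescribed standard.

```python
# A hypothetical canonical Definition of Done, kept alongside the team's
# working agreement. Item wording is illustrative, not prescriptive.
DEFINITION_OF_DONE = {
    "technical": [
        "code reviewed",
        "tests written and passing",
        "feature flagged if appropriate",
        "deployed to staging",
    ],
    "documentation": [
        "changelog updated",
        "API docs current",
        "product spec linked",
    ],
    "accessibility_and_design": [
        "WCAG compliance checked",
        "design reviewed against implementation",
    ],
    # The criterion most teams are missing:
    "measurement": [
        "instrumentation is live",
        "behavioral metric for this story is being tracked",
    ],
}

def is_done(checked: set[str]) -> bool:
    """A story is 'done' only when every criterion in every category is met."""
    required = {item for items in DEFINITION_OF_DONE.values() for item in items}
    return required <= checked
```

The point of the is_done check is the shape of the rule, not the tooling: 'done' is the conjunction of every category, and measurement is a category like any other, not an optional extra.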
Instrumentation as a 'done' criterion ensures every shipped feature has a measurement plan.
Adding Measurement Criteria to the DoD
The measurement criterion does not mean that the feature must have demonstrated its outcome before it can be marked done. Behavioral outcomes often take weeks to measure with statistical confidence, and requiring that evidence before closing a story would grind sprint cycles to a halt. The measurement criterion means something more tractable: that the instrumentation needed to measure the outcome is in place, and that the team has defined the specific metric or proxy indicator they will track and the timeframe in which they expect to see movement.
This is the behavioral-outcome equivalent of writing acceptance criteria. Just as a story is not done until its acceptance criteria are met, a story is not done without a defined measurement plan. Coaching teams to write this measurement plan as part of story refinement — before the sprint begins, not after the story is shipped — creates a forcing function for outcome thinking. Teams that routinely write 'we will track X metric and expect to see Y change within Z weeks' before they build have effectively operationalized the Lean UX hypothesis structure within their existing agile workflow, without adding any new ceremony or documentation requirement.
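One lightweight way to hold the team to this is to give the measurement plan a fixed shape that refinement has to fill in. Below is a minimal sketch, assuming a plain Python representation; the field names and the 'PROJ-142' story key are hypothetical, not part of any tracker's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class MeasurementPlan:
    """Written during refinement: 'we will track X metric and expect
    to see Y change within Z weeks'."""
    story_id: str
    metric: str                     # X: the behavioral metric or proxy indicator
    expected_change: str            # Y: the movement the team hypothesizes
    window_weeks: int               # Z: how long until movement should be visible
    shipped_on: date | None = None  # set when the story is marked done

    @property
    def window_closes(self) -> date | None:
        """The date after which the team owes itself an answer."""
        if self.shipped_on is None:
            return None
        return self.shipped_on + timedelta(weeks=self.window_weeks)

# Example plan, written before the sprint begins:
plan = MeasurementPlan(
    story_id="PROJ-142",  # hypothetical story key
    metric="7-day return rate of new signups",
    expected_change="+3 percentage points",
    window_weeks=4,
)
```

Writing the plan in this shape during refinement means the story cannot be marked done until someone has named the metric, the expected change, and the window; vague intentions do not fit into the fields.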
Closing the Loop: The Outcome Review
A Definition of Done with measurement criteria creates a data obligation: the team committed to measuring something, which means someone needs to follow up when the measurement window closes. Build this into your sprint rhythm by adding a brief 'outcome review' segment to every other sprint retrospective. Pull the measurement plans from stories closed since the last outcome review and ask: Have any of these measurement windows closed? If so, what did the data show? This creates a consistent cadence for converting delivery commitments into learning commitments.
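Continuing the hypothetical MeasurementPlan sketch above, preparing for an outcome review reduces to a single filter: which plans have shipped and reached the end of their window? In practice this may be a saved query in your tracker rather than code; the helper below just makes the rule explicit.

```python
from datetime import date

def plans_due_for_review(plans: list[MeasurementPlan],
                         today: date) -> list[MeasurementPlan]:
    """Measurement plans whose windows have closed and are ready to be
    pulled into the next outcome review."""
    return [
        p for p in plans
        if p.window_closes is not None and p.window_closes <= today
    ]
```

Everything this filter returns is a learning commitment that has come due. A plan that shows up review after review without a data discussion is a sign the DoD is being honored in form but not in substance.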
Outcome reviews also create a team accountability mechanism that story-point tracking cannot. When the team collectively reviews whether the features they built moved the behaviors they predicted, they develop calibration over time — they get better at estimating what will and will not work, and they become more precise in their hypotheses. Teams that run outcome reviews for six months consistently report a shift in how they evaluate new work: they start asking 'what behavior are we trying to change?' about proposed features before they agree to build them, which is exactly the mindset that Lean UX is designed to cultivate.
The Bottom Line
The Definition of Done is a team's most powerful quality standard because it defines the floor below which no work will be shipped. Keeping it at technical completion keeps the floor at output. Raising it to include outcome measurement raises the floor to value. Teams that make this shift do not slow down — they redirect the effort that was going into building and shipping features toward understanding whether what they built was worth building. That redirection is, in the long run, the most significant efficiency gain available to a mature agile team.
Related Posts from Sense & Respond Learning
From Story Points to Outcomes: Coaching Teams to Measure What Matters
Writing Better User Stories: Why You Need 'Hypothesis Statements' Instead
Fixing Broken Standups: How to Run a Daily Sync That Actually Surfaces Blockers
The Two-Week Learning Cycle: Running Discovery and Delivery in Parallel
Further Reading & External Resources
Lean UX — Gothelf & Seiden (O'Reilly) — Core text on connecting agile delivery to behavioral outcomes
Outcomes Over Output — Josh Seiden — A focused primer on measuring value through user behavior change
The Scrum Guide — Schwaber & Sutherland — The source document on Scrum's Definition of Done as a quality commitment
Want to go deeper? This post is part of the Sense & Respond Learning resource library — practical frameworks for product managers, transformation leads and executives who want to lead with outcomes, not outputs.
Explore the full library at https://www.senseandrespond.co/blog