Why 'Velocity' Is a Vanity Metric (And What to Measure Instead)
Velocity is the most popular performance metric in agile product organizations and one of the least informative. Measuring velocity — the number of story points a team completes per sprint — tells you exactly one thing: how much work the team is producing. It says nothing about whether that work is creating value, driving user behavior change, contributing to business outcomes, or solving the problems it was designed to solve. A team can have excellent velocity and be building an excellent product. A team can also have excellent velocity and be rapidly constructing something that users will never adopt. Velocity cannot distinguish between these two situations.
The root of the velocity problem is that it is a pure output metric. It measures production, not value creation. In an era when most product value comes from solving customer problems well rather than from shipping code fast, output metrics are increasingly insufficient as a proxy for performance. Jeff Gothelf and Josh Seiden argue throughout their work — from Lean UX to Sense & Respond to Who Does What By How Much? — that the transition from output measurement to outcome measurement is one of the most important shifts a product organization can make. This post lays out how to make that transition in practice.
Velocity measures production. Outcome metrics measure whether that production created value.
What Velocity Measures (And Why That Is Insufficient)
Velocity was designed to serve a specific and legitimate purpose: giving agile teams a tool for capacity planning. If a team averages 40 story points per sprint, they can use that average to estimate how many sprints a given body of work will require. This is genuinely useful for project scheduling. The problem arises when velocity migrates from a planning tool to a performance indicator — when the organization starts using it to evaluate whether the team is performing well. At that point, velocity stops being a planning input and becomes an incentive: teams optimize for story point completion rather than value creation.
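The legitimate planning use of velocity is simple arithmetic. A minimal sketch (the numbers are illustrative, not from the post):

```python
import math

def sprints_needed(backlog_points: int, avg_velocity: float) -> int:
    """Estimate how many sprints a body of work will take,
    given the team's average completed story points per sprint."""
    return math.ceil(backlog_points / avg_velocity)

# A team averaging 40 points per sprint facing a 130-point backlog:
print(sprints_needed(130, 40))  # -> 4 sprints
```

Used this way, as a scheduling input the team keeps for itself, velocity is harmless. The trouble described below starts when that same number is reported upward as evidence of performance.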
The optimization pressure that velocity creates is not theoretical. Teams that know their velocity is being tracked will size stories smaller to inflate their sprint numbers. They will deprioritize exploratory, research-heavy work that generates learning but no story points. They will rush features to 'done' (as defined by deployment) to hit their velocity target, skipping the validation work that would tell them whether the feature actually produced value. Over time, a team optimized for velocity becomes a feature factory: highly productive in terms of output, increasingly disconnected from the user outcomes that justify that output.
Outcome Metrics vs. Output Metrics
Outcome metrics measure whether the team's work is changing user behavior in the ways the team intended. They come in two categories: leading indicators and lagging indicators. Lagging indicators are the business results that ultimately matter — revenue, retention, Net Promoter Score, customer lifetime value. These metrics confirm that value has been created, but they have two limitations as operational metrics: they are often slow to move in response to specific feature changes, and they are influenced by factors outside the product team's control (marketing spend, sales activity, macroeconomic conditions). Teams cannot manage to lagging indicators alone.
Leading indicators are behavioral metrics that predict lagging outcomes and move faster in response to specific product changes. For a team trying to improve user retention, leading indicators might include: frequency of core feature usage in the first 30 days, completion rate of the onboarding flow, activation of at least one integration or collaboration feature, or number of sessions per week among users in their first month. Each of these behaviors is a predictor of retention that the product team can directly influence through specific feature work. The key question for any leading indicator is: 'If this behavior increases, do we have evidence that retention will also increase?' If the causal link exists, the leading indicator is a valid proxy for the outcome the team cares about.
Leading indicators connect daily product decisions to the business outcomes you care about.
Designing Your Outcome Metrics Framework
Building an outcome metrics framework for your team starts with your OKRs (Objectives and Key Results). Each Key Result defines a behavioral outcome — who should do what, by how much. The metrics you track operationally should map directly to these Key Results: they are the leading indicators that predict whether the Key Results are on track. For each Key Result, identify two to four behavioral metrics that the team can observe in near-real time through your analytics platform. Instrument your product to capture these metrics, establish baselines, and review them in every sprint review alongside the delivery output.
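The mapping from Key Results to instrumented leading indicators can be kept as a small, explicit structure the team reviews each sprint. A minimal sketch (the Key Result statement, indicator names, and baseline/current values are hypothetical examples, not prescriptions):

```python
from dataclasses import dataclass, field

@dataclass
class LeadingIndicator:
    name: str        # behavioral metric observable in the analytics platform
    baseline: float  # measured before the work starts
    current: float   # updated each sprint from instrumentation

    @property
    def delta(self) -> float:
        """Movement against baseline; what the team reviews together."""
        return self.current - self.baseline

@dataclass
class KeyResult:
    statement: str                 # who does what, by how much
    indicators: list = field(default_factory=list)

kr = KeyResult(
    statement=("New users who activate a collaboration feature in their "
               "first 30 days rises from 20% to 35%"),
    indicators=[
        LeadingIndicator("onboarding completion rate", baseline=0.55, current=0.61),
        LeadingIndicator("invites sent in week 1 (avg)", baseline=1.2, current=1.5),
    ],
)

for ind in kr.indicators:
    print(f"{ind.name}: {ind.delta:+.2f} vs baseline")
```

Keeping baseline and current values side by side forces the conversation the section above describes: not "what did we ship?" but "did the behavior we instrumented actually move?"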
The discipline that makes this framework work is reviewing the outcome metrics as a team, not just reporting them upward. When the team sees its own behavioral data — when engineers and designers see directly how users are responding to features that just shipped — it creates a feedback loop that is more powerful than any KPI report. Teams that review behavioral data together develop intuitions about what drives outcomes and what does not. They start proposing hypotheses grounded in behavioral patterns rather than stakeholder preferences. The outcome metrics framework is not just a measurement system. It is a learning system. And learning is ultimately what separates teams that get better over time from teams that maintain a steady velocity of undifferentiated output.
Communicating the Shift to Stakeholders
One of the practical challenges of moving from velocity to outcome metrics is communicating the shift to stakeholders who have grown accustomed to velocity reporting. Executives who have been receiving sprint velocity reports as evidence of team performance need a new framework for evaluating progress. The key is to present outcome metrics not as a replacement for accountability but as a more meaningful form of accountability. 'Here is how fast we are shipping' is a less useful update than 'Here is how user behavior is changing as a result of what we have shipped, and here is what the trend tells us about whether we will hit our Q3 Key Results.'
The transition works best when it is gradual and evidence-led. Keep velocity reporting if it is politically necessary while introducing behavioral metrics alongside it. As the team accumulates evidence that behavioral metrics predict business outcomes more reliably than velocity does, make the case for reducing the emphasis on velocity in favor of the more informative signals. The ultimate goal is a stakeholder community that evaluates product teams on the question 'Are users behaving differently, and is that behavior driving the outcomes we care about?' rather than 'How many story points did the team complete this sprint?' This shift requires education, patience, and — most importantly — a track record of behavioral metrics that actually predicted business results.
The Bottom Line
Velocity tells you how fast the car is going. Outcome metrics tell you whether you are heading in the right direction. A team that has both — that ships consistently and tracks the behavioral results of what it ships — is the gold standard. But if you have to choose one to optimize for, optimize for outcomes. A team that ships slowly but consistently changes user behavior in valuable ways is a better team than one that ships quickly but generates no behavioral signal. Speed in the wrong direction is not a competitive advantage. It is a faster route to a product nobody uses.
Related Posts from Sense & Respond Learning
Stop Building 'Zombie' Features: How to Prune Your Backlog with Outcomes
Moving from Dates to Deliverables: How to Build an Outcome-Based Roadmap
Writing Better User Stories: Why You Need 'Hypothesis Statements' Instead
Moving L&D from Output (Course Completion) to Outcome (Behavior Change)
Further Reading & External Resources
Who Does What By How Much? — Jeff Gothelf & Josh Seiden — The definitive guide to writing OKRs as behavioral outcome metrics
Outcomes Over Output — Josh Seiden — A short, practical guide to shifting from output to outcome measurement
Measure What Matters — John Doerr — The widely read introduction to OKRs at Google and beyond
Want to go deeper? This post is part of the Sense & Respond Learning resource library — practical frameworks for product managers, transformation leads and executives who want to lead with outcomes, not outputs.
Explore the full library at https://www.senseandrespond.co/blog