Escaping the 'Build Trap': How Designers Can Lead via Outcomes
Melissa Perri's concept of the 'build trap' — the organizational condition in which teams measure success by features shipped rather than value created — is usually discussed as a product management problem. But designers are at least as susceptible to it, and often more so. A designer whose primary deliverables are screens, flows, and specifications is in the build trap by definition. Their job, as organizationally constructed, is to produce design artifacts: beautiful, well-considered, user-centered artifacts, but artifacts nonetheless. The measure of their success is whether the artifacts are approved and implemented, not whether the product they informed produces the behavioral outcomes it was designed to create.
Designers who want to escape the build trap — who want to be strategic partners in product decisions rather than execution resources who make the decisions look good — need to reframe their contribution around outcomes rather than outputs. This means shifting the conversations they lead from 'here is what I designed' to 'here is what we are trying to make users do differently, here is my hypothesis for how we do that, and here is how we will know if it worked.' This reframe positions design as a thinking discipline applied to outcome problems, not a production discipline applied to specification delivery.
Outcome-oriented designers ask 'what behavior change are we trying to create?' before opening any tool.
Reframing the Brief from Output to Outcome
Every design brief is an invitation to ask a better question. When a product manager hands a designer a brief that says 'Redesign the onboarding flow,' the output-oriented designer takes the brief at face value and begins designing a new onboarding flow. The outcome-oriented designer asks: 'What are we trying to change about user behavior through this redesign? What does the current onboarding flow cause users to do or not do, and what would we want them to do instead?' The answer to these questions transforms the brief from a design specification ('redesign the flow') into a hypothesis ('we believe that if we simplify onboarding to three steps rather than seven, more users will reach activation within their first session').
This reframe is not a refusal to do the work — it is a precondition for doing the work well. A designer who understands what behavioral outcome the redesign is supposed to drive makes better design decisions throughout the project. They know which elements of the existing flow are causing behavioral problems and can target those specifically, rather than redesigning everything and hoping for improvement. They know what success looks like, which means they can instrument the new design to measure whether it achieves the intended behavioral change. And they know when to stop iterating, because the success criterion is objective rather than aesthetic.
Using the 'Who Does What By How Much' Framework for Design Goals
The 'Who Does What By How Much' framework, from Jeff Gothelf and Josh Seiden's OKR guide, is as applicable to individual design initiatives as it is to organizational OKRs. Before beginning any significant design project, the designer should be able to answer: Who — which specific user segment is this design intended to serve? Does What — what specific behavior change is this design intended to produce? By How Much — what measurable change in that behavior would constitute success for this design? These three questions, answered before a single wireframe is sketched, create the outcome clarity that separates strategic design from execution-mode design.
For a redesign of a product's activation flow, the answers might look like: Who — trial users who have completed account creation but not yet created their first project. Does What — create their first project within 24 hours of account creation. By How Much — increase the 24-hour project creation rate from its current 22% to 35%. With these answers in hand, every design decision in the redesign can be evaluated against a clear criterion: Does this design choice make it more or less likely that a trial user who has completed account creation will create a project within 24 hours? This is a much better design question than 'Is this visually appealing?' or 'Does this match our design system?'
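The 'By How Much' criterion above is only useful if the underlying metric is computed consistently. As a minimal sketch, the 24-hour project creation rate from the example could be derived from two event timestamps per user; all names and data here are illustrative assumptions, not a real analytics schema:

```python
from datetime import datetime, timedelta

# Hypothetical event records; user IDs and timestamps are made up for illustration.
signups = {
    "u1": datetime(2024, 5, 1, 9, 0),
    "u2": datetime(2024, 5, 1, 10, 30),
    "u3": datetime(2024, 5, 2, 8, 15),
}
first_projects = {
    "u1": datetime(2024, 5, 1, 14, 0),  # within 24h of signup -> counts
    "u3": datetime(2024, 5, 4, 9, 0),   # after 24h -> does not count
}

def activation_rate(signups, first_projects, window=timedelta(hours=24)):
    """Share of signed-up users who created a first project within the window."""
    activated = sum(
        1 for user, signed_up in signups.items()
        if user in first_projects
        and first_projects[user] - signed_up <= window
    )
    return activated / len(signups)

print(f"24h project creation rate: {activation_rate(signups, first_projects):.0%}")
# 1 of 3 users activated -> 33%
```

Agreeing on this definition before design work starts (does the clock run from account creation or first login? does an abandoned draft count as a project?) is part of answering 'By How Much' honestly.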
Measurement is the designer's quality assurance — design the metric before you ship the feature.
Advocating for Measurement as Part of the Design Process
Outcome-oriented designers do not just design the product — they design the measurement. Before a feature launches, the outcome-oriented designer asks: how will we know if this works? What behavioral metrics will we track, at what time intervals, against what baselines? This question often reveals that the team does not have the instrumentation in place to measure the behavioral outcomes they care about — a gap that is much cheaper to fix before launch than after. Advocating for measurement instrumentation as part of the design specification is one of the most concrete ways designers can operationalize their commitment to outcomes.
This advocacy can feel like overstepping for designers who have been trained to see measurement as a product management or analytics function. The reframe is this: measurement is the designer's quality assurance. In a manufacturing context, the quality assurance step checks whether the physical product matches the specification. In an outcome-oriented design context, the quality assurance step checks whether the product produces the behavioral change it was designed to produce. A designer who ships a feature without a measurement plan is shipping without QA. The outcome metric is the spec that the feature needs to pass.
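One concrete way to make the measurement plan part of the design specification is to write it down as data and check it against the current instrumentation before launch. The structure below is a hypothetical sketch, not a standard artifact; the event names and fields are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementPlan:
    """A measurement plan attached to a design spec (illustrative, not canonical)."""
    metric: str                  # the behavioral outcome metric
    baseline: float              # current value of the metric
    target: float                # the 'By How Much' success criterion
    required_events: list = field(default_factory=list)  # instrumentation needed

    def instrumentation_gaps(self, tracked_events):
        """Events the plan needs that analytics does not yet track."""
        return [e for e in self.required_events if e not in tracked_events]

plan = MeasurementPlan(
    metric="24h first-project creation rate",
    baseline=0.22,
    target=0.35,
    required_events=["account_created", "project_created"],
)

# Run this check before launch: gaps are much cheaper to close now than after.
gaps = plan.instrumentation_gaps(tracked_events={"account_created"})
print(gaps)  # ['project_created'] -> instrument this before shipping
```

The point is not the code itself but the discipline it encodes: a design is not launch-ready until every event its success metric depends on is actually being tracked.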
Building Credibility Through Evidence
The transition from execution-resource designer to strategic-partner designer is not made through argument — it is made through evidence. Designers who can point to a track record of design decisions that produced measurable behavioral outcomes earn the organizational credibility that gives them access to earlier and more strategic design conversations. This track record is built one project at a time: define the behavioral goal before beginning, design toward that goal, measure the behavioral result after launch, and share the result explicitly — both when the design worked and when it did not.
Sharing failed designs is as important as sharing successful ones. A designer who can say 'We hypothesized that simplifying the onboarding flow to three steps would increase activation rates. We measured a 3% increase — meaningful but below our target. Here is what we learned about why, and here is our revised hypothesis for the next iteration' is demonstrating the kind of evidence-based discipline that earns strategic credibility. Design as hypothesis testing, with transparent learning shared across cycles, is the practice that escapes the build trap and positions design as a learning function rather than a production function.
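The onboarding example above distinguishes three post-launch outcomes: the target was met, the design improved the metric but fell short, or it produced no improvement. That classification can be made explicit and pre-declared, so the verdict is not negotiated after the fact. A minimal sketch, with illustrative thresholds:

```python
def evaluate_hypothesis(baseline, measured, target):
    """Classify a post-launch result against the pre-declared target.

    All values are rates (e.g. 0.22 for a 22% activation rate).
    The three-way split is an illustrative convention, not a standard.
    """
    if measured >= target:
        return "target met: hypothesis validated"
    if measured > baseline:
        return "improved but below target: keep the learning, revise the hypothesis"
    return "no improvement: hypothesis invalidated, revisit the design"

# The post's example: a 3-point absolute increase, meaningful but under target.
print(evaluate_hypothesis(baseline=0.22, measured=0.25, target=0.35))
# improved but below target: keep the learning, revise the hypothesis
```

Declaring the baseline, target, and verdict rules before launch is what makes the 'here is what we learned' conversation credible, in success and in failure alike.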
The Bottom Line
Escaping the build trap requires designers to stop measuring their success by the beauty and completeness of their deliverables and start measuring it by the behavioral change those deliverables produce in the product they inform. This shift is uncomfortable because it extends accountability beyond the handoff — beyond the moment when the design is approved and implementation begins. But that extended accountability is exactly what makes design strategic rather than executional. The most influential designers in any organization are not the ones who produce the best artifacts. They are the ones who consistently drive the outcomes that matter.
Related Posts from Sense & Respond Learning
The Death of the Handoff: Why 'Over the Wall' Design Is Failing
From Requirements Gathering to Assumption Declaring: A Mindset Shift
Writing Better User Stories: Why You Need 'Hypothesis Statements' Instead
Why 'Velocity' Is a Vanity Metric (And What to Measure Instead)
Further Reading & External Resources
Lean UX (3rd Edition) — Jeff Gothelf & Josh Seiden — The foundational text on outcome-driven design and its application to agile teams
Escaping the Build Trap — Melissa Perri — Essential reading on why features ≠ value and what to do about it
Strategic UX — UX Collective — A community resource for designers transitioning into strategic product leadership roles
Want to go deeper? This post is part of the Sense & Respond Learning resource library — practical frameworks for product managers, transformation leads and executives who want to lead with outcomes, not outputs.
Explore the full library at https://www.senseandrespond.co/blog