The 'Feature Fake': Testing Demand Without Wasting Engineering Time
Building software is expensive. A single feature — from design through QA to deployment — can consume weeks of cross-functional team time. And yet, the most common failure mode in product development is not bad execution. It is building the wrong thing well: executing brilliantly on a feature for a problem that does not exist at the scale the team imagined, or for a user need that could have been surfaced with a two-day experiment instead of a two-month build. The Feature Fake is a discipline for reversing that cost structure: test demand cheaply before you invest expensively.
The core idea is simple. Before you build a feature, create the minimal possible representation of that feature — a link, a button, a landing page, a mock email — and measure how real users respond to it. Not how users say they would respond in a survey or a focus group, but how they actually behave when they believe the feature exists and they can access it. The gap between stated and revealed preference is one of the most reliable findings in behavioral economics, and it is the reason why Feature Fakes outperform user interviews alone as a validation method.
Feature Fakes let you test demand before committing a single sprint of engineering time.
The 'Button to Nowhere' Technique
The simplest form of Feature Fake is the 'button to nowhere' — a UI element that appears real to the user but links to a holding page rather than live functionality. The user encounters the button in its natural context, makes a genuine decision about whether to click it, and you measure the click-through rate as a signal of demand. When the user clicks, they land on a page that honestly explains the feature is coming soon and offers them the option to join a waitlist or be notified at launch. You learn two things simultaneously: the organic rate at which users seek this functionality, and whether the users who clicked are interested enough to register for early access.
Implementing a button to nowhere correctly requires discipline. The button must appear in the natural context where users would encounter the real feature — not buried in a corner of the interface where click rates would be depressed by poor placement regardless of demand. It must be visually indistinguishable from real functionality, so you are measuring genuine intent rather than curiosity about a conspicuously different UI element. And it must be accompanied by a pre-defined success threshold: what click-through rate would you consider sufficient evidence to justify engineering investment? Define that threshold before you run the experiment, not after. Defining success criteria retrospectively is one of the most common ways product teams deceive themselves about the results of their validation work.
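To make this concrete, here is a minimal sketch of how a button to nowhere might be instrumented in plain TypeScript. Every name here (the element id, the event name, the collection endpoint, the holding-page URL) is an illustrative assumption; substitute whatever analytics tooling your product already uses.

```typescript
// A stand-in for your analytics SDK. sendBeacon survives page
// navigation, so the click is recorded even as the user leaves.
function trackEvent(name: string, properties: Record<string, string>): void {
  navigator.sendBeacon(
    "/api/events", // hypothetical collection endpoint
    JSON.stringify({ name, properties, ts: Date.now() })
  );
}

// The fake button sits exactly where the real feature would live.
const fakeButton = document.querySelector<HTMLButtonElement>("#export-pdf");

fakeButton?.addEventListener("click", () => {
  // Record the genuine click, then send the user to an honest
  // 'coming soon' page that offers a waitlist signup.
  trackEvent("feature_fake_click", {
    feature: "export-pdf",
    placement: "report-toolbar",
  });
  window.location.assign("/coming-soon/export-pdf");
});
```

The design choice worth noting: the click is logged before navigation, so your demand signal does not depend on the user completing anything on the holding page.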
The Landing Page Experiment
For features that are more complex than a single button can represent, the landing page experiment is the next level of the Feature Fake toolkit. Rather than a button within your existing product, you build a standalone page that describes the proposed feature in specific, concrete terms — what it does, who it is for, why it is valuable, and how to access it. You drive targeted traffic to the page through email to your existing users, organic search, or paid channels. The page includes a clear call to action — sign up for early access, join the waitlist, request a demo — and you measure the conversion rate as your demand signal.
The value of the landing page experiment is that it forces a discipline that most product teams skip: articulating the value proposition of a feature in terms a real user would care about, before the feature is built. Teams that try to write landing page copy for a proposed feature frequently discover that they cannot do it clearly — that the feature's supposed value is vague, overlapping with existing functionality, or compelling only to the internal stakeholders who championed it. That discovery, made before a sprint of engineering work begins, is enormously valuable. The landing page experiment is both a demand test and a clarity test. It surfaces ambiguity at the cheapest possible moment in the product development cycle.
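As a sketch of the measurement side, the snippet below turns raw landing-page events into a conversion rate. The event names and shape are assumptions for illustration; the detail that matters is de-duplicating by user, so repeat visits do not inflate the denominator.

```typescript
// Illustrative event shape; adapt to your own analytics export.
interface PageEvent {
  name: "landing_view" | "waitlist_signup";
  userId: string;
}

function conversionRate(events: PageEvent[]): number {
  // Count unique users, not raw events, on both sides of the ratio.
  const viewers = new Set(
    events.filter((e) => e.name === "landing_view").map((e) => e.userId)
  );
  const signups = new Set(
    events.filter((e) => e.name === "waitlist_signup").map((e) => e.userId)
  );
  return viewers.size === 0 ? 0 : signups.size / viewers.size;
}

const events: PageEvent[] = [
  { name: "landing_view", userId: "a" },
  { name: "landing_view", userId: "b" },
  { name: "waitlist_signup", userId: "b" },
];
console.log(conversionRate(events)); // 0.5
```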
Set your success threshold before you run the experiment, not after
What Data to Collect and How to Interpret It
Feature Fakes generate behavioral data, and behavioral data requires careful interpretation. Click-through rate on a button to nowhere is a leading indicator of interest, not a guarantee of adoption. Users who click a button for a feature that does not yet exist may not use that feature when it does exist, especially if the finished product fails to deliver the value they anticipated. Your job is to layer multiple signals: raw click rate, waitlist signups, and — where possible — early access adoption rates once a version of the feature does exist. No single data point is conclusive. The value is in the pattern.
Set your success threshold in advance. Before running a Feature Fake, your team should agree on what constitutes sufficient demand to justify the engineering investment. A reasonable starting point for a button to nowhere experiment is a click-through rate meaningfully above your baseline for comparable existing features. For a landing page experiment, define the minimum conversion rate at which the business case makes sense. If 1,000 targeted users see the landing page and 8 convert, that tells you something very different from 1,000 users and 220 conversions. Both results are informative, but only if you decided in advance what threshold matters. Feature Fakes without pre-defined success criteria are qualitative exercises dressed in the language of measurement.
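To put numbers on that comparison, here is a minimal sketch of a pre-registered threshold check. The 5% threshold is purely illustrative, not a benchmark, and the Wilson score interval is one reasonable way to account for sample noise; your team may prefer a different test.

```typescript
// 95% Wilson score interval for a conversion proportion.
function wilsonInterval(successes: number, trials: number, z = 1.96) {
  const p = successes / trials;
  const denom = 1 + (z * z) / trials;
  const centre = (p + (z * z) / (2 * trials)) / denom;
  const margin =
    (z * Math.sqrt((p * (1 - p)) / trials + (z * z) / (4 * trials * trials))) /
    denom;
  return { low: centre - margin, high: centre + margin };
}

const threshold = 0.05; // agreed before the experiment ran (illustrative)

for (const conversions of [8, 220]) {
  const { low, high } = wilsonInterval(conversions, 1000);
  const verdict = low > threshold ? "build" : "demand not demonstrated";
  console.log(
    `${conversions}/1000 -> 95% CI [${(low * 100).toFixed(1)}%, ` +
      `${(high * 100).toFixed(1)}%]: ${verdict}`
  );
}
```

Run against the example above, 8 conversions in 1,000 views sits below the threshold even at the top of its interval, while 220 clears it comfortably. The point is that the verdict rule existed before the data did.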
Integrating Feature Fakes into Your Sprint Rhythm
Feature Fakes work best when they are a standard part of your discovery process rather than a one-off tool deployed in moments of uncertainty. The goal is to create a habit: no feature enters the development backlog until it has passed a behavioral demand test of some kind. This does not mean every feature requires a full landing page experiment with paid traffic. A five-minute conversation with three users can be a Feature Fake if it surfaces genuine behavioral signal. A pre-announcement email to a segment of your user base can be a Feature Fake. The principle is that demand is validated through behavior, not assumption.
In practice, integrating Feature Fakes into your sprint rhythm means allocating time in your discovery backlog for lightweight experiments alongside the delivery backlog for engineering work. The two tracks run in parallel: as engineering builds what has already been validated, product and design are validating the next wave of features using Feature Fakes and other low-fidelity experiments. This dual-track approach — sometimes called Dual-Track Agile — keeps the pipeline full of evidence-backed features while preventing the team from building things that users do not actually want. The Feature Fake is not a hurdle to slow the team down. It is the mechanism that ensures the team's speed is directed toward real problems.
The Bottom Line
Building is an act of commitment. Every feature you ship represents a decision to spend team capacity, absorb maintenance overhead, and add complexity to your product. The Feature Fake gives you a way to make that commitment with evidence rather than assumption. Whether it takes the form of a button to nowhere, a landing page, or a pre-announcement email, the discipline is the same: represent the feature as realistically as possible, expose it to real users, and measure how they behave. If demand is real, build with confidence. If it is not, you have saved your team weeks of work and your product from another zombie feature.
Related Posts from Sense & Respond Learning
The Truth Curve: How to Choose the Right MVP Fidelity for Your Idea
The 'Wizard of Oz' MVP: Simulating AI and Automation Manually
Writing Better User Stories: Why You Need 'Hypothesis Statements' Instead
Stop Building 'Zombie' Features: How to Prune Your Backlog with Outcomes
Further Reading & External Resources
Lean UX (3rd Edition) — Jeff Gothelf & Josh Seiden — The definitive guide to hypothesis-driven product design
The Mom Test — Rob Fitzpatrick — How to ask customers the right questions before building
Sprint — Jake Knapp et al. — A five-day process for answering critical business questions through prototyping
Want to go deeper? This post is part of the Sense & Respond Learning resource library — practical frameworks for product managers, transformation leads, and executives who want to lead with outcomes, not outputs.
Explore the full library at https://www.senseandrespond.co/blog