The Original Definition — and Why It Has Been Distorted
The term Minimum Viable Product comes from Eric Ries's book The Lean Startup, published in 2011. Ries defines an MVP as the version of a new product that allows a team to collect the maximum amount of validated learning about customers with the least effort. The operative word is "validated." An MVP is not a cheap product. It is a learning instrument — an experiment designed to test a specific hypothesis about user behaviour or market acceptance.
In practice, the term has shifted. "We're building an MVP" has become, in many organisations, a euphemism for: we have limited budget, so we're building less. That is a fundamental misapplication of the concept. An MVP that tests nothing and holds no hypotheses is not an MVP — it is simply an inferior product. The difference is not technical but intentional: a genuine MVP is strategically designed to answer a precise question. Everything else is improvisation dressed up in Lean Startup vocabulary.
MVP vs. Prototype — an Important Distinction
MVP and prototype are often used interchangeably, but they describe fundamentally different things. A prototype is a simulation artefact — it looks like a product, behaves like a product, but is not a real product. It can be made of cardboard, a click-through mockup, or a half-built interface. Its purpose is to gather feedback on a concept before anything is built. Prototypes are inexpensive, quick to iterate on, and disposable by design.
An MVP, by contrast, is a real product — or at least a real experience that actual users encounter under real conditions. It may be manually operated, rudimentary, and missing many features — but it must genuinely deliver the core value of the product, not merely simulate it. Dropbox launched its MVP as a simple explainer video: no real product, but a real test. The sign-up list generated after the video launch provided validated market interest. That was neither a prototype nor a finished product — it was a precisely constructed experiment with a measurable outcome.
The Most Common Misconception — MVP as Cheapest Possible Product
Steve Blank, the creator of the Customer Development methodology, emphasises that an MVP always exists in the context of a hypothesis. The case against building an MVP arises when you already know what customers want, because you have demonstrated it through previous iterations, direct market access, or deep domain knowledge. In that case, an MVP yields no additional learning; it only produces a slower launch.
The misconception arises when teams emphasise the "minimum" and ignore the "viable." An MVP must be both minimal and viable: it must solve the core problem well enough that real users can evaluate it seriously. CB Insights, in its analysis of startup failures, found that 35 percent of startups fail because there is no market need for their product. MVPs are meant to surface precisely this insight early, but only when they are designed to actually test market need rather than simply to omit features while building something nobody asked for.
When an MVP Is Right — and When It Is Not
An MVP makes sense when fundamental uncertainties about the market, the user, or the technology exist and can be resolved through rapid validation. Early-stage startups, new products in unfamiliar markets, features with no direct existing equivalent — these are MVP territory. The central question is: what is the riskiest assumption in our business model, and how can we test it with minimum effort? Structuring an MVP around that question transforms it from a cost-cutting exercise into a strategic instrument.
An MVP is the wrong choice when a product enters a mature market against established players that have already set a high quality bar. In that situation, a visibly unfinished MVP signals not agility but a lack of professionalism. Users who already know better alternatives have no patience for a markedly inferior product, regardless of its potential. Here it is more effective to build for longer and launch with a complete product that exceeds the market standard, rather than permanently forfeiting the first impression.
Measuring Success — What Comes After the MVP
An MVP without success criteria is not an experiment — it is a release without an evaluation. Before an MVP is built, the team must determine: which metrics will measure whether the underlying hypothesis is confirmed or refuted? Vanity metrics do not help here. Page views and downloads say little about whether the product delivers real value. Activation rate, retention rate, Net Promoter Score, and direct qualitative feedback from users — these are the indicators that determine whether the experiment has passed or failed.
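The pass/fail evaluation described above can be sketched as a small script. The metric definitions follow the standard formulas (activation and retention as simple ratios, NPS as percentage of promoters minus percentage of detractors); the thresholds and sample numbers are entirely hypothetical, and a real team would pull such figures from its analytics tooling before, not after, launch.

```python
# Sketch: evaluating an MVP hypothesis against success criteria
# that were fixed before launch. All data here is hypothetical.

def activation_rate(signed_up, activated):
    """Share of sign-ups that completed the core action."""
    return activated / signed_up if signed_up else 0.0

def retention_rate(active_week_1, active_week_4):
    """Share of week-1 users still active in week 4."""
    return active_week_4 / active_week_1 if active_week_1 else 0.0

def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical thresholds agreed on before the MVP shipped
criteria = {"activation": 0.40, "retention": 0.20, "nps": 0}

results = {
    "activation": activation_rate(signed_up=500, activated=230),
    "retention": retention_rate(active_week_1=230, active_week_4=60),
    "nps": net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10, 9, 5]),
}

for metric, threshold in criteria.items():
    verdict = "pass" if results[metric] >= threshold else "fail"
    print(f"{metric}: {results[metric]:.2f} ({verdict})")
```

The point of the sketch is the structure, not the numbers: each metric has a threshold committed to in advance, so the outcome is a verdict on the hypothesis rather than a post-hoc rationalisation of whatever the launch produced.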
The result of an MVP is not binary — it is a learning log. What worked? What did not? Which assumptions were confirmed, and which were contradicted? These findings determine the next step: build further, change course, or discontinue the project. Organisations that treat an MVP simply as the fastest way to launch something miss its actual purpose — as a structured instrument for converting uncertainty into knowledge, systematically and with as little wasted effort as possible.