You have a business idea. Maybe it is a new internal tool that would save your team hours every week. Maybe it is a customer-facing product you want to bring to market. Either way, you are facing the same question every business owner faces: how much should I invest before I know if this actually works?
The answer, in most cases, is: less than you think. That is the whole point of an MVP.
What an MVP is, and what it is not
MVP stands for Minimum Viable Product. It is the smallest version of your idea that still delivers enough value to get real feedback from real users.
What an MVP is not:
- A rough, buggy prototype thrown together over a weekend
- A "lite" version of everything you eventually want to build
- A demo that only impresses investors but nobody actually uses
What an MVP is:
- A focused, working solution that solves one core problem well
- Something real users can interact with and give you honest feedback on
- The fastest, cheapest way to learn whether your assumptions are correct
The critical insight: most products fail not because they were built badly, but because they were built for a problem that was not important enough, or for users who did not actually want what was offered. An MVP is how you find out before you spend serious money.
Why skipping the MVP phase is expensive
It is tempting to build the full vision from the start. You know what the product needs to do. You have thought through the features. Why not just build it all?
Here is the problem: every unvalidated assumption is a financial risk.
Imagine you invest six months and €80,000 building a customer portal with five major features. You launch. Users sign up, but they only use one of those features. The other four were based on your best guess about what users wanted. You guessed wrong on four out of five.
This is not an unusual story. It is the norm. Industry research, including the Standish Group's widely cited studies, has repeatedly found that a majority of software features are rarely or never used.
An MVP would have let you test that one core feature for a fraction of the cost, confirm users valued it, and then build the rest with confidence and real data.
What the MVP process actually looks like
Step 1: Identify the single most important assumption
Every business idea rests on assumptions. "Users will pay for this." "Our team will actually adopt this tool." "This will save more time than it costs to run."
Pick the one assumption that, if it turns out to be wrong, would make the whole idea collapse. That is what you test first.
Step 2: Design the smallest test that could validate it
Not a survey. Not a focus group. An actual working thing that real people can use and respond to.
For a software product, this often means:
- A stripped-down version with just the core workflow
- One user type, one core job to be done
- Enough polish that users take it seriously, but no more
Step 3: Get it in front of real users fast
The goal is not perfection; it is speed of learning. Every week you spend building instead of testing is a week of delayed feedback.
At Workbox, we typically have clients interacting with an MVP within 4–8 weeks of starting a project. That is fast enough to learn something meaningful before a significant budget is spent.
Step 4: Measure, not just observe
Define in advance what success looks like. Not "users seem to like it", but something measurable:
- Did users complete the core workflow without help?
- Did they come back a second time?
- Did any of them pay, or express willingness to pay?
If you cannot measure it, you cannot learn from it.
Step 5: Decide: build, pivot, or stop
Based on what you learn, you have three options:
Build: the assumption was correct. Users want this. Invest more and keep going.
Pivot: users engaged with the product, but not in the way you expected. Adjust the approach and test again.
Stop: the assumption was wrong. Users do not value this enough to change their behavior. Cut your losses and move on.
Stopping early is not failure. It is the most valuable outcome an MVP can deliver: it saves you from spending months or years on something the market does not want.
What an MVP typically costs
There is no single answer, but some realistic ranges for common scenarios:
| Type of MVP | Typical investment | Timeline |
|---|---|---|
| Internal business tool (simple) | €8,000 – €20,000 | 4–6 weeks |
| Customer-facing web app | €15,000 – €40,000 | 6–10 weeks |
| Mobile app (single platform) | €20,000 – €50,000 | 8–12 weeks |
| Integration / automation | €5,000 – €15,000 | 3–5 weeks |
These numbers are for a working product that real users can interact with: not a mockup, not a prototype. The goal is always to spend the minimum needed to get a genuine answer to your core question.
Common mistakes that inflate MVP costs
Scope creep: "while we are building X, let us also add Y." Every addition delays feedback and increases risk. Resist it.
Perfectionism: an MVP does not need to be beautiful. It needs to be good enough that users take it seriously. There is a difference.
Building for edge cases: designing for every possible scenario before you know which scenarios actually happen in practice is expensive guesswork.
Skipping user testing: building without showing it to real users at every step means you only find out at the end whether you built the wrong thing.
How to know if your idea is ready for an MVP
Ask yourself:
- Can I describe in one sentence what problem this solves and for whom?
- Can I name 5–10 real people who have this problem today?
- Can I describe what "success" looks like in measurable terms?
If you can answer all three clearly, you are ready. If not, spend a day or two clarifying; it will save weeks of building in the wrong direction.
Thinking about testing an idea? Get in touch: we will help you define the right MVP scope, estimate costs honestly, and get you from idea to real user feedback as fast as possible.