
How Foundations: Roadmaps prioritization works

Any change you make when scoring *your* user outcomes against competitive dynamics, user adoption, and LOE (level of effort) can and likely will lead to a different placement and recommendation on the roadmap. Let’s walk through an example with one real user outcome we are considering supporting for Foundations:

[Screenshot: Define user value]

We have nine features bucketed under this outcome - this means there are nine things we feel we need to build to make it possible for users to "take external data into consideration while sequencing and prioritizing development and research."
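If it helps to see it as data, here's roughly what that looks like - the shape and names below are just illustrations for this article, not Foundations' internal model:

```typescript
// Illustration only: a user outcome with the features bucketed under it.
interface UserOutcome {
  outcome: string;
  features: string[]; // the things we'd need to build to deliver the outcome
}

const externalDataOutcome: UserOutcome = {
  outcome:
    "Take external data into consideration while sequencing and prioritizing development and research",
  features: [
    /* the nine features bucketed under this outcome would be listed here */
  ],
};
```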

[Screenshot: Sequencing and research]

Now let's move on to how we score this user outcome for competitive dynamics. Yes, the people we work with are already "taking external data into consideration when sequencing and prioritizing development and research." But when it comes to where the data comes from and how it gets pulled into the process, we'd call what they do today a self-created solution or hack.

So in this example, you'd select "yes, self-created solution or hack" and input how they are doing it today.

[Screenshot: User needs definition - How do they achieve this?]
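In code terms, the competitive dynamics answer is a pick from a small set of options. The labels below are our shorthand for this walkthrough, not necessarily the exact wording in the product:

```typescript
// "Does the user do this today?" - how people solve the problem right now.
type CompetitiveDynamics =
  | "no"
  | "self-created solution or hack"
  | "single product or service";

// For this outcome, people do solve it today - with a hack.
const howTheySolveItToday: CompetitiveDynamics = "self-created solution or hack";
```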

Next, because most people we work with do some research to understand their market and the competitive landscape, we’re going to say that over 50% but less than 90% of people would use a solution that seamlessly integrates external data into the roadmapping process.  

[Screenshot: Feature set adoption]
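The adoption score works the same way - you pick a band rather than a precise number (again, shorthand labels):

```typescript
// "Feature set adoption" - what share of the people we work with would use the solution.
type Adoption = "under 50%" | "50% to 90%" | "over 90%";

// Most people we work with already research their market, so we scored this 50%-90%.
const expectedAdoption: Adoption = "50% to 90%";
```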

Finally, this is definitely a hard problem to solve - but because we feel there is a real need for the people we work with, we have been researching data source options and feel confident that with focus we can define an initial data source, test its value, and integrate it into Foundations within 8 weeks.

[Screenshot: How long to build a feature set?]
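Putting the three dimensions together, a scored user outcome in this walkthrough looks something like this (reusing the two types sketched above):

```typescript
// One scored user outcome, combining the three dimensions from this walkthrough.
interface Score {
  competitiveDynamics: CompetitiveDynamics;
  adoption: Adoption;
  loeWeeks: number; // level of effort, in weeks
}

// Our first pass at scoring the external-data outcome.
const firstPass: Score = {
  competitiveDynamics: "self-created solution or hack",
  adoption: "50% to 90%",
  loeWeeks: 8,
};
```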

With the above scoring, supporting this user outcome landed on our initial roadmap: Green = Yes build; Recommendation: Good Space, Good Solution - Speed up. Real people want this, so go faster and ship the solution in less than eight weeks.

[Screenshot: Your roadmap]
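To make the placement logic easier to follow, here is a deliberately simplified decision function. It only reproduces the results in this walkthrough - it is not the production algorithm - and it reuses the Score type and firstPass score sketched above:

```typescript
type Placement = "Green: Yes build" | "Pink: Do more research" | "Red: Don't build";

interface Recommendation {
  placement: Placement;
  note: string;
}

// Simplified decision logic covering only the cases in this walkthrough.
function recommend(score: Score): Recommendation {
  const wanted = score.adoption !== "under 50%"; // at least ~50% would adopt
  const competitorExists = score.competitiveDynamics === "single product or service";

  if (!wanted && competitorExists) {
    return {
      placement: "Red: Don't build",
      note: "People don't want this - the space is too competitive.",
    };
  }
  if (!wanted) {
    return {
      placement: "Pink: Do more research",
      note: "Good Space, Bad Solution - the solution isn't validated yet.",
    };
  }
  if (!competitorExists) {
    return {
      placement: "Green: Yes build",
      note:
        score.loeWeeks >= 8
          ? "Good Space, Good Solution - Speed up."
          : "Good Space, Good Solution.",
    };
  }
  if (score.loeWeeks < 8) {
    return {
      placement: "Green: Yes build",
      note: "Good Space, Good Solution - Execute & differentiate.",
    };
  }
  // Combinations this walkthrough doesn't cover default to more research.
  return { placement: "Pink: Do more research", note: "Not covered in this example." };
}

// firstPass is the scored outcome from the sketch above.
console.log(recommend(firstPass));
// -> Green: Yes build - Good Space, Good Solution - Speed up.
```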

But then, as we started scoping, we reflected further and realized that while we are confident that there is a real need here, we are NOT yet sure that integrating data sources into Foundations is the right solution. So we re-scored just customer adoption from over 50% to less than 50% of people wanting this solution.

[Screenshot: Customer base adoption]

With just this one change to the scoring, you can see the roadmap placement and recommendation change from Green to Pink = Do more research. Recommendation: Good Space, Bad Solution - in other words, we have problem/market fit, but not solution fit yet. While we are still confident that people have this need, we haven't determined with confidence that our solution (a data integration into Foundations) is the right way to meet that need.

[Screenshot: Roadmap]
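Re-running the same simplified sketch with only the adoption band changed shows the flip:

```typescript
// Same outcome, but adoption re-scored to under 50%.
console.log(recommend({ ...firstPass, adoption: "under 50%" }));
// -> Pink: Do more research - Good Space, Bad Solution.
```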

Taking this one step further, let’s now pretend that while we were conducting user research and customer discovery, we saw a new product “The One Stop Shop, Best Market and Competitive Research Product” on Product Hunt and anecdotally heard from lots of our friends and clients that they’re using it and like it. Then we might decide to re-score how people are solving this problem today - from hack to single product or service.  

[Screenshot: Does the user do this today?]

And again, just with this one change, we get yet another recommendation: now the outcome moves to Red = Don’t build. Recommendation: People don’t want this - with a note about the space being too competitive! Now we have all the problems - there is a robust solution out there gaining traction, and we’re still eight weeks away from having our initial version - and we don’t even know if people want our solution. We’re late to market and being outcompeted!

[Screenshot: Build/don't build]
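In the sketch, changing only the competitive dynamics answer - adoption is still scored under 50% - is what tips the outcome into Red:

```typescript
// A real product now exists, and we still haven't shown that over 50% want our solution.
console.log(
  recommend({
    competitiveDynamics: "single product or service",
    adoption: "under 50%",
    loeWeeks: 8,
  })
);
// -> Red: Don't build - People don't want this - the space is too competitive.
```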

Oh no, this leaves us so unhappy, because our ongoing research and discovery work told us that a good 50% of people want market and competitive research seamlessly integrated into their roadmapping process in the way we have been envisioning. So we score adoption back at over 50%:

[Screenshot: Customer base adoption]

And you know what, we believe in delivering this value, even if we’re a little behind the market - this new tool just came out and who knows how good it is - so we think we can go just a bit faster with development, cutting our LOE (level of effort) from 8 to 6 weeks:

[Screenshot: User can seamlessly take external data into consideration]

Now we're back in business with Green = Yes build and a new recommendation: Good Space, Good Solution - Execute & differentiate. We will keep up research and discovery in parallel with development, looking for hidden needs we can solve with our product to distinguish it from the competition.

[Screenshot: Roadmaps]
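And with adoption back in the 50%-90% band and the LOE cut to six weeks, the simplified sketch lands on the same recommendation as the roadmap above:

```typescript
// Competitor exists, but over 50% want our solution and we can ship in six weeks.
console.log(
  recommend({
    competitiveDynamics: "single product or service",
    adoption: "50% to 90%",
    loeWeeks: 6,
  })
);
// -> Green: Yes build - Good Space, Good Solution - Execute & differentiate.
```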

I hope that helps you see the power of the dynamic prioritization and sequencing algorithm in action! If you have any questions or issues, please feel free to reach out to us at [email protected]. You can also message us directly on Discord.

Start using Foundations for free now.
