Epic Portfolio Prioritization


Last week, I sent out an overview of key points product teams should consider when making prioritization decisions and promised to follow that issue with deeper dives into specific contexts.

I also encouraged readers to reply with questions and suggest scenarios. One of the emails I received had a great question about prioritization in a scenario I hadn’t considered.

So while I’m working on the deep dives I planned, I thought I’d share that reader’s question and my response.

Could frameworks work for portfolio prioritization?

Hello Kent,
Thanks for the article. I agree with most of the views but am confused when it comes to your (and Saeed Khan's) thoughts on prioritization frameworks being "practically useless". While not purely objective, it seems like getting closer to a relative, objective measure would be better than a purely subjective (often HiPPO-driven) decision.
Can you further elaborate?
The context that I am picturing in this case ... is a number of Epics in a Portfolio Backlog aligned with a team that owns more than one product. Each Epic represents a project and a set of features that could be enhancements to an existing product, or the first release of a new product owned by the team. Portfolio management, working with the business stakeholders and team, needs to determine which Epic should be next.
Thanks!

Hi,

Thanks for writing in, and great question.

Frameworks do offer the promise of a score you can use to rank potential epics and potentially reduce the impact of the Highest Paid Person’s Opinion. I can also relate to the scenario where your team is responsible for more than one product and you have to decide which epics, for which products, to do next. I’m in that same scenario right now.

My personal distaste for using frameworks to tackle that particular problem is that they can distract you from doing the necessary research and asking the hard questions required to decide what not to do.

More specifically:

  • With frameworks, there’s work involved in coming up with the factors and the scales you use to score candidate epics. On the surface that may look like a one-time effort, but I’d suggest that if you were going to use a framework, you should occasionally reflect on and adjust the scoring based on experience.
  • Then there’s some time involved in scoring each epic. Sure, you can score the epics quickly, but then are you just going through the motions for the sake of coming up with a score, or are you really doing what you need to do to understand the purported benefits of the epic? My experience with story points heavily influenced my view of prioritization frameworks that have you score epics. I find value in sizing discussions when they help the team better understand the backlog item being sized. I start to tune out when the conversation turns into an argument about the difference between a set of artificially contrived numbers (is this a 3 or a 5?).
  • The resulting scores are more often used for sequencing efforts than for actually deciding not to do some of them.

To be fair, some of those problems arise from how teams use frameworks, not from the frameworks themselves. But I’ve seen that type of behavior enough times to know it’s awfully tempting to go down that route.

Instead of using a framework in my current scenario, here’s what I’m doing:

  1. Start with the organization’s priorities and make sure I understand what they mean.
  2. Work with the Director of the business unit to develop a Product Strategy that states where we want to focus to achieve those priorities.
  3. Use that Product Strategy to develop decision filters I can use to say yes or no to each of the epics suggested.
  4. For the epics we said yes to, determine which one will have the biggest impact and run as quick an experiment as possible to test our assumptions.

In other words, we spend the time we would have spent scoring epics against arbitrary measures doing a bit of benefit estimation and making explicit do/don’t-do decisions.
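To make the decision-filter step a little more concrete, here’s a rough sketch in Python. The strategy questions, epic names, and yes/no answers are all hypothetical; in practice the filters are questions we ask out loud in prioritization conversations, not code.

```python
# A hypothetical sketch of decision filters: yes/no questions derived from the
# product strategy, asked of each suggested epic. Names and answers are made up.

from dataclasses import dataclass

@dataclass
class Epic:
    name: str
    helps_renewals: bool        # made-up strategic focus #1
    reduces_support_load: bool  # made-up strategic focus #2

decision_filters = [
    ("Will this help existing customers renew?", lambda e: e.helps_renewals),
    ("Will this reduce the support load on the team?", lambda e: e.reduces_support_load),
]

suggested_epics = [
    Epic("Self-serve onboarding", helps_renewals=True, reduces_support_load=True),
    Epic("New reporting module", helps_renewals=True, reduces_support_load=False),
    Epic("Rebrand the admin UI", helps_renewals=False, reduces_support_load=False),
]

# Say yes only to epics that pass at least one filter (the exact rule is a
# judgment call); everything else becomes an explicit "don't do".
for epic in suggested_epics:
    answer = "yes" if any(check(epic) for _, check in decision_filters) else "no"
    print(f"{epic.name}: {answer}")
```

The point isn’t the code; it’s that every suggested epic gets an explicit yes or no against the strategy before any sequencing happens.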

I don’t start out with the assumption that we have to do every epic. Rather I focus on what we’re trying to accomplish and then pick the epics that we think will help us accomplish that objective soonest.

I didn’t have this scenario as one I was going to cover, but I think I’ll add it in. (And here you go.)

So with all that said, I’ll readily admit that my outlook on frameworks could be wrong, and I should always be seeking real-life experiences where they’ve been effective. I’d love to hear about how you’re handling your current scenario and what you’ve found to work well, and not so well.

Some additional thoughts

What made the question even more interesting was the reader’s thoughtful response and additional questions. Here they are with my responses/clarifications.

Story Points

Since you mentioned story points: for me, story points are mostly valuable for getting the team “generally” on the same page. A 3 versus a 13 is worth discussing, but debating a 3 or a 5 is a waste of time. Go with the higher value and move on.

On the last team where we were really doing sizing, we ended up going with Small, Medium, Large, and Holy Cow in the sizing discussions, then converted those to numerical values (Small = 1, Medium = 5, Large = 8). For “Holy Cow,” we talked further about how to break it down.

The team saw value in the conversation and further understanding.

We converted to numbers because those outside the team were expecting ongoing velocity measurements.

The facade pattern writ large.

Determining which epic will have the biggest impact

For Step 4, how do you ... "determine which one will have the biggest impact"?
This is where a framework (WSJF or similar) could be helpful.
In addition to impact (value), should we consider effort? Experiments take time and effort.
Could it be better to take on an epic that is of even lower impact (value) if it can be completed more quickly and the value be realized sooner?
Even if we said yes to an epic, if it keeps getting pushed down the list relative to the others ... maybe ends up really being a no.

To be honest, this is often based on instinct, but the inputs to that instinct are definitely the anticipated benefit and a rough sense of how “big” the epic is. So if you want to count a very rough “bang for the buck” analysis as a framework, you’ve got me. I’d rather tie approximate impact to actual metrics (we think option A will move retention by 10%, while option B will move it by 15% but is twice as big; let’s try A first because we’ll find out faster).
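To spell out that back-of-the-envelope comparison (with relative sizes I’m inventing to match the “twice as big” framing):

```python
# Rough bang-for-the-buck comparison for the example above.
# Sizes are in arbitrary relative units; only the ratio matters.
options = {
    "A": {"retention_lift_pct": 10, "relative_size": 1},
    "B": {"retention_lift_pct": 15, "relative_size": 2},
}

for name, o in options.items():
    per_unit = o["retention_lift_pct"] / o["relative_size"]
    print(f"Option {name}: {per_unit:g} points of retention per unit of effort")

# A comes out ahead on impact per unit of effort (10 vs 7.5), and being
# smaller, it also tells us whether our assumption holds sooner.
```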

I haven’t used WSJF because it goes back to the subjective ratings, and I’m irrationally allergic to it because it comes from SAFe.*

I can see where you initially decide "yes" to an epic, but it won’t be the first one you implement. It may get pushed down because things you did before got you to where you need to be, or because more important challenges keep coming up.

In either case, it’s ok to change your mind and not do something, especially if your users/customers are getting what they need. Why do the extra work?


*And here’s where I get to do a correction.

As I was lightly editing this exchange for the newsletter, I thought it would be a good idea to confirm my statement about WSJF coming from SAFe.

I was wrong.

WSJF (which stands for Weighted Shortest Job First) predates SAFe by a few years. Don Reinertsen described Weighted Shortest Job First as it’s used in product development in his 2009 book The Principles of Product Development Flow.

SAFe did what it often does: it picked up a good technique, morphed it into something not so good, and then incorporated it into its Cheesecake Factory-sized menu of techniques.

There is a chance that WSJF, done properly, can be a useful tiebreaker, as the reader suggests. I don’t personally use it, but your mileage may vary.

If you want to try out WSJF, refer to Joshua Arnold’s description, not the definition on the SAFe website.
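If it helps, here’s a minimal sketch of the calculation as Reinertsen and Arnold describe it: cost of delay divided by duration (Arnold calls this CD3). The epic names, dollar figures, and durations below are entirely made up for illustration.

```python
# Minimal sketch of Weighted Shortest Job First as Reinertsen/Arnold describe it:
# score = cost of delay / duration. All names and numbers below are hypothetical.

epics = [
    # (name, cost of delay in $ per week, estimated duration in weeks)
    ("Billing revamp", 30_000, 6),
    ("Onboarding flow", 12_000, 2),
    ("Analytics export", 8_000, 4),
]

def wsjf(cost_of_delay_per_week: float, duration_weeks: float) -> float:
    """Cost of delay divided by duration; higher scores go first."""
    return cost_of_delay_per_week / duration_weeks

for name, cod, weeks in sorted(epics, key=lambda e: wsjf(e[1], e[2]), reverse=True):
    print(f"{name}: WSJF = {wsjf(cod, weeks):,.0f}")
```

Note that in this version cost of delay is expressed in money per unit of time rather than a relative rating scale, which, as I understand it, is much of what gets lost in the SAFe variant.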


Thanks for Reading

Thanks again for reading InsideProduct.

If you have any comments or questions about the newsletter, or there’s anything you’d like me to cover, just reply to this email.

Talk to you next week,

Kent J. McDonald
Founder | KBP.Media

