The MoSCoW Debate: Why Tool Effectiveness Must Consider Context

[Image: the “Change My Mind” meme – a man at a desk, feet propped on a board that reads “MOSCOW IS A BAD PRIORITISATION TECHNIQUE! CHANGE MY MIND!”]

What should we do better when we talk about tools?

It’s never the tools.

It’s about the tools in context.

The right tool in the right context is amazing. The right tool in the wrong context is not.

This all sounds self-evident. Obvious. It’s unlikely that anyone will argue with me over this claim.

Except, while absolutely true, it is deeply unhelpful, because it implies that all tools are magically excellent and that every issue with a tool is user error. A somewhat awkward starting point, given that the whole reason for using any of these tools in the first place is to get stuff done within a context. Tools are only ever used within a context.

The excellence (or otherwise) of a tool in the absence of context is simply theoretical.

MoSCoW is a bad tool.

Recently I was reproached for my criticism of the MoSCoW prioritisation tool (you know, the method whereby you sort things into Musts, Shoulds, Coulds, and Won’ts). My claim that the tool is useless (or at least a huge waste of time) was dismissed as user error because I was talking about its ineffectiveness when used to prioritise features. Prioritising features isn’t the original intent of the tool. Thus, I was told, my criticism is invalid.

And yet, in my experience, the vast majority of the time MoSCoW is used on features, and to average effect. In fact, I’ve been racking my brain and I cannot recall an instance where I’ve seen it used to prioritise user needs as originally intended. Using the technique on feature backlogs is an easy mistake to make, as most how-to guides for the tool don’t explain that it should be used for user requirements, not solution requirements, and never features.

But that’s beside the point. My experience of the tool has led me to conclude that it’s mostly a bad tool. Yes, this isn’t the context the tool should be used in (theory). But this is absolutely where the tool is actually used (practice).

But that’s not how the audience saw it. The issues I was raising with MoSCoW were simply user error (because of the wrong context) and had nothing to do with the tool. The tool is perfect, the user is not.

It’s never the tools. All tools are, in theory, perfect.

How we talk about tools and approaches is unhelpful

“In theory there is no difference between theory and practice, but in practice there is” – Yogi Berra

Yogi is right. Since tools are only ever used in context, the effectiveness of any tool should only be evaluated within a specific context. And all discussion about the value of a tool should consider the tool+context pair.

The success or failure of a tool in one context doesn’t predict its success in other contexts (if MoSCoW works in your situation, that doesn’t mean it will work in mine). Any data collected about that tool+context pair is only relevant to that tool in that context, and should not be used to make blanket statements about the tool itself (“MoSCoW is great” or “MoSCoW is terrible”).

And yet, that’s not the usual approach. Much more commonly we see the unhelpful blanket defence of tools (“it’s always user error”) or blanket criticism of tools (“MoSCoW is a bad tool” as I claimed last week).

Both positions miss the point because they ignore context.

But that is not to say that we solve the problem simply by adding a context caveat to our discussions – that would be focusing on the wrong problem. Yes, it is a problem, but it’s not the real problem.

A far better question than “does this tool work in this context?” is “will this tool help us achieve the outcome we need in our context?” Or perhaps even better – but harder to ask and answer – “do we have the capabilities and knowledge to use this tool to help us achieve the outcome that we need in our context?”

Answers to these questions won’t fit into a LinkedIn post, so you’re not likely to see a lot of this being discussed. It is far easier to just trash or celebrate a tool.

A practical guide to (better) thinking about tools

And while it is easy to dismiss all of this as hand-wringing about how we discuss tools, this stuff actually matters. Choosing the wrong tool – or using it poorly – wastes time and effort, and usually makes things worse.

If you want to get better at thinking about and choosing tools, the first step is building the capability to look beyond binary good/bad judgements and identify the underlying characteristics that influence a tool’s effectiveness.

Rather than “MoSCoW is a bad tool”, try to identify what made it not work for you, or why you don’t think it will work in your situation. Was it that the output wasn’t usable for the next step of the process? Or was it that it enabled the loudest person in the room to override more thoughtful and considered voices?

Or think back to the last time you successfully used a tool. Instead of “opportunity canvases are excellent,” try to tease out why they worked well in that context. Was it that you managed to get the product manager and the key stakeholders in the same room to chat it through? Was it that the output was concise, hence easy to digitise and share? Was it the discussion about the ideal customer that really refined the approach?

Now you’re thinking in tool+context pairs.

But perhaps even more important than unpacking context is thinking in terms of outcomes. The clearer you can be on the outcome you’re trying to achieve, the better you’ll be at evaluating the suitability of a tool for a given situation. Are you trying to build stakeholder alignment or define scope? Manage risk or build a roadmap? Communicate impacts or understand them?

If you’re working out which approach to take, you should evaluate the situation starting with the outcome in mind.

Outcome-first thinking

Here’s a simple approach that will help you to pick the right tool in your context.

  1. Start with the desired outcome. What are you actually trying to achieve? What’s the problem here? What does success look like?
  2. Next, analyse the context you’re working in. Who needs to be involved? How risk averse is the organisation? What time pressure are you under? What are the constraints?
  3. Third, be honest about your (collective) capabilities. What do you and your team have the skills to do well? Is any approach off the table because your stakeholders won’t get it? What organisational knowledge can you leverage?

And you should do all of this before you ponder which tool or technique is appropriate.
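To make this concrete, here’s a minimal sketch in Python (with an entirely hypothetical toolbox, outcome labels, and capability tags, not a real catalogue of techniques) of what it looks like when you write down the outcome, context, and capabilities before shortlisting a tool:

    from dataclasses import dataclass

    @dataclass
    class Situation:
        """Steps 1-3 of the checklist, written down before naming any tool."""
        outcome: str            # 1. what are we actually trying to achieve?
        constraints: list[str]  # 2. context: stakeholders, risk appetite, time pressure
        capabilities: set[str]  # 3. what we can honestly do well

    @dataclass
    class Tool:
        name: str
        serves: set[str]  # outcomes this tool tends to support
        needs: set[str]   # capabilities it assumes you have

    def shortlist(situation: Situation, toolbox: list[Tool]) -> list[Tool]:
        """Keep only tools that match the outcome AND that we can actually run."""
        return [
            tool for tool in toolbox
            if situation.outcome in tool.serves
            and tool.needs <= situation.capabilities
        ]

    # Hypothetical toolbox and situation:
    toolbox = [
        Tool("opportunity canvas", serves={"stakeholder alignment"}, needs={"facilitation"}),
        Tool("MoSCoW", serves={"drafting an SOW"}, needs={"commercial negotiation"}),
    ]
    us = Situation(
        outcome="stakeholder alignment",
        constraints=["risk-averse org", "two weeks"],
        capabilities={"facilitation"},
    )
    print([tool.name for tool in shortlist(us, toolbox)])  # ['opportunity canvas']

The code is trivial on purpose: the tool is the last thing you reach for, and anything that doesn’t serve the outcome, or that you can’t actually run, never makes the shortlist.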

But don’t get too worried about selecting the perfect tool for the job: the perfect tool is a mirage!

Your only goal should be to find something that helps you move forward, and sometimes that means:

  1. Using a “bad” tool because it’s what your stakeholders like and understand
  2. Adapting a “good” tool to fit your context
  3. Creating something on the fly that works for you

Progress is what matters.

Having better conversations about tools

So the next time someone tells you that a tool is amazing, or terrible, take it all with a grain (or bucket) of salt. Because if they haven’t included an outline of what they were using it for, what they were trying to achieve, the context they were operating within, and how they evaluated success, then it’s all just surface-level nonsense.

And be especially wary of anyone presenting something as a binary good/bad.

But also …

MoSCoW really is a bad tool.

MoSCoW is a bad tool. Yes, I’m arguing the tool. Yes, even knowing all the above. This is the hill I will happily die on. MoSCoW sucks.

Let me explain why …

MoSCoW is a sledgehammer approach to a delicate and nuanced conversation that should focus on user needs, objectives, priorities, and dependencies. Not lines in the sand.

But lines in the sand are precisely what MoSCoW focuses on (“which bucket should we put this thing into?”), and it ignores the rest. And by focusing on the boundaries between the groups, MoSCoW encourages the wrong conversations and the wrong focus, and contributes to bloated backlogs.
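If that sounds abstract, consider what the technique actually reduces to once you strip away the workshop ceremony. A deliberately blunt Python sketch, with a made-up backlog:

    from collections import defaultdict

    BUCKETS = ("must", "should", "could", "wont")

    def moscow(labelled_features: dict[str, str]) -> dict[str, list[str]]:
        """The whole technique: one question per item -- which bucket?"""
        buckets: dict[str, list[str]] = defaultdict(list)
        for feature, label in labelled_features.items():
            if label not in BUCKETS:
                raise ValueError(f"unknown bucket: {label}")
            # No ordering within a bucket, no dependencies, no trade-offs.
            buckets[label].append(feature)
        return buckets

    # A made-up backlog: in practice everything drifts towards "must"
    print(moscow({"export to PDF": "must", "dark mode": "must", "SSO": "must"}))

Nothing in the exercise stops every item landing in “must”, and nothing tells you what to build first once it does. That is exactly how backlogs bloat.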

You might argue that MoSCoW is a good approach if you’ve inherited a large and unwieldy backlog and need a way to quickly sort the nonsense features from the things you might want to think about. Maybe having big blunt buckets to categorise features into could help? Colour coding your junk doesn’t make it any less junky. It will only make you feel better, and even then only if you can’t get away with the far superior approach of burning the backlog to the ground.

MoSCoW is a crap compromise to make you feel better about the chaos you’ve inherited and does little more.

And don’t mistake simply using MoSCoW language (must, should, could, and won’t) in your conversations with stakeholders for actually using the technique. MoSCoW (the tool) doesn’t have a trademark over these common English words.

In short: MoSCoW is crap, with the exception of one very precise tool+context pair:

Commercials! In commercials you really do need to focus on the line in the sand. That’s especially true when you’re drafting statements of work (SOWs) or compiling requirements to use in a procurement process.

However, in all other contexts – and I genuinely do mean all – there are better prioritisation techniques: a value/effort matrix, stack ranking, or even dot voting is likely to deliver better outcomes than the blunt buckets of the MoSCoW technique.
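For contrast, here’s what even the crudest value/effort ranking gives you. The scores below are made up and purely illustrative:

    # Made-up scores, purely illustrative: the output is an ordering,
    # not four buckets, so you can draw the cut-off line anywhere.
    features = {
        "export to PDF": {"value": 8, "effort": 3},
        "dark mode":     {"value": 4, "effort": 2},
        "SSO":           {"value": 9, "effort": 8},
    }

    ranked = sorted(
        features.items(),
        key=lambda kv: kv[1]["value"] / kv[1]["effort"],  # crude value per unit of effort
        reverse=True,
    )
    for name, score in ranked:
        print(f"{name}: value={score['value']}, effort={score['effort']}")

Crude, but it forces the value-versus-cost conversation that four buckets let everyone dodge.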

Disagree?

Change my mind.

Hey, tell me what you think!

Please do hit me up on LinkedIn or by email if you have any thoughts on this! I’m always up for difficult questions, and I’d love to know what you think of this article!