Thursday, June 12, 2014

Software Doesn't Have Requirements

I'm back!  Today I'm going to talk about one of the most pernicious myths in software development today.  It's this: Software has Requirements.

This may seem like a surprising thing to take issue with.  People have been talking about "software requirements" since, well, since there's been software.  It's the industry standard term to describe "what we're going to build."  Even leaders in the Agile space talk about software having requirements.  

So why am I picking on the word "requirements"?  Because it's the wrong word.  But more than that, it's a dangerous word.  Thinking about software as having "requirements" affects how we think about our development process in many suboptimal ways.

Software (by and large) doesn't have requirements!

In this article, I want to convince you why "requirements" is the wrong word, how thinking about "requirements" holds back our thinking and collaboration, and propose a better way to think and talk about what we're trying to build. 

Why software doesn't have requirements

I'll admit this isn't a completely new idea.  Among others, Kent Beck wrote about this back in 2004, and Jeff Patton wrote a very influential article on the subject back in 2007.

What do I mean when I say software doesn't have requirements?  I mean that very little of what goes under the heading of "requirements" is really "required" for us to build successful software.

If software has requirements, the first question to ask is "required by what (or whom)?"  And "required for what?"  Something that absolutely must be included for our product to have any hope to succeed in the market?  Something that's the only way to solve the business problem we're trying to tackle?  Something that we're certain we can't omit without risking failure?  How would we even know these things (if they exist)?

What we're really talking about when we talk about requirements are DECISIONS made by potentially fallible humans.  A trusted human, perhaps.  One who has a good understanding of the problem, hopefully.  But it's someone's best understanding.  Not some great truth about the market for software that's been hiding out there waiting for us to discover it.  They're not something that's "required" for success.  They're our BEST UNDERSTANDING of what successful software MIGHT look like.

Why am I belaboring the point here?  Because what we call requirements...aren't.   There might be other things we could do that could succeed.  There might be things we think are required that it turns out we could have done without.  They're not objective fact.  They're subjective opinions.

The closest most projects come to having true "must have" requirements are actually the much maligned, little understood non-functional requirements.  If we're rolling out an app to 10,000 people, it's pretty much required that it can support 10,000 users.  If we're building an app to integrate our HR and accounting data, it's got to be interoperable with our HR and accounting systems.  If we're building software for medical records, it needs to comply (in the US) with HIPAA rules.  This isn't hard-and-fast--many non-functional requirements are still more decision than requirement.  But to the extent true "requirements" exist, they're more likely to be on the non-functional list. 

Why believing in requirements is dangerous

Regardless of what we call them, in any software project there's going to be a set of stuff that we build.  So why do I care so much what we call them?

Because labeling the "stuff we're building" as requirements is actively dangerous to our thinking. 

First, belief that software HAS objective requirements implies we can (and should) discover those requirements.  The requirements exist independent of the team.  Belief in "requirements" is belief in a right answer.  There's a correct "set of stuff" out there.  We just need to figure out what it is.  This implies investing (potentially considerable) time trying to determine and build a list of requirements.

And once we do, we should think the list is largely static, since everything on the list is "required."  After all, if we can decide not to do it, it must not have been "required" in the first place!  Yes, in Agile projects we manage our backlog, and re-prioritize frequently.  We can discover new "requirements" over time.  But (in my experience) we rarely REMOVE items from the backlog.

Thinking about "requirements" drives a wedge between people who ought to be collaborating.  It sets up a distinction between "the smart people who understand what's required" and "the people who implement those requirements."  It's not the team's job to think about what we should be building.  We're just building what's "required."  Requirements, by definition, are non-negotiable. 

Belief in requirements inverts our thinking.  The most important piece of building software is determining what we want to build and why.  But if the "what" is a non-negotiable list, and the "why" is "because it's required," we're telling the team to only focus on the "how."  But the "how" is the least important piece - we can do the wrong thing as well as humanly possible, but it's still the wrong thing. 

In practice, we know this is the wrong way to think about software.  There are usually multiple ways to solve a problem.  There are many different approaches that could potentially succeed in the market.  There's not a "right" answer - there are MANY right answers.

As Fred Brooks wrote in The Mythical Man Month, "The hardest single part of building a software system is deciding precisely what to build."  (quote stolen wholesale from Patton's article)  We need to keep our best efforts focused on solving that problem, not assuming the answer and focusing on less important matters. 

Rather than try to figure out in advance what will work, it's usually a better idea to TRY things and see what works.  This principle underlies the Lean Startup movement, A/B testing, and other experiment-driven approaches.  Don't try to figure out what OUGHT to work.  Don't rely on an expert to DECIDE what should work in advance.  Try something, see what works, and adjust.

Hey, isn't experimentation a way to "discover" requirements?  Sort of.  But only retrospectively.  Believing that we can have "requirements" for software is the belief that there's something knowable IN ADVANCE that tells us where to go.  Experimentation turns this on its head - explore many possibilities, and the ones that work were the right ones.  Even here, it's not clear that the working approaches were "required" - there could be other things that would have worked we didn't try.
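To make "try something, see what works, and adjust" concrete, here's a minimal sketch of an A/B-style experiment in Python.  Everything in it is invented for illustration: the variant names echo the ATM ideas later in this article, and the "true" conversion rates are exactly the thing a real experiment exists to discover, not something we'd know up front.

```python
import random

# A toy simulation of the "try it and measure" approach. The variant
# names and conversion rates are invented for illustration; in a real
# A/B test the true rates are precisely what we DON'T know in advance.
def run_experiment(variants, trials_per_variant=10_000):
    """Show each variant to a sample of users and record the
    observed success rate for each."""
    results = {}
    for name, true_rate in variants.items():
        successes = sum(random.random() < true_rate
                        for _ in range(trials_per_variant))
        results[name] = successes / trials_per_variant
    return results

random.seed(42)  # deterministic, for the sake of the example
observed = run_experiment({
    "atm_finder_app": 0.04,
    "fee_reimbursement": 0.06,
})

# Nobody "required" either idea up front; the data picks the winner.
winner = max(observed, key=observed.get)
```

The point isn't the statistics (a real test would want proper significance checks); it's the shape of the process: no expert decided in advance which idea was "required," and the losing variant was a perfectly legitimate thing to try.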

Problems, Ideas, Decisions, Designs

This would be a pretty boring article if all I was doing was complaining about a word without offering any constructive ideas on how to think about "the stuff we build" differently.  Here are one analyst's thoughts on a better way to think about it.

In my view, there are four major components of the "what do we build?" problem.

Problems

Problems are our current perception of issues that people have that need to be addressed.  They might be based on things someone told us, things we observed, or just things we think could be better.  No matter where they come from, there's some set of "problems" we think exist for some set of people in the world that we could potentially try to address with our software.

There's no guarantee our list of problems is CORRECT.  We might think something's a problem that isn't actually something that's important.  We might not understand a problem that's really important.  Our list of "problems" is our BEST UNDERSTANDING of what is out there to potentially be solved.

If we're thinking about the online banking space, here are some examples of problems we might perceive:
  • When I want to buy something, I don't always know how much money I have.
  • Dealing with paper checks is a hassle.
  • I can't always find an in-network ATM when I want cash. 
  • It's inconvenient to pay monthly bills one at a time on different sites. 
We don't have to solve all of these problems together.  We might decide not to solve some of them at all.  But the list of problems is a good start for the universe of "what issues we MIGHT choose to address."

Problems, by definition, exist independent of any particular solution.  Some problems might have a single solution.  Some might have many.  Some might be insoluble.  When we're identifying problems, we shouldn't care. 

A key thing to avoid when identifying problems is the self-justifying solution.  The absence of a particular solution is never ipso facto a problem.  "I don't have a mobile application that guides me to the closest ATM for my bank" isn't a good problem statement, for two reasons.  First, it pre-supposes one (and only one) solution (a mobile app).  Second, it doesn't tell us anything about WHY we want that solution - what's the issue that causes me to want an ATM finder app? 

Ideas

Smart, creative people can formulate a number of ways we might completely or partially address some of the problems we've identified.  That set of possible solutions are our ideas of things we can potentially do.

There's no expectation that we have a single good idea on how to solve every problem we've identified.  Sometimes we might have no ideas.  For some problems, we might have multiple ideas.  We might have ideas that completely solve a problem, or only partially address it.  Our ideas might be contradictory.  This is OK.

Here are some ideas we might have on how to deal with "I can't always find an in-network ATM when I want cash":
  • Build a mobile app that uses geolocation and a map of known ATM locations to guide me to the closest ATM.
  • Partner with Google Maps to have an option to show our ATM locations as a native overlay.
  • Waive our fees for out-of-network ATM use and reimburse the other bank's fees so our customers can use any ATM without penalty.
  • Build an NFC-based mobile wallet application linked to a bank account so I don't need cash so often.
  • Deploy iBeacons on all our ATM's so they're easier to find.  
  • Partner with a well-known company with a large footprint (e.g. McDonald's) to have an ATM in every one of their stores, increasing our footprint and our visibility.
  • Build mini-ATM's and install one in the home of every customer who requests one.  
Not all our ideas necessarily need to involve software.  Some of our ideas might not be great in combination (would we partner with Google Maps and ALSO build our own native app?)  Some of our ideas might be ridiculously infeasible (mini-ATM's in everyone's home) or cost-ineffective (waiving all our ATM fees).

Our ideas represent the universe of things we think might be worth doing.

Decisions

Obviously, we can't implement every idea.  If we're thinking creatively, we will almost certainly have more ideas than we're capable of implementing.  Some problems are more important than others.  Some ideas are better than others.  Sometimes we need to make a choice between several plausible ideas to address a problem.  These choices are our decisions.

Decisions represent our choices of which of our existing ideas we think are worth implementing and want to implement first.

We don't have to exhaustively decide on every idea we want to implement before we start.  We don't necessarily have to rank every idea we want to do.  It could be enough to start with a single idea to implement (lean-ists would probably recommend this approach).  We don't have to make all our decisions up front (and we probably shouldn't).

Deciding to implement an idea isn't an irrevocable decision.  If we're following an experimental approach, we might determine that an idea we thought was good didn't work out.  That's OK - our decisions can be reconsidered.  

A prioritized backlog of "As X, I want Y, so that Z" user stories can be thought of as the output of our decisions (the ordering of the stories) about our chosen ideas (the "I want" clause of the stories) to address important problems (the "as a" and "so that" clauses).  That's not to say that a good backlog "just implements" my model - while you can produce a backlog this way, the backlog (being the decision output) hides the universe of problems and ideas we did NOT choose to pursue. 

Our decisions represent our evaluation of  which ideas we think are the best way to address our known problems. 
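The problems-ideas-decisions mapping onto a user-story backlog can be sketched as a small data model.  This is a toy illustration, not a proposed tool: the class names and the example story are my own invention, built from the ATM-finder example above.

```python
from dataclasses import dataclass, field

@dataclass
class Problem:
    """A perceived issue, independent of any solution."""
    persona: str   # who has the issue: the "as a" clause
    issue: str     # what we think their problem is: the "so that" clause

@dataclass
class Idea:
    """One possible way to (partially) address a problem."""
    problem: Problem
    proposal: str  # the "I want" clause

@dataclass
class Backlog:
    """Our decisions: the ordered subset of ideas we chose to pursue.
    Everything NOT in `chosen` is invisible here - which is exactly
    the information a backlog hides."""
    chosen: list = field(default_factory=list)  # Ideas, highest priority first

    def as_user_stories(self):
        return [
            f"As {idea.problem.persona}, I want {idea.proposal}, "
            f"so that {idea.problem.issue}."
            for idea in self.chosen
        ]

cash = Problem("a bank customer", "I can get cash when I need it")
finder = Idea(cash, "an app that guides me to the closest in-network ATM")
backlog = Backlog(chosen=[finder])
```

Note what the model makes explicit: a story in the backlog is three separate things welded together - a problem we believe exists, one idea among many for addressing it, and a decision to prioritize that idea.
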

Designs

Once we've decided a given idea is worth implementing, we have to figure out how we're going about it.  These decisions include the fine details of exactly what our idea means, how we'll determine we're successful, the technical design we plan to use, what our solution will look like visually, and how it will fit in with everything else we've built or are planning to build.  Those decisions encompass our design.

Our design can (and should) be informed by what we know or have learned about the problems we're solving, what other ideas have and haven't worked, and potentially some quick lo-fi experimentation we might choose to do as part of designing our solution.

If we're building software, our design encompasses all the things we need to do to translate our idea into high-quality, well-tested code that addresses our chosen problem.   The output of our designs is working software.

As a point of comparison, the "designs" piece is the ONLY piece the team really owns in "requirements-based" thinking.  All the team does is implement other people's choices.  The determination of the problems, generation of ideas, and decisions around which ideas have merit are all part of the "requirements" that come to the team from the outside.

That's not to say designs aren't important - they're the only piece that directly results in working software.  But a good design is only valuable if it represents a good idea on how to solve an important problem.  Keeping the team isolated from the problem determination, idea generation, and decision making makes it very difficult for them to feel invested in the business problem.  And it keeps your skilled, experienced team at arm's length from contributing to those important pieces of the process.

Why change?  

Why go through all the trouble to replace a common, reasonably well understood term?  And why replace it with four concepts that only replace it when taken as a whole? 

The reason I'm advocating changing the terminology is primarily to change our thinking about the most important process in software.  It doesn't matter how well we build the wrong thing.

The thing I like about my terminology is that every one refers to something we shouldn't expect to be static.  We can always identify a new problem.  We can always come up with new ideas.  We can always revisit decisions.  We can always redesign something.

All of these words imply things we should expect to change.  They're all inherently negotiable concepts.  None of them imply a correct answer, or an expectation that a single correct answer exists.  In short, they describe the software development world as we find it - dynamic, indeterminate, complex, and full of valuable, important problems that deserve to be solved.

Thanks to many of my colleagues at ThoughtWorks who gave feedback on a version of these thoughts presented at our North American AwayDay 2014.  


  1. Mike, you are a man after my own heart. I too am an advocate of design thinking, to the point where I decided to call my own blog (sorry, couldn't resist the plug). And we're not alone - there's a ground swell of people starting to say similar things.

    On terminology, I also like the words "idea", "decision" and "design". I'm less keen on "problem" because, like "requirement", it implies that something *needs* to be fixed. Whether or not something is a problem is, in my view, subjective - so if I have to say "problem" I always prefix it with "perceived". Preferably, I refer to the "as-is", the "context" or perhaps an "observation" of the as-is state. Once I've observed the as-is, I can then set an "objective", which has an associated "benefit".

    Another important word for me is "options". As you point out, there are generally many plausible ideas to address a (perceived) problem, so there are choices, or options. In my experience, people often don't consciously generate options (ideas) - especially under the "requirements thinking" model. For me, this is where we can add most value as BAs - rather than just going with the first plausible idea that someone has thought of, we can force the (collaborating) team to understand the underlying objective and then think of *better* ideas (cheaper, simpler, more elegant, less convoluted, whatever).

    As a side note, Alistair Cockburn (one of the authors of the Agile Manifesto) said in a recent article on his own blog that he's been hunting for a replacement noun for "requirement" but he hasn't yet found one - but he *has* replaced the verb phrase "gathering requirements" with "deciding what to build".

    Oh, and you'll need to change your strapline, because you can no longer legitimately claim to be writing a blog about Software Requirements :).

    1. Thanks for the comment. Definitely agree when I talk about "problems," I mean our perception of problems, which are not necessarily accurate or complete. That said, I personally like the word mostly because (in my view) software isn't valuable if it doesn't solve a problem for someone somewhere. That person might not realize they HAVE a problem, but solving problems is (to me) the ultimate measure of success. I kind of like "context," however.

      Definitely agree that one of the biggest challenges we have in software today is generating and exploring ideas/options. It's one of the big dangers around thinking about "requirements" - generally it means one person (often outside the team) is the only one whose job it is to think of possible ideas and pick one.

      Will check out the Cockburn article - thanks!

      As to the strapline, read the whole thing carefully and you'll understand why I'm keeping it as is. :)

    2. Ah, OK, I get it - software requirements are a myth. Nice!

      I agree that mostly we are solving problems, but I think sometimes it can be a limiting perspective. If we only ever look to solve problems, we might miss opportunities for truly creative innovations. Amazon, eBay, Facebook and Twitter didn't solve existing problems, they delivered innovative new services that just made life better in various ways. On a smaller scale, think of Google search features like auto-completion, instant results or "I'm Feeling Lucky". Again, there was no problem to solve, just some clever people generating ideas for how to make their service better.

      Plus, "problem" is a negative word, whereas "opportunity" is a positive word. And I think a bit of positive thinking does us all some good!

    3. @Tony,

      Like so many word choices, it comes down to what you want to play up.

      Agree "problem" has a negative connotation which isn't ideal. However, I like the focus of the word better.

      PEOPLE have problems. ORGANIZATIONS have opportunities.

      So, thinking about "opportunities" could be used for a company to self-justify -- "here's the business case we gave to the VC's, so these are the opportunities we should focus on." Thinking about "opportunities" can lead us to think about what WE want, not what our (future) users need.

      I disagree that autocomplete or "I'm Feeling Lucky" didn't solve problems. Perhaps they solved problems people weren't aware they had, but they were real problems. I'd say autocomplete solved the problem "it sometimes takes a lot of typing for me to enter the search criteria I'm looking for." No one complained about it, but it's a problem Google perceived in the world, and that people would appreciate going away. And they were right.