Thursday, June 13, 2013

Measure your Non-Functional Requirements

Like most people who work in the software industry, I often hate working with non-functional requirements.   They're the ugly stepchild of software requirements - forgotten as soon as they're created, difficult to manage, and generally never rearing their heads until the end of the project, when they become a stick to beat the development team with and a cause of late-breaking deployment delays.

I'd like to change that.  I believe it's possible for us to work effectively with non-functional requirements, make them visible, and do more to make sure they're met than simply crossing our fingers. We just need to plan for them. 

For the purposes of this post, I'm going to define Non-Functional Requirements (now sometimes called Cross-Functional Requirements, often abbreviated NFRs) as follows:  NFRs are the set of "things your team is required to deliver" AND the set of "conditions your team must make true" over and above the delivery of working, tested code that implements user-visible features.

Examples of non-functional requirements are documentation requirements ("We need to produce a user guide"), performance requirements ("All page load times need to be 2 seconds or less under production load"), and a variety of "ilities" like supportability, accessibility, deployability, etc. ("All major events need to be logged to a common file," "Every page and message needs to be I18n compliant," etc.)
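
That logging example is typical of the kind of NFR that's easy to state and easy to forget.  One way to make it concrete is to give the team a single, shared place to log through.  Here's a minimal sketch in Python - the logger name and file path are invented for illustration, not taken from any real project:

```python
import logging

def get_event_logger(log_path="/var/log/myapp/events.log"):
    """Return a shared logger so every major event lands in one common file."""
    logger = logging.getLogger("myapp.events")
    if not logger.handlers:  # don't add duplicate handlers on repeated calls
        handler = logging.FileHandler(log_path)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(levelname)s %(name)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

# Anywhere a "major event" happens, the team logs through the common logger
# instead of ad-hoc print statements or per-module log files.
get_event_logger().info("order_submitted order_id=12345 total=96.78")
```

Agreeing on something this small also gives the team a thing they can check in code review, which is exactly the "how will we ensure it" question the rest of this post is about.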

On "traditional" waterfall projects, the typical handling of NFR's is that in the planning phase, we make a list of all of the NFR's.  Then we record them on a spreadsheet.  Then we hand it to the architect to "keep in mind" when planning out the project.  Then we stop thinking about them until the late-game pre-deployment testing, when cross our fingers and see if they're actually met.  Then, when they aren't (and trust me, at least some of them aren't), we argue about whether we live with it or slip the date.

On "agile" projects, the process is largely the same up until we record them on the spreadsheet.  Then we....well...we're not sure.  In theory, some of these become "coding standards" that we make people aware of (but often don't enforce).  Maybe we'll remember to cross our fingers and test before we deploy.  Maybe we'll wait until someone complains after we go live to say "oh, yeah...we should fix that."  Regrettably, since they fit poorly into our "focused on features" development process, they're easily ignored. 

The real problem with non-functional requirements is that they tend to play by different rules than "everything else" we're deciding to build.  I see two key differences. 

First, non-functional requirements have a "cost visibility" problem.  It's hard to estimate how much a requirement like quick page load times will cost the project, because the cost (generally) increases with the number of pages we build.  It's hard for us to give customers visibility into tradeoffs like "how many other features will I need to cut in order to include ADA compliance?"

Second, most non-functional requirements have a "focus" problem.  On an Agile team, we're constantly looking at some kind of board with all the "in flight" functional features for the current iteration/sprint/cycle.  But because (most) non-functional requirements span all features, they're never visible on the board - they're "universal acceptance criteria" for all stories/features.  And like everything else that's "boilerplate" to each item, we stop thinking about them. 

So, what do we do about non-functional requirements?  I've had some success using a two-part approach.

First, we need to separate non-functionals into "playable requirements" and "standards."  Some requirements are genuinely "do once and then you're done" items that look and act just like User Stories/Features.  An example would be "we need to build a pre-production staging environment with data copied regularly from production."  That's a thing we could choose to build in, say, Iteration 3, and once it's done we don't have to build it again.  I tend to treat these like any other requirement - estimate them, assign them to an iteration, and run them through the normal process.

Then we're left with the requirements that are NOT "one and done" - the ones that span all stories.  For these requirements, I focus on having a METRIC.  I have a conversation with the team (INCLUDING the product owner) where I ask them "how do we want to ensure, over the course of the project, that this requirement is met?"

For example, let's say our requirement is that "all page load times will be less than 2 seconds under production load."  Great - how do we ensure we meet that requirement?  There are a number of ways we could in theory ensure that.  At one end of the spectrum, we could say "OK, we'll build a dedicated, prod-like load test environment, on a clone of prod hardware, along with a dedicated pool of load generation machines.  We'll also build a set of load test scripts that test every major function of the system.  Every check-in that passes CI gets auto-deployed to this environment and load tested." 

That's probably the most robust possible testing we could have to meet the load requirement.  Unfortunately, it's also expensive and time-consuming to build.  Is this requirement worth it?  If not, what might we do that's less than this?  Maybe we'll just build some JMeter scripts that run in CI to collect directional performance metrics we'll watch over time - it won't tell us DEFINITIVELY we'll perform under load, but it will tell us if we're getting worse/slipping.  Riskier, but cheaper.  Maybe we'll periodically have the testing team step through some key manual scenarios with a stopwatch and measure times.  Or maybe we'll choose NOT to invest in load testing at all - we'll accept the risk that we might not meet this requirement, because the load numbers are expected to be small and the technology we've chosen for this project is the same technology we're using elsewhere at far greater volumes, so we think the risk is very small.
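
To make that middle option concrete: even a crude homemade script captures the "directional metrics" idea - time a handful of key pages on every build, save the numbers, and flag when we're slipping.  Here's a rough sketch in Python (the URLs, file name, and threshold are made up for illustration; a real JMeter setup would be more thorough):

```python
import json
import os
import time
import urllib.request

# Hypothetical list of key pages to time on every CI build.
PAGES = [
    "http://qa.example.com/",
    "http://qa.example.com/shop",
    "http://qa.example.com/cart",
]
HISTORY_FILE = "perf_history.json"   # assumed location for the saved timings
ALLOWED_SLOWDOWN = 1.25              # flag pages that got 25% slower

def time_page(url, samples=3):
    """Average wall-clock time (in seconds) to fetch a page a few times."""
    total = 0.0
    for _ in range(samples):
        start = time.time()
        urllib.request.urlopen(url).read()
        total += time.time() - start
    return total / samples

def main():
    current = {url: time_page(url) for url in PAGES}

    previous = {}
    if os.path.exists(HISTORY_FILE):
        with open(HISTORY_FILE) as f:
            previous = json.load(f)

    # Compare against the last run to see if we're trending the wrong way.
    regressions = [
        url for url in current
        if url in previous and current[url] > previous[url] * ALLOWED_SLOWDOWN
    ]

    with open(HISTORY_FILE, "w") as f:
        json.dump(current, f, indent=2)

    for url, seconds in current.items():
        print(f"{url}: {seconds:.2f}s")
    if regressions:
        print("WARNING: pages noticeably slower than last run:", regressions)

if __name__ == "__main__":
    main()
```

Run from CI after each deploy to QA, something this simple won't prove we'll survive production load, but it will tell us, build over build, whether we're getting worse.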

The point is we have these discussions as a team, and INCLUDE the product owner.  This gets us over the "cost visibility" problem - if you want the team to have extremely high confidence they'll meet this requirement, what will that cost us?  Are we willing to invest that much?  Or are we willing to take on more risk at a lower cost?

Once we've decided how we'll measure our compliance, we need to make it "part of the plan."  Let's say our plan for performance testing was for a tester to manually go through the application once per iteration with a stopwatch and measure response times in the QA environment.  Great!  We put a card on our wall for "Iteration 4 performance test," and (whenever in I4 we feel it's appropriate) have a person do that test and publish the results.  If they look good, we've got continued reassurance that we're in compliance.  Publishing the results also reminds the team this is a "focus," so we remember to think about it for every story.  If we find an issue, we add a card to the next iteration to investigate it and get us back on track.

You can have similar conversations around things like a user guide.  How will the team produce this document?  One option is to say we won't do anything during development.  Instead, we'll engage a tech writer at the end of the project to look at the app and write the guide.  That would work, but it means we'll have a gap at the end of the project between "code is done" and "we're in production." 

Another approach would be to build this up over time - with every story, we agree that someone (maybe the analyst, maybe the developer, maybe the tester) will update the user guide to cover whatever new thing we built.  Thus, we build up a guide over time. 

This is great in theory, but again, how will we ensure the team's doing this?  Are we going to periodically review the user guide?  Is the user guide going to be presented/reviewed in our regular demo meetings?  Are we going to add this to the testers' checklist for "what do I need to validate before I sign off on a feature?"  The goal is to make our decision explicit, make sure everyone understands the level of investment we expect, and agree on how we're going to demonstrate our compliance.

Having explicit conversations about our investment in non-functional requirements, and setting concrete metrics, can turn NFRs from vague, semi-forgotten dictums into common-sense cost/benefit tradeoffs made between the product owner and the development team.  We can take them from items we don't think about until the end of the project to something that's constantly visible.  And we can take them from a cause of massive heartburn and late schedule slips into a source of pride and confidence for the team.

Thursday, June 6, 2013

Stop expecting your customers to know how to solve their problems

One of the most important things a business analyst needs to understand is this:
Your users are not (in most cases) skilled application designers.

Your users are people trying to check a bank balance, or order Season Three of The Wire on DVD, or add a new employee to the payroll system.  Most of them are not technologists.  Seems pretty obvious, right?

So why am I bringing it up?  Because very often, business analysts don't recognize the implications of this fact.  Your users are good at finding problems with your system.  They're good at evaluating potential solutions.  They're good at telling you an implemented solution solved their issue.  But what they shouldn't be expected to be good at is determining exactly what that solution should look like.

A business analyst needs to be an analyst, not a short-order cook taking tickets.

Let's consider an example.  A user of our grocery delivery website has a fixed food budget each week, and it's important for him not to exceed that budget when ordering.  However, we don't show him any information in the shopping path about how big his order currently is, so he doesn't know if he can afford the T-bone this week or needs to settle for burgers.  We do keep track of the current value of the order, but it's on the "View shopping cart" page, so our user has to keep flipping back and forth between the shopping page and the cart page to keep tabs on his order.

Sounds good so far, right?  There's a real problem here, and it's probably one that can be solved with technology.

The trouble is that this isn't usually how the problem is presented to us.  Very often, our users (in the process of being frustrated by something) will envision an idea that could solve the problem for them.  And so what we get from the user will be a "feature request" that looks something like this: "When I go to add a new item to my shopping cart, I want a popup that says 'This will make your total order $X.  Are you sure you want to add this item?'"

The wrong approach to this feature request is to ask "OK, what color do you want that popup?"  Then add the request to the backlog and build it.  The user asked for it!  It's a user requirement!

A better approach is to start with the request, and work with the customer to understand the reason for that request.  "OK, so help me understand how this popup makes life easier for you." "Well, I have a fixed budget, and I need to know if the item I'm adding is going to put me over that budget."  "OK, and you need something new because you don't have a way to see that today?" "Right - the only place I can get the current order total is on the Shopping Cart page, and it's a pain to keep flipping back and forth to a different page.  I need to keep track of this while I'm shopping." 

Aha.  Now we have the most important piece of a user story - the goal the user is trying to accomplish.

The feature injection approach to requirements is really useful here.  You start by "hunting the value," then build the features you actually want to implement off of that.  To borrow one of their techniques, I might write a user story for my customer's request "value first" in this case - "In order to keep my order within my set budget, as a shopper I need a way to keep track of my order value from within the shopping path."

Now that we have the goal, we can leverage our skilled development team to come up with a range of ideas on how to meet that goal.  Instead of showing everyone using the site a popup every time they try to add an item, what about just showing a running total price in the upper right corner under the shopping cart icon?  What about flashing a notification after each added item in the lower right like "Added 12 oranges for $5.68.  Order total is $96.78."?  What about allowing the user to expand the shopping cart contents from within the shopping path to see what's currently in there?  Now that I have some possible solutions, I can circle back with the user, and we can evaluate the best way to solve their problem. 
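
Just to show how modest the actual change might be once we know the goal, here's a rough sketch (in Python, with invented names - no real grocery site involved) of the idea behind most of those options: have the "add item" action itself return the running order total, so the shopping page can display it without a trip to the cart page.

```python
from dataclasses import dataclass, field

@dataclass
class CartItem:
    name: str
    unit_price: float
    quantity: int

@dataclass
class ShoppingCart:
    items: list = field(default_factory=list)

    def total(self):
        """Current value of the order."""
        return sum(i.unit_price * i.quantity for i in self.items)

    def add_item(self, name, unit_price, quantity):
        """Add an item and return the running total, so the shopping page
        can show 'Order total is $X' without leaving the shopping path."""
        self.items.append(CartItem(name, unit_price, quantity))
        return self.total()

# Example: the shopping page can surface the returned total however we
# decide - corner widget, flash notification, expandable cart, etc.
cart = ShoppingCart()
cart.add_item("T-bone steak", 12.99, 1)
total = cart.add_item("Oranges", 0.47, 12)
print(f"Added 12 oranges for ${0.47 * 12:.2f}. Order total is ${total:.2f}.")
```

The point isn't this particular code - it's that once we know the goal, the team can weigh several cheap ways to meet it instead of building the popup verbatim.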

So, why do so many projects seem to have an issue with this?  My suspicion is that it's related to the deference that "user requirements" (more on why I hate this term in later weeks) are given in the industry today.  The notion is that there are certain "requirements" the system needs to have, and if we want to uncover them, we just ask the users, and eventually they'll tell us what the system "needs" to do.  In this case, we have a "feature request" that came directly from a user.  It must be a user requirement!  We'll add it to the list, and build it.  What could go wrong?

We need to avoid confusing "what the users need to accomplish their goals" with "what the users' best design for the system looks like."  Users are not great system designers.  That's OK.

Users are really good at feeling pain, and at feeling its absence.  Users need to be valued and listened to.  User feedback on your application needs to be welcomed and acted on.  But that doesn't mean we should blindly expect them to design a software application.  Translating from user pain to user goals to effective solutions that allow users to meet their goals is your job and your team's job.  Expecting users to do that translation into solutions for you isn't valuing your users.  It's abdicating your responsibility, and hiding behind "Hey, I'm just doing what the users asked me to."