Thursday, June 13, 2013

Measure your Non-Functional Requirements

Like most people who work in the software industry, I often hate working with non-functional requirements. They're the ugly stepchild of software requirements - forgotten as soon as they're created, difficult to manage, and rarely rearing their heads until the end of the project, when they become a stick to beat the development team with and a cause of late-breaking deployment delays.

I'd like to change that.  I believe it's possible for us to work effectively with non-functional requirements, make them visible, and do more to make sure they're met than simply crossing our fingers. We just need to plan for them. 

For purposes of this post, I'm going to define Non-Functional Requirements (now sometimes called Cross-Functional Requirements, often abbreviated NFRs) as follows: NFRs are the set of "things your team is required to deliver" AND the set of "conditions your team must make true" over and above the delivery of working, tested code that implements user-visible features.

Examples of non-functional requirements are documentation requirements ("We need to produce a user guide"), performance requirements ("All page load times need to be 2 seconds or less under production load"), and a variety of "ilities" like supportability, accessibility, deployability, etc. ("All major events need to be logged to a common file," "Every page and message needs to be I18n compliant," etc.)

On "traditional" waterfall projects, the typical handling of NFR's is that in the planning phase, we make a list of all of the NFR's.  Then we record them on a spreadsheet.  Then we hand it to the architect to "keep in mind" when planning out the project.  Then we stop thinking about them until the late-game pre-deployment testing, when cross our fingers and see if they're actually met.  Then, when they aren't (and trust me, at least some of them aren't), we argue about whether we live with it or slip the date.

On "agile" projects, the process is largely the same up until we record them on the spreadsheet.  Then we....well...we're not sure.  In theory, some of these become "coding standards" that we make people aware of (but often don't enforce).  Maybe we'll remember to cross our fingers and test before we deploy.  Maybe we'll wait until someone complains after we go live to say "oh, yeah...we should fix that."  Regrettably, since they fit poorly into our "focused on features" development process, they're easily ignored. 

The real problem with non-functional requirements is that they tend to play by different rules than "everything else" we're deciding to build.  I see two key differences. 

First, non-functional requirements have a "cost visibility" problem.  It's hard to estimate "how much" a requirement like quick page load times will cost the project, because it (generally) increases with the number of pages we build.  It's hard for us to give customers visibility into tradeoffs like "how many other features will I need to cut in order to include ADA compliance?"

Second, most non-functional requirements have a "focus" problem.  On an Agile team, we're constantly looking at some kind of board with all the "in flight" functional features for the current iteration/sprint/cycle.  But because (most) non-functional requirements span all features, they're never visible on the board - they're "universal acceptance criteria" for all stories/features.  And like everything else that's "boilerplate" to each item, we stop thinking about them. 

So, what do we do about non-functional requirements?  I've had some success with a two-part approach.

First, we need to separate non-functionals into "playable requirements" and "standards."  Some requirements are genuinely "do once and then you're done" items that look and act just like User Stories/Features.  An example would be "we need to build a pre-production staging environment with data copied regularly from production."  That's a thing we could choose to build in, say, Iteration 3, and once it's done we don't have to build it again. I tend to treat these like "standard" requirements - estimate them, assign them an iteration, and run through the normal process.

Then, we have the requirements left that are NOT "one and done."  They're the ones that span all stories.  For these requirements, I focus on having a METRIC.  I have a conversation with the team (INCLUDING the product owner) where I ask, "How do we want to ensure, over the course of the project, that this requirement is met?"

For example, let's say our requirement is that "all page load times will be less than 2 seconds under production load."  Great - how do we ensure we meet that requirement?  There are a number of ways we could in theory ensure that.  At one end of the spectrum, we could say "OK, we'll build a dedicated, prod-like load test environment, on a clone of prod hardware, along with a dedicated pool of load generation machines.  We'll also build a set of load test scripts that test every major function of the system.  Every check-in that passes CI gets auto-deployed to this environment and load tested." 

That's probably the most robust possible testing we could have to meet the load requirement.  Unfortunately, it's also expensive and time-consuming to build.  Is this requirement worth it?  If not, what might we do that's less than this?  Maybe we'll just build some JMeter scripts that run in CI to collect directional performance metrics we'll watch over time - it won't tell us DEFINITIVELY that we'll perform under load, but it will tell us if we're getting worse/slipping (a rough sketch of that kind of check is below).  Riskier, but cheaper.  Maybe we'll periodically have the testing team step through some key manual scenarios with a stopwatch and measure times.  Or maybe we'll choose NOT to invest in load testing at all - we'll accept the risk that we might not meet this requirement, because the load numbers are expected to be small and the technology we've chosen for this project is the same technology we're using elsewhere at far greater volumes, so we think the risk is very small.
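
To make that middle option concrete, here's a minimal sketch of the kind of directional check a team might run in CI - assuming a Python toolchain and the requests library, with placeholder QA URLs, sample counts, and a 2-second budget standing in for whatever the real test plan (JMeter or otherwise) would cover.

    # Hypothetical sketch of a "directional" performance check run in CI.
    # It doesn't prove we'll meet the 2-second requirement under production
    # load; it just flags regressions. URLs and thresholds are placeholders.
    import sys
    import time
    import requests

    PAGES = [
        "https://qa.example.com/",
        "https://qa.example.com/search",
        "https://qa.example.com/account",
    ]
    BUDGET_SECONDS = 2.0  # the NFR we're tracking
    SAMPLES = 5           # average a few requests to smooth out noise


    def measure(url):
        """Return the average response time for a handful of GET requests."""
        timings = []
        for _ in range(SAMPLES):
            start = time.monotonic()
            requests.get(url, timeout=10)
            timings.append(time.monotonic() - start)
        return sum(timings) / len(timings)


    def main():
        over_budget = []
        for url in PAGES:
            elapsed = measure(url)
            print(f"{url}: {elapsed:.2f}s (budget {BUDGET_SECONDS:.1f}s)")
            if elapsed > BUDGET_SECONDS:
                over_budget.append(url)
        # Exiting nonzero makes CI surface the slip instead of letting it hide.
        sys.exit(1 if over_budget else 0)


    if __name__ == "__main__":
        main()

The numbers this produces aren't proof we'll hold up under production load; the value is that the check runs on every build and publishes a trend the team can see slipping.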

The point is we have these discussions as a team, and INCLUDE the product owner.  This gets us over the "cost visibility" problem - if you want the team to have extremely high confidence that this requirement is met, what will that cost us?  Are we willing to invest that much?  Or are we willing to take on more risk at a lower cost?

Once we've decided how we'll measure our compliance, we need to make it "part of the plan."  Let's say our plan for performance testing was for a tester to manually go through the application once an iteration with a stopwatch and measure response times in the QA environment.  Great!  We put a card on our wall for "Iteration 4 performance test," and (whenever in I4 we feel it's appropriate) have a person do that test and publish the results.  If they look good, we've got continued reassurance that we're in compliance.  Publishing the results also keeps the requirement a visible "focus," so we remember to think about it for every story.  If we find an issue, we add a card to the next iteration to investigate it and get us back on track.

You can have similar conversations around things like a user guide.  How will the team produce this document?  One option is to say we won't do anything during development.  Instead, we'll engage a tech writer at the end of the project to look at the app and write the guide.  That would work, but it means we'll have a gap at the end of the project between "code is done" and "we're in production." 

Another approach would be to build this up over time - with every story, we agree that someone (maybe the analyst, maybe the developer, maybe the tester) will update the user guide to cover whatever new thing we built.  Thus, we build up a guide over time. 

This is great in theory, but again, how will we ensure the team's doing this?  Are we going to periodically review the user guide?  Is the user guide going to be presented/reviewed in our regular demo meetings?  Are we going to add this to the testers' checklist for "what do I need to validate before I sign off on a feature?"  The goal is to make our decision explicit, make sure everyone understands the level of investment we expect, and agree on how we're going to demonstrate our compliance.

Having explicit conversations about our investment in non-functional requirements, and setting explicit metrics, can turn them from vague, semi-forgotten dictums into common-sense cost/benefit tradeoffs made between the product owner and the development team.  We can take them from items we don't think about until the end of the project to something that's constantly visible.  And we can turn them from a cause of massive heartburn and late schedule slips into a source of pride and confidence for the team.
