Thursday, May 16, 2013

INVESTing In User Stories Is Hard

If you're reading this blog, I'm hoping you're somewhat familiar with the concept of user stories (if not, here's a good place to start). 

A key concept around user stories is that they should ideally embody six properties, which are usually represented by the acronym INVEST.  What's NOT often talked about is that creating stories that follow all six of the INVEST guidelines simultaneously is actually pretty hard.  The various "generally good" properties can often be in tension, and getting stories to follow one can often mean trading off another. 

I don't think enough time is spent thinking about why following all the tenets of INVEST is difficult.  I don't think teams always recognize clearly that they ARE making tradeoffs, and that they could make different ones.  And I don't think all teams communicate well about what those tradeoffs are, why they've made the choices they have, and what might cause them to reconsider. 

This week, I want to talk about the most important tensions I see in INVEST.  I want to raise awareness that some of these tradeoffs are hard, and present some ways of thinking about how to make the right tradeoffs for your project.

What's INVEST anyways?  

For those of you who aren't familiar or need a refresher, INVEST represents six properties that "ideal" user stories should have.  They are:
  • INDEPENDENT.  Good stories should not have implicit dependencies on other stories.  This is important because it gives us one of the key advantages of Agile - that our product owner should be free to choose the order in which to play the stories based on business value.  When stories have dependencies, there's an implicit sequencing - you can't play story D until after A, B, and C.  
  • NEGOTIABLE.  Stories should describe the business problem rather than a specific implementation.  This allows negotiation/back-and-forth between the skilled development team and the product owner on what the solution might look like.  Non-negotiable stories lead to "because I said so" development, where the team does what they're told without having input.  It's also a leading cause of "it's what I asked for but not what I need."
  • VALUABLE.  Every story should have a clear tie to why it delivers business value.  If a story doesn't deliver meaningful business value, why should we spend time on it?  This is often expressed in the "so that..." or "in order to..." clause of the one-sentence story formulation.  Stories that don't express value clearly are likely to be deprioritized by the product owner (or, if we actually do them, frustrate the product owner by doing something they don't care about).  Related - "valuable" stories are our method for having "thin threads through the system" - end-to-end slices of useful end-user functionality.  If we don't focus on keeping stories valuable, we can wind up splitting "build the database for X" from "build the UI for X" into different stories, neither of which is actually useful to an end-user. 
  • ESTIMABLE.  Stories should be sufficiently specific that the development team can estimate the relative complexity of implementing that story.  Stories that can't be estimated are difficult to fit into iterations,  because they take away our ability to determine what's "reasonable" in a given timeframe.
  • SMALL.  Stories should be as small as reasonably possible.  A single story that takes 4 weeks to implement is not only difficult to fit into a single iteration, but it also reduces our ability to track our own progress.  It's much easier to track done/not done than it is to track "I'm 45% complete with this story."  Keeping stories small makes planning easier, prioritization easier, execution easier.
  • TESTABLE.  It should be obvious to everyone on the team (including the product owner) when a story is complete and the team can move on to another story.  This means we need agreed-to, testable conditions of completeness.  This is often expressed in terms of acceptance criteria (e.g. an "I will know I am done when..." list, a set of "Given/When/Then" criteria, or any other agreed-on formulation).  
I believe there are a wealth of ways that these items can come into conflict with each other, but here are some of the more common tensions I've seen.  

Small vs. Valuable

The single most common tension I see on teams is trading off Small vs. Valuable.   

"Valuable" stories have a clear tie to something that delivers business value - i.e. we know why the story makes the user's life easier.  "Valuable" is often the Product Owner's domain on the team - the Product Owner is often the person articulating the business value, and needs to understand it to help the team prioritize effectively.

"Small" stories are in a working size that's effective for the team.  As an upper bound, stories should be no larger than can fit in a single iteration.  Most effective Agile teams use stories significantly smaller than that, which helps maintain a healthy workflow (having several smaller stories finish over the course of the iteration is easier to track than one story that's "worked on" all iteration and finishes at the end).

The tension arises because, with a little creative thinking, any story can almost always be split into smaller pieces (i.e. there's no "smallest POSSIBLE size").  Teams can generally choose the granularity they want.  However, the smaller we split the story, the harder it is to maintain a clear grasp on how each individual piece delivers business value.  If we split far enough, we'll pass the point where our Product Owner can understand why a given piece of work is important to deliver value, and we won't be able to prioritize effectively.

For example, let's say we have a story like "As a customer, I want to view my shopping cart online prior to checkout, so that I know what I'm buying."  But that story is estimated at six weeks.  Maybe we break it up into "view order basics," "view items," "view item quantities," "view total price for each item," "view order subtotal," "view order taxes," and "view estimated shipping charges."  OK, still reasonably clear how all those items are valuable.  Then "view estimated shipping charges" has to break down into "view shipping charges for ground items only," "view shipping charges for air-eligible items," "view shipping charges when there are hazardous items," "view shipping charges for international items..."

At some point, we pass the threshold where the set of stories we break down into are so far removed from the thing the product owner actually wants ("I want to see my shopping cart contents before checkout") that it becomes difficult-to-impossible for us to clearly see the value proposition for each one.  Asking the product owner to prioritize the 12 stories related to seeing the shopping cart against the 15 stories for checkout and the 29 stories for selecting items is a very difficult task.  On the other hand, asking the development team to work on a project that has only 3 huge stories (because they're the things the product owner cares about) is likely to have serious planning and workflow issues.

How do we reduce this kind of tension?  First, if nothing else, be aware that the finer we carve stories into pieces, the harder each is to prioritize (and vice versa).  Second, most teams I've seen be successful target a certain story size.  If you're one of those teams, talk about how that's working in your retrospectives.  Don't be afraid to suggest your stories are too large to execute (or so small that we can't get a clear prioritization).  Third, ensure that WHEN you split stories, the team is focused on ensuring that each sub-story is valuable (e.g. do NOT split "build the UI" from "build the business logic").  Fourth, consider how you parallelize stories - is a story that's "too big" something that could have two developers (or two pairs if you're pairing) work on different pieces at the same time?  And if so, does that make more sense than splitting into two smaller stories we need to torture to express as "valuable"?

I think a bigger issue here is whether User Stories are in fact really the right unit for BOTH prioritization and workflow, but that's my article for next week, so stay tuned.

Small vs. Negotiable

A central principle for most Agile teams is to defer decision making until the "last responsible moment" - the moment when we have to make the decision in order to avoid future pain.  Having negotiable stories is the embodiment of this principle - we deliberately elect to defer deciding on implementation details until we've come to the point where we can't reasonably proceed without those details.

As noted above, we can in principle split stories ad infinitum if we wish, and often will do so to ensure stories are small enough to fit our "standard" granularity.  And (as we just discussed) one thing we want to ensure when splitting stories is to keep each of the substories valuable.  It's very easy, however, for this to mean that we lose negotiability - as we split a story into smaller pieces, we start making decisions on how we're going to implement the story, and each piece becomes a story.

For example, we might have a story like "As a returning customer, I want to be recognized during a future purchase, so that I won't have to re-enter all my information."  If that story is too large, we might break it into stories like "As a returning customer, I want to provide a username and password when I check out, so that my previously used information can be retrieved," and "as a returning customer who logged in successfully, I want to select a previously used shipping address during checkout, so I don't have to re-enter the data," and "as a new customer, I want to be prompted to optionally register an account and provide a password during checkout, so that I can be recognized in the future."

All these sub-stories are potentially small, and all seem valuable.  But notice that we have a lot more decisions made about how the feature will be implemented than we had in the original story.  We're using a username and password (as opposed to, say, storing the information in a cookie on the user's machine for later retrieval).  We added a step to the checkout process to choose a password (as opposed to, say, sending an e-mail with the option to register post-purchase).  We have the ability to select from multiple previously used shipping addresses (as opposed to the most recent one).

All these decisions might be correct, and this might be the best implementation.  But by breaking down the story this way, we've clearly implied a number of details of the implementation, and so reduced the negotiability of the story.  Creative ideas the developers might have (for example, using a third-party ID tool instead of rolling our own) may be squeezed out.

How can we reduce some of this tension?  Again, a big piece is awareness - understand that when we have a story that needs to be broken down, we might need to make certain decisions.  Second, make sure the development team is involved in those decisions - if we're going to restrict their available choices by our choice of stories, let's make sure that the way we're breaking things down is what the development team thinks is correct (as opposed to being "the only way the analyst who broke the story down could think might work.")

Independent vs. Small

Sorry, Small.  I know I'm picking on you, but you're hard to do well. 

As mentioned above with "Small vs. Negotiable," one common artifact of the process of breaking a story into smaller, valuable pieces is to make certain technical decisions up front, thus reducing the overall negotiability of a feature.  A related common practice is to break a given story up into smaller pieces by creating smaller stories that need to be done in a required order to be meaningful, which means they're no longer independently prioritizable.

For example, if we're building what we've decided is a three-step registration wizard, we might break it up into the logical steps the user goes through, like "Provide personal contact details in the wizard," "Provide address details in the wizard," "Provide payment details in the wizard," "Check the data collected by the wizard for errors," and "Create the profile based on wizard data."  As noted above, this breakdown is less negotiable, but let's say we talked it over with our developers, and we all agree this is the right way to do registration. 

Now we might have a different problem, which is that the stories are assuming they need to be done in order.  If the only way to get to the "Payment Details" is from a button on the "Provide personal contact details," then logically we need to do contact details first.  And we probably can't do "check for errors" or "submit" until we've collected all the data. 

The problem with creating "flows" of stories like this is twofold.  First, we've reduced our ability to reconsider pieces of the flow - if we decide later "You know what? Storing payment data is a security risk we don't want to take, and it's not really hard to re-enter it, so let's cut that story," then we have to re-work (at least) the "check for errors" and "submit" stories, and maybe others.  Second, by having flows, we can "trap" ourselves for development - if the stories MUST be done in an order, then we can't do story 2 until story 1 is done, and can't do story 3 until after story 2.  This means we "single thread" on these stories, which means we can't have more than one in play at a time (since they depend on each other).  This can lead to long lead times. 

The first thing I'd consider to reduce this tension is thinking about whether we've actually split the stories the right way, or if there's a different split that allows more independence.  The piece that's suspicious to me here is the separate "check for errors" and "submit."  A possibly better split would be "collect personal information from a wizard and save it to a profile," "collect address information from a wizard and save it to a profile," and "collect payment information from a wizard and save it to a profile."  Rather than needing to build up a whole profile before saving, add the mechanism with each piece to error check and save that piece.  If we want to do the "payment" story first, so be it.  We might have (at least temporarily) profiles that are just anonymous payment information - is that actually a problem, or just a different way of thinking about profiles?  That's not to say EVERY dependency issue can be resolved by a different splitting of stories, but it's sometimes the case that we block ourselves by thinking too narrowly about our options (in my example, thinking of the "profile" as an atomic thing that we need to build completely before submitting).  

Negotiable vs. Estimable

When stories are negotiable, a range of potential solutions are possible, so long as we solve the underlying business problem.  However, a "range of possible solutions" can be difficult-to-impossible to estimate.

For example, a story to "provide feedback to a user when they are missing required data elements on the registration form" could be as simple as re-displaying the form with the missing fields highlighted in red, or could be as complicated as providing a wizard to guide the user through the form, with step-by-step instructions translated into the user's language.   

It would be nearly impossible for a reasonable development team to provide a single estimate covering both ends of this spectrum with any degree of accuracy.  On the other hand, if we decide that, in order to be estimable, we will choose to estimate the first option (redisplay with a color highlight), then we've reduced negotiability and made a choice that we might regret later (when we have a great framework we need in 6 other places to provide text guidance, but we have to do something different here because "the story says to").

To resolve some of this tension, a good practice is to separate the necessary (the business problem and the acceptance criteria) from the assumed (the specific implementation we're assuming in order to estimate).  I like to keep a separate "estimating assumptions" section in stories, where we clearly record what assumptions our estimate was based on.  Related, we need to revisit these estimating assumptions - if it becomes clear that the way we're going to implement the story is different from what we assumed earlier, we should revisit whether the estimate still makes sense.  Finally, we need to set the expectation with the team that solving the business problem is more important than justifying our estimate - if there's a better way to solve the problem than the one we assumed when we estimated, we will pursue the better solution rather than justify our estimate.

Negotiable vs. Testable

I actually see two separate sources of tension here I want to tease apart.

First, similar to the tension with Negotiable vs. Small, there's a temptation to assume a specific implementation in our acceptance criteria for a story, either to make the story more concrete, or because there's only one possible solution in the mind of the person writing the acceptance criteria, and they don't realize other solutions are possible.

Second, similar to the tension with Negotiable vs. Estimable, a story that's highly open to a variety of options is very difficult to plan specific test cases for.  We can't plan what we're going to test if we don't know exactly what the code is going to do yet.

For the first tension, I think it's important to think about our acceptance criteria as different entities from the tests we're going to execute to ensure the code works properly.  Acceptance criteria ought to be "properties that EVERY acceptable solution to the problem must have."  It takes thought to make our acceptance criteria implementation agnostic.  It also introduces a translation step - when we test the actual implementation, we need a mapping from the abstract "thing we need" to the specific "what we're going to do."

Consider the following two formulations:
  • GIVEN I have not provided all the required registration information, WHEN I attempt to register, THEN I should get clear feedback that my information is incomplete AND I should be told what additional information is required.
  • GIVEN that I have not filled out some required fields on my registration form, WHEN I submit the form, THEN I should see my form re-displayed with an error message at the top and the missing fields highlighted in red.
The first form is relatively implementation agnostic.  Multiple implementations could satisfy it.  However, it's not easily executable - to actually test this, we need to know what "required information" is, what "attempting to register" means, what "feedback" will look like, etc.  The second formulation is considerably closer to a runnable test case.  However, it assumes a number of implementation elements - a "form," some kind of "submit" action, "fields" to highlight, and a separate "error message" at the top.  
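To make the gap between the two formulations concrete, here's a minimal sketch of what the second, implementation-specific formulation might look like as a runnable test.  Everything in it (the form dictionary, the validate_registration helper, the specific required fields) is invented for illustration - a real team would write this against their actual code:

```python
# Hypothetical example only: the names below are NOT from any real system.
REQUIRED_FIELDS = {"name", "email", "address"}

def validate_registration(form_data):
    """Return the set of required fields the user left blank."""
    return {field for field in REQUIRED_FIELDS if not form_data.get(field)}

def test_missing_fields_are_reported():
    # GIVEN a registration form with some required fields left blank
    form = {"name": "Ada Lovelace", "email": "", "address": None}
    # WHEN the user submits the form
    missing = validate_registration(form)
    # THEN the missing fields are identified (so the UI can highlight them)
    assert missing == {"email", "address"}
```

Notice that even this tiny test bakes in the same implementation decisions the second formulation does - a "form" with named "fields" and a submit step.  The first, implementation-agnostic formulation can't be written as an executable test until those decisions are made.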

The second tension is more a question of when we need to do the translation from an abstract "need this to be true" into "we will do the following to know that it's true." It's virtually impossible to practically test a user story if you don't know whether you're submitting a form full of data or retrieving information from a third-party repository.  At some point, we need tests specific to our chosen implementation. 

There are a few things I suggest to lessen this tension.  First, just as I suggested separating the "necessary" from the "assumed" when talking about Negotiable vs. Estimable, I suggest separating Acceptance Criteria (describing the necessary) from Acceptance Tests (describing how we'll verify we meet the acceptance criteria).  Ideally, we present the developers with the Acceptance Criteria, and work out the details of the Acceptance Tests with them when we're ready to implement the story and so have to pick a specific design.  Related - just as it's often an anti-pattern to have too large a backlog of fully fleshed out user stories, we should resist the temptation to build significant test plans for stories we haven't yet begun to work on.  One technique I've found successful with multiple teams is to have a "story kick-off" with the analyst, testers, and developers (and ideally product owner) when we start development on a story.  This ensures we have a common understanding of what the goals of the story are, and that the developers can articulate their expected vision for the implementation of that story.  This allows the testers to develop more concrete test cases against the actual design "just in time," when that design is known.

Wrapping up

As I mentioned earlier, I could go on with this - there are a lot of other places where there can be perceived tension between the various tenets of INVEST (Independent vs. Testable, you got lucky this time...)

What I hope I've accomplished in an overly-long blog post is to at least illustrate WHY there are potential tensions, and that no matter what you're doing on your team, you ARE making some of these tradeoffs.  If things are going well, you're probably making the tradeoffs that work well for your team, so well done.  If things are frustrating, hopefully I've suggested some places to look and some balances you might want to revisit. 

Tuesday, May 7, 2013

Trust the people, not the process

In my view, one of the most misunderstood pieces of the Agile Manifesto is "People and Interactions over Processes and Tools."

Too many people believe this point is limited to one or both of the following statements:
  • The process steps in your SDLC in Agile are different from the process steps in waterfall.
  • Changing to Agile means you need to use different tools like Mingle or Rally instead of ClearCase, Trac, or MS Project.
In fact, "People and Interactions over Processes and Tools" is a much more fundamental mindset change - one of the hardest things to accomplish in an Agile transformation.

Quick quiz.  In a "traditional" waterfall environment, who is responsible for ensuring the team produces high-quality, useful software?

Is it the project manager, who's responsible for the overall project (even though they don't build anything)?  The business analyst, who puts together the requirements (but doesn't execute them)?  The developers, who build the software (but don't have a lot of say in what we're building or why)?  The testers, who test the application meets its requirements (but don't have any say in what those requirements are, and in many cases don't really understand them)?

I believe the "right" answer for waterfall projects is that it's not any person's responsibility.  Instead, it's the PROCESS that is responsible.

Here's how this usually works.  Before the project begins, a subset of the team spends a lot of time putting together a highly detailed set of requirements.  They write detailed use cases and produce high-fidelity comps.  They make all the decisions on what the moving pieces need to be, and create class diagrams and database models for everything.  They break all the development work down to highly detailed tasks.  And then those artifacts are handed off to "the team" to execute the tasks.

How does a developer on the team know what they're doing is right?  Easy!  They read the plan.  If the plan says to build these three classes with these 12 methods, they build those classes.  Why those classes and not others?  Because the plan says to, and I trust that the process put together the "right" plan.  Is the stuff I built useful?  It must be - the plan said so!

How does a tester know we built a good user interface?  Easy!  They compare "what it does" to the list of "what it's supposed to do" in the requirements.  Can a customer actually figure out how to use the screen to accomplish a goal?  They must be able to - the plan said so!

The "build a plan, follow the plan" mentality actively reduces agency by individual team members.  Team members are supposed to do "their job," and if everyone does, well, I guess we'll get high-quality useful code as a byproduct.  The team needs to have faith that the people who built "the plan" knew what they were doing.   They understood all the customer needs, all the architectural foibles, all the possible edge cases, and put together a plan that covered all the contingencies (other than a few tweaks that will come in through the oh-so-friendly change control process).  Because that was their job.  Doing what they told me is mine. 

If I did my job, I'm no longer accountable.  Hey, don't blame me that the system cratered - I built it just the way the architect designed it.  Don't blame me - I tested everything against the documented requirements.  Don't blame me - we built exactly what the customer told us when they signed off on the specs.  If it doesn't work, it's not MY fault.  We all followed the process!  

In an Agile world, yes, we have different steps in our processes.  Yes, we document our requirements differently.  Yes, we are more iterative in our approach.  Yes, we use different tools.  But more importantly than ANY of those things, we stop believing that "the process" is the thing that produces positive results.

In Agile, if a developer doesn't believe that a given user story is the right way to achieve that story's stated goal, we EXPECT that developer to question it.  They need to have a conversation with the analyst who put the story together, the product owner, the customer - whoever is the right person.  We don't trust that the "story generation process" produced the right story.  Instead, we trust our smart and thoughtful developers to ask reasonable questions and expect either good answers or appropriate changes.  If a tester doesn't think that the acceptance criteria documented for a story really capture what's necessary for a user to accomplish the stated goal, we don't expect them to ignore that belief and blindly trust that "the process" captured what's needed.  We expect them (indeed, we demand of them) to express their concerns and hold others accountable for getting it right.  If the user has a problem and suggests a possible solution, we expect the business analyst to work with them to explore other solutions and validate that their suggestion really is the best approach, rather than blindly trusting that "the customer must know exactly how to solve the problem" and writing down exactly what the user said.  

To move to Agile effectively, we can't just swap out a team's process with a different process, or their tools with other tools.  We have to attack the mindset that the processes and tools are the things that make us successful.  We have to attack the mindset that "doing your job" means "doing what you're told."  We have to attack the mindset that blindly "doing your job" inherently leads to success.  We have to attack the mindset that understanding what someone meant means reading a document.  We have to attack the mindset that responsibility for the project being successful resides with "someone else."  We have to attack the mindset that the thing that makes us successful in Agile is doing standup meetings and estimating in story points. 

This is a hard problem.  It's one a number of putatively "agile" teams I've worked with have not in fact solved. 

Customers and Product Owners need to expect and get used to being questioned on whether what they asked for is right.  Analysts need to get used to pushback on whether what they wrote up is the right thing.  Developers need to get used to being questioned on why they built it that way.  Testers need to get used to being expected to be expert on the business problem, and pushing back on things that (strictly speaking) aren't in the requirements.

People who "grew up" in traditional software environments are often scared of this.  How can I "do my job" when I'm no longer completely sure what "my job" means?  Won't my manager yell at me if I push back on "what the customer asked for" and "slow down" the process?  Will I be called on the carpet if I do something that's not on the script and it turns out to be wrong?

The key to all of this is establishing trust.  The team needs to feel trusted to be good at their jobs.  Trusted to solve problems creatively.  Trusted that they will make the right decisions.  Some of this trust can come from within the team.  But even more important, the team needs to feel trusted by management - that they won't be constantly second guessed.  That they have freedom to occasionally make mistakes.  That the time they spend talking through issues won't be considered "waste" time to be minimized.  That their estimates will be respected.  But most importantly, that they know what they're doing, and they're trusted with ownership of their own quality. 

A team that's trusted, and that trusts each other, will naturally build the communications links necessary to validate their assumptions.  They'll talk to each other constantly.  They'll develop "just enough" processes to ensure they all know what's going on, and that everyone is aligned.  A group of trusted people with a clear goal is the most powerful force in software. 

Thursday, May 2, 2013

The Power of "I Don't Care"

A few years back, one of the best Product Owners I've worked with taught me the three most important words in his vocabulary.  "I don't care."

Seems a little crazy, right?  Shouldn't the Product Owner always care?  Why am I spinning this like it's a good thing?

Let me paint the picture.  I was the lead analyst working on an Agile pilot program for a large financial company.  The project focus was on improving their mathematical modelling tools.  The Product Owner I was working with wouldn't recognize that job title - he was one of the key users of the tools we were developing, and had taken on the job of being both our "go to" subject matter expert and the keeper of prioritization for his requests and the requests from other users.

As part of the Agile pilot, we were building user stories.  We'd established the basic "As a...I want...So that..." story sentences.  We'd put together some acceptance criteria for each.  We'd moved into sketching out some additional details of the "top of the list" user stories we'd be developing soon.  I had laid out some questions of the "should it work like this, or more like that?" variety.  And as we were talking about them, he looked at me and said, "Look, Mike, I don't care."

What do you mean you don't care?  He told me (wildly paraphrased): "We've already talked about what I care about.  You understand the goal.  You understand the acceptance criteria.  I appreciate you asking, but for a lot of these details, I don't feel strongly.  Solve my problem however makes sense to the team and I'll be happy."

So we ran quickly through my list of questions, marked most of them "don't care," talked through the few that he had an opinion on, and we were off and running for development.

Why do I think this is so significant?

For me this was a great reminder that "if you ask the question, most people will feel obliged to provide an answer."  If you ask your customer/user/product owner "what font do you want us to use for the pop-up help text?" most will give you an answer.  Because you asked, and expected an answer.  This is human nature.  "I don't care" is an out-of-the-box choice most of the time.

And it's an important choice to have available.  Because a Product Owner telling you they don't care about a detail isn't telling you they don't care about the product.  Or that they're unhelpful.  They're telling you they TRUST you.  They trust the team to make a good choice.  They're giving the team permission to solve the Product Owner's business problem as well as they can, without introducing constraints that aren't important to them.  They're telling us to focus on the things they DO care about. 

The takeaway here shouldn't be that we stop asking our product owners questions.  Simply assuming the Product Owner won't care and not talking with them is a great way to make them feel they're not listened to.  What we need to do instead is set the expectation that they're ALLOWED to tell us they don't care.  That the fact that we asked the question does NOT obligate them to decide the answer.

A big piece of this is expectation setting.  Make sure before you start talking through details with your Product Owners (and Stakeholders and SMEs and Users) that they're aware that it's OK to tell us they don't feel strongly.  Remind them they'll have the opportunity (in desk checks and demos) to review what we're building, so not every decision has to be made up front.  Give them permission to say "I don't care."

Another piece to think about is framing.  "Hey, Jane, should the text be left-aligned or centered?" demands a binary decision.  "Hey, Jane, we're brainstorming on the display of the pop-up help text.  Is there anything we need to talk about?  Or would you rather just see a mock when we think we've got it?" asks the Product Owner to first think about what's important to them, and invites them to provide detail they care about.

A final piece to think about is having awareness about who ought to be involved in certain decisions.  There are certain decisions that the product owner legitimately needs to be involved in.  There are other decisions the team needs to make that might not.  "Should our web services return XML or JSON?" is a decision a team might be faced with.  If your project is an architectural re-design moving to SOA, and your Product Owner is the Chief Architect, then you should probably pull them into that decision.  If you're building an e-commerce site and the Product Owner is the Head of Retail, maybe not.  Again, it is human nature to try to answer a question if the question is put to you.

"I don't care" are three powerful words.  I hope I've helped you hear them more often.  

Edit: Embarrassing typos.  Thanks, Bill!