Sometimes, conveying a user story in the equivalent of a tweet-sized sentence just doesn’t work very well for me. After struggling with it on my first Agile team, we tried several variations before we collectively settled on one we liked. I later applied the same approach to my current team, with great success.
When translating a client’s “ask” into a user story, I start with the traditional one-sentence format following a pattern of who, what, and why. For example, “As an account holder I want to log in to my bank account from my web browser so that I can view my balance.” Agile bloggers point out (such as here, here, and here) that this structure is not a fully formed requirement. Rather, it is a conversation starter between the stakeholders and the development team. It helps all parties understand the goal of the work.
However, the story is missing a critical piece of information – a definition of done. Knowing when the team is “done” is key to keeping work from dragging on indefinitely. To address this, I add a statement of acceptance criteria to the story that defines when we are done: “… We will know we have done that when _________.”
I don’t know about you, but I have never seen a user story with just one acceptance criterion. Most of the time, it takes several points to fully describe the definition of done. The longer the list of acceptance criteria, the harder it is to fit into the description field in the developer tool.
At this point, I consider whether the expanded sentence will be “enough,” or whether a short document is more appropriate. In keeping with the spirit of the Agile Manifesto, as an Agile Business Analyst, I want to provide just enough documentation to support the development of working software.
I have other user story dimensions to consider too, and my team virtually always asks about:
- Context or background. What business problem or opportunity makes this work necessary? We also want to know whether there is stakeholder sensitivity around the problem or opportunity we are addressing. Knowing this, we can refine the solution to better meet the acceptance criteria and hopefully exceed client expectations.
- Preconditions. Are there any preconditions, and what are they? These are the conditions that must hold (or explicitly not hold) to trigger the user story.
- Assumptions. What do we believe to be true but still need to confirm? In the ideal world, the team only works on the story after we have confirmed or refuted all identified assumptions. Reality often dictates otherwise. By stating them, at least the team knows what the assumptions are and can work around them for a little while as I continue researching.
- Testing approach. How will we test this work? This is a great opportunity to identify special scenarios and get a head start on a more detailed plan.
- Terminology. Are there any new terms I need to add to the business glossary? If not, I can leave this section out.
- Business rules. What business rules affect this work? How do they impact the user story, and how does it impact them? Has a new business rule emerged? Again, if it’s unnecessary, I don’t include it.
The beauty of it is that each element is flexible enough to allow for whatever level of detail the development team wants. As a final bonus, I already have most of what I need to supplement my repository of reusable requirements.
When I am done, my final user story document is still just one or two pages and covers:
- the context of the user story (its background) if it’s helpful,
- the user story itself,
- its acceptance criteria,
- its preconditions, or a note that we did not identify any,
- known assumptions, or a note that we did not identify any,
- the high-level testing approach,
- and optionally, new or revised business rules and terminology.
Whenever the teams I support have skipped over examining one of these items, something has come back to bite us. If we were lucky, it surfaced in testing. When it surfaced in production, it cost twice as much to fix, counting both the rework and the credibility hit we took.