Development teams using Scrum or similar agile frameworks often find that the constraints on the system represented by Nonfunctional Requirements (NFRs) are a pain to capture and reference in product backlogs. The NFRs aren’t acceptance criteria, but the “story” isn’t really done unless it meets them. NFRs also usually span multiple stories, or the entire application itself (performance, for example), so they cannot be managed at the individual story level.
So how can we represent these constraints in the backlog?
User Story Constraints
One way to model NFRs is to use constraints within the description of a user story. Here is an example:
The Story
As a Product Owner, I want to be able to communicate to developers which constraints apply to their work, and to ensure that the constraints are tested for, so that our system meets nonfunctional requirements such as performance and usability.
Acceptance Criteria
- Product Owner must be able to store NFRs in a manner that can be consumed by other team members.
- Developers must be able to access the NFRs.
- Testers must be able to access the NFRs.
- Individual stories must be linked to their related NFRs.
- Product Owner must be able to centrally manage NFRs to support linking to multiple stories at once.
Constraints
- NFRs must be accessible within the backlog management tool used by the team.
The user story above shows one way to capture an NFR: as a constraint listed directly on the story. This method works well for constraints that apply directly to a single user story and do not span multiple stories (or span only a few).
The constraint is not listed as an acceptance criterion because it implies implementation detail: we are enforcing a specific path on the implementation team, constraining their work. For this reason, I prefer to store these types of requirements as a separate Constraint directly in the user story definition.
Central Management
In our user story, however, we haven’t met one of our acceptance criteria: central management. When an NFR (or constraint) applies across multiple stories, placing the constraint directly in the user story definition means that we lose the link between these stories. We cannot change the NFR centrally and have it update everywhere, and we cannot find all the stories that are affected by an NFR. So what do we do?
One solution is to create items within the backlog management tool that are not intended to be worked on (Constraints). These items can be cross-referenced by the user stories:
Constraint #1 (C1):
- NFRs must be accessible within the backlog management tool used by the team.
User Story #1 (US1):
- Story description
- Acceptance Criteria: Must 1, Must 2, Must 3, etc.
- Constraints: C1, C2, etc.
This model is consistent with the way some tools track impediments or epics. A single impediment could block many stories, and an epic usually breaks down into multiple stories.
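To make the cross-referencing concrete, here is a minimal sketch in Python of what such a model could look like. The field names, statuses, and example items are illustrative assumptions, not the schema of any particular backlog tool:

```python
from dataclasses import dataclass, field

@dataclass
class Constraint:
    id: str
    text: str
    status: str = "Open"  # a constraint item reaches Done via system testing

@dataclass
class UserStory:
    id: str
    description: str
    acceptance_criteria: list = field(default_factory=list)
    constraint_ids: list = field(default_factory=list)  # links, not copies
    status: str = "In Progress"

c1 = Constraint("C1", "NFRs must be accessible within the backlog management tool.")
us1 = UserStory(
    "US1",
    "Product Owner communicates constraints to the team",
    acceptance_criteria=["Must 1", "Must 2", "Must 3"],
    constraint_ids=["C1"],
)

# Central management: editing c1.text updates the single shared definition,
# and finding every story affected by an NFR becomes a simple lookup.
affected = [s for s in [us1] if c1.id in s.constraint_ids]
```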
The Problem with ‘Done’
Okay, sounds easy. As we find NFRs, we create a new constraint, and link our story to it. Bam. Central management, direct references, we got ourselves some nice traceability.
Except how do we know we have completed an NFR? If our definition of ‘Done’ includes completion of the constraints, we need a way to reach this goal.
An NFR like performance is usually tested at specific times in the project, not on each individual user story. You may run only a single security scan for SQL injection, and other system tests may not be run at all due to budget constraints. How does a story close if we aren’t testing its constraints?
I have a few suggestions for this situation:
- If we can cheaply test a constraint as it applies to a specific user story, we should do so and log any issues found as soon as possible (see the sketch after this list).
- Constraints (and any associated issues found) should not block a story from reaching Done; they only guide the team’s implementation and provide early feedback.
- Constraints themselves must reach Done via system testing.
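As an example of the first suggestion, a cheap story-level check for a performance constraint might look like the following pytest-style sketch. The constraint (“search responds in under 2 seconds”) and the search_products function are hypothetical stand-ins for whatever the story actually delivered:

```python
import time

def search_products(term):
    # Stand-in for the feature code built by the story.
    catalog = ["anvil", "rocket skates", "tnt", "bird seed"]
    return [item for item in catalog if term in item]

def test_search_meets_performance_constraint():
    start = time.perf_counter()
    search_products("rocket")
    elapsed = time.perf_counter() - start
    # Early feedback only: a failure here is logged as an issue, not a
    # blocker. Full load testing happens when the constraint item itself
    # is scheduled into a sprint.
    assert elapsed < 2.0, f"Performance constraint violated: {elapsed:.3f}s"
```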
I believe these constraint items need to be treated like true backlog items and scheduled into a sprint based on priority. Work toward a constraint may begin on individual stories, but completion of the constraint itself is blocked by completion of those stories. We need to allow the stories to be built, tested, and reach a ‘Done’ state. Once all the stories for a constraint have been finished, we can schedule the constraint into an iteration and begin testing it.
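Expressed as code, that scheduling rule might look like this sketch, where a constraint item becomes eligible for its own testing sprint only once every story linked to it is Done (the dict shape is an assumption, not a specific tool’s schema):

```python
def constraint_ready_for_testing(constraint_id, stories):
    """A constraint can be scheduled for testing once all linked stories are Done."""
    linked = [s for s in stories if constraint_id in s["constraints"]]
    return bool(linked) and all(s["status"] == "Done" for s in linked)

stories = [
    {"id": "US1", "status": "Done", "constraints": ["C1"]},
    {"id": "US2", "status": "In Progress", "constraints": ["C1"]},
]

print(constraint_ready_for_testing("C1", stories))  # False until US2 is Done
```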
This means we have a new bucket of work that needs to be tracked and estimated. While we initially used the constraint item only to capture the constraint text in a central place, we now have to worry about testability and effort estimates to ensure we have actually completed the requirement.
Does this sound familiar?
Any of you with a project management background have likely put together a project plan that included performance tests or other types of nonfunctional testing. These blocks usually sit near the end of the project timeline, after feature development is expected to be complete.
When transitioning to agile, our backlogs usually capture only the feature-development portion of that timeline and forget the nonfunctional requirements (like documentation) whose effort also needs to be tracked. Iterations are time boxes in which we spend effort that delivers value. Whether we like it or not, documentation delivers value. Performance delivers value. A system that cannot be hacked by inserting SQL into the query string delivers value. These aren’t exciting new features, but I believe they are an integral part of the system and should be tracked in the backlog and scheduled into iterations.
Yes, that does mean estimating the effort and figuring out the tasks during Sprint Planning.
Prioritization is also required, because the end of a project introduces its own constraints: the time and budget remaining. There may not be enough of either for performance testing AND accessibility testing, so the Product Owner needs to decide which constraints will be tested in order to reach the Done state.
Are we ‘Done’ Yet?
Earlier, we discussed the problem of ‘Done’ in relation to stories, and we allowed stories to reach a ‘Done’ state even if their constraints had not been tested yet.
However, our project “Definition of Done” likely enforces that our NFRs need to be met, which means we need to be able to report and track work on the NFRs.
With constraint items in the backlog, the backlog itself tracks how much effort remains to reach project completion, and we can check whether any open NFRs still have bugs or unfinished stories. This is easier than the traditional ‘cross-check’, where a business analyst reviews all the stories and completed test cases and verifies them against a list of NFRs from a business requirements document.
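A backlog-based cross-check could be as simple as the following sketch, which lists the constraint items that are still open along with the unfinished stories linked to them. As before, the field names are assumptions about a generic backlog export:

```python
def open_nfr_report(constraints, stories):
    """Open constraint items, their remaining estimates, and blocking stories."""
    report = {}
    for c in constraints:
        if c["status"] != "Done":
            unfinished = [s["id"] for s in stories
                          if c["id"] in s["constraints"] and s["status"] != "Done"]
            report[c["id"]] = {
                "remaining_effort": c.get("estimate", 0),
                "unfinished_stories": unfinished,
            }
    return report

constraints = [{"id": "C1", "status": "Open", "estimate": 8}]
stories = [{"id": "US2", "status": "In Progress", "constraints": ["C1"]}]
print(open_nfr_report(constraints, stories))
# {'C1': {'remaining_effort': 8, 'unfinished_stories': ['US2']}}
```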
In Summary…
I suggest the following steps:
- Create backlog items for each NFR. These items represent the testing of the NFRs and should be scheduled into iterations.
- Create links on user stories to their associated NFRs. This informs the development team which constraints apply to their work and helps with estimating the story.
- Ensure there is a way to isolate progress against NFRs (constraint backlog items) when reporting backlog burndown or burn-up, as in the sketch below. Many organizations will want to see reports on the team’s progress against the NFRs.
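For that last point, one possible sketch: split remaining effort by item type so constraint (NFR) progress shows up separately in a burndown. The ‘type’ and ‘estimate’ fields are again assumed, not standard to any tool:

```python
def burndown_by_type(items):
    """Remaining estimate, split so NFR progress is visible on its own."""
    remaining = {"story": 0, "constraint": 0}
    for item in items:
        if item["status"] != "Done":
            remaining[item["type"]] += item.get("estimate", 0)
    return remaining

backlog = [
    {"type": "story", "status": "Done", "estimate": 5},
    {"type": "story", "status": "In Progress", "estimate": 3},
    {"type": "constraint", "status": "Open", "estimate": 8},
]
print(burndown_by_type(backlog))  # {'story': 3, 'constraint': 8}
```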
Comments
Thanks for this article. What do you see as problematic if we were to make constraints an integral part of feature stories, as opposed to having stand-alone NFR stories?
I have tried this before and it worked well. I was wondering if you had different experiences.
Thanks for checking out the article, Itamar! Always good to hear about others’ experience. I assume you use the term ‘Feature Stories’ to refer to higher-level stories that encompass multiple user stories about a feature. In that case, if the NFR/constraint applies only to that feature, then it absolutely makes sense on the Feature Story. The Feature cannot be considered complete without the constraint having been met, and you also get centralized management of the constraint.
That said, other NFRs (such as system performance or security testing) are system-wide and do not apply to a specific feature. These constraints generally depend on completion of a shippable release, and they carry effort and specific tasks unrelated to other user stories. Trying to shoehorn these types of constraints into a specific Feature is usually costly (e.g., multiple rounds of performance/security testing for each feature).
If you happen to be following continuous deployment (or even continuous delivery at a feature level), then these constraints absolutely should live on the Feature Story, to make sure all testing is completed before the feature goes out to production.