At Arcbees, we’ve had more discussions than I’d like to admit about issue types in sprint planning. On any given day, the same issue might be reclassified from story to improvement and back again, for reasons that seemed to have little to do with the issue itself.
If issue definition were inconsequential, this would pose no problem for us, but we have found that mis-categorizing issues actually impacts morale, velocity, estimation, and a host of other factors that have real consequences for the success of a project.
The realities and constraints of our work amplify the consequences of improperly classifying issues. We’ve come to identify several key factors that determine how smoothly the development process unfolds for us at Arcbees, all of which are affected by improperly classified issues. Two of the most important for us are:
- time zones;
- remote workers.
In this article I will describe a tool we created to speed up and simplify the correct classification of issues at Arcbees. First I will explain the negative consequences of getting issue classification wrong; that will make the value and simplicity of the solution clear. We hope our tool helps your development team avoid these problems and enjoy the benefits of easier and more accurate issue categorization.
Bad issue classification will kill you!
We observed four principal negative impacts of issue misclassification in our projects.
We use Jira to manage our projects. Jira’s documentation defines four types of issues, as listed below (https://confluence.atlassian.com/display/JIRA/What+is+an+Issue):
- Bug — A problem which impairs or prevents the functions of the product;
- Improvement — An enhancement to an existing feature;
- Story — A new feature;
- Task — A task that needs to be done (love this one :P).
The team needs to communicate about the work that needs to be done in the project. Communication quickly becomes confused if team members start to define all issues as “improvements” to the product, which becomes tempting after the first few iterations. The key question to ask is: “Is the feature already developed or not?” If yes, the issue is an improvement; if no – if new value is going to be experienced by the end users – it is a story. You don’t want to fall into the trap of calling too many issues “improvements”, because if you do, you’re going to miscalculate the real velocity of your team.
Frustration and feeling no progress
The value you create in a product, as a team or as a developer, is defined by the stories you deliver at the end of each sprint. Each completed feature adds value to the product. The other types of issues – bugs, improvements or tasks – do not deliver new features, so developers working on them may feel like they aren’t adding value to the product. The value delivered by working on bugs, improvements and tasks often takes time to become appreciated, and may never be apparent to the end user. Developers assigned to these issue types may feel frustration that their efforts do not count when value delivery is assessed, and this can have a negative impact on team morale.
This is important. It means that developers’ self-perceptions of their contribution to projects are directly linked to how issues are identified. Once we realized this, we began paying much more attention to getting issue definitions right, and that had a significant impact not only on each Arcbees team member, but also on the whole team.
By sharpening our process for defining issue types, we were ultimately able to clarify further the value of working on each type. This helped us counteract the perception that stories were the only important types of issues to work on. Developers were more willing and able to both define issue types properly, and value their own work on issues of every type.
Caring about estimation meetings
A lack of clarity about issue types also has a negative impact on estimation meetings. Estimation meetings are important at Arcbees, since we need to manage project scope and also coordinate the efforts of remote developers on a distributed team. Estimation meetings clarify the work that needs to be done in a sprint, and help us assess progress on the project.
Here again, the fundamental differences between improvements and stories assert themselves. In theory, if you follow best practices, you only estimate the size of stories. You do discuss all tickets in the backlog, but you are not supposed to estimate the other issue types.
Confusion about issue types can really kill estimation meetings. Everyone gets frustrated with the lack of clarity over how issues are classified, and after a few estimation meetings developers begin to lose focus. The meetings become empty and formalistic, with developers putting less of their heart into them. A process for clarifying issue types corrects this problem and brings focus and enthusiasm back to estimation meetings.
Velocity should become consistent as a project takes off, and this should become apparent at the end of each sprint. It is important for product management that you be able to project velocity for upcoming sprints by looking at the history of your previous sprints.
Since you only estimate stories, the only complexity points which contribute to velocity assessment at the end of a sprint are story points. What if most of the work in that sprint was done on improvements or bugs? With the traditional way of estimating stories and calculating velocity, you wouldn’t see the real work involved.
This is why identifying the right type has a direct impact on the meaning of what has been accomplished. And since you look at your previous sprints to gauge what you are capable of doing, it also has an indirect impact on your work commitments for the next sprint, and on the consistency of your velocity.
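To make the velocity problem concrete, here is a minimal sketch in Python. The dictionary-based issue representation and the function name are our own illustration, not Jira’s API: with classic velocity, only completed story points count, so any real work hidden behind a misclassified “improvement” or a bug simply disappears from the metric.

```python
def sprint_velocity(issues):
    """Classic velocity: sum the points of completed stories only."""
    return sum(
        issue["points"]
        for issue in issues
        if issue["type"] == "story" and issue["done"]
    )

# A hypothetical sprint where most of the effort went into
# non-story work: the metric only reflects the 5 story points.
sprint = [
    {"type": "story", "points": 5, "done": True},
    {"type": "improvement", "points": 8, "done": True},  # real work, invisible
    {"type": "bug", "points": 3, "done": True},          # real work, invisible
]

print(sprint_velocity(sprint))  # prints 5
```

If that 8-point “improvement” was really a story, the sprint’s true velocity was 13, not 5, and the next sprint’s commitment will be planned against the wrong baseline.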
Problem explained! Show us the real deal!
At first, we tried to clarify issue identification by writing better definitions of each type. This helped, but by itself it was not enough. We kept finding edge cases that did not fit neatly into our categories. We quickly decided to improve the process by creating an infographic to support better decision making. This made all the difference.
What better way to represent issue classification than a decision tree! Visuals are easier to use and support better outcomes than verbal definitions in team meetings. We started from the general definition of an issue type and drew this decision tree. It helped us define a clear path for choosing the right classification for each issue in a sprint.
As you can see in this tree, we started by defining a story and epic:
A feature gives value to the end user and wasn’t previously available
For those of you who are used to Agile terminology, you know a story can be reclassified as an epic. It’s not just a matter of the complexity of the issue, it’s a matter of whether or not it can be split into smaller stories.
Let’s take this one as an example:
“As a user, I want to be able to manage my documents, so that I can classify them as I wish.”
This is actually an epic: it can encompass many smaller stories needed to describe all the functionality required to deliver its value. Epics usually describe a high-level requirement of your application.
From the definition of a story, we identified what would differentiate a spike from a story. In this case, the decision point is “Do we know all the details?”.
In our projects, when we need to work on a new feature but we are not sure which library we should use or how it should be implemented, we create a “spike”.
A spike is a time-limited effort to research a specific feature before implementing it.
The next decision point differentiates a story from an improvement. This was the decision that usually stimulated the most discussion and disagreement in our meetings. Since the official definition of an improvement is that it affects an existing feature, and features deliver value to users, we decided that improvements, by definition, should not be detectable by the user.
We defined an “improvement” as follows:
An improvement is an issue which affects an existing feature and has no effect on how the user experiences the application.
Refactoring and testing a piece of code to make it cleaner would be an improvement, not a story, even though it might be a big piece of work that might impact velocity. For this reason, improvement issues, like spikes, are time-capped within an iteration. Impact on velocity is controlled through the time cap.
Changing the label on a button would be a “story” by our definition, because it would improve the UX, and so delivers value to the end user. In the past, some team members wanted to call these kinds of small changes related to existing features “improvements”, which is understandable. However, lots of these small pieces of work taken together make a big difference for end users. That is why we had to clarify that even small issues are “stories” if the user can detect them. These small stories factor into velocity assessment like any other story.
The last definition concerns the meaning of a “task”:
A task is not related to a feature, but needs to be done for the success of the project.
A task could include releasing a new version of the product, or writing developer documentation.
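The decision path described above can be sketched as a small function. This is our own encoding of the tree for illustration; the question names are hypothetical labels for the decision points in the infographic, not terms from Jira.

```python
def classify_issue(
    relates_to_feature: bool,
    details_known: bool,
    feature_exists: bool,
    user_detectable: bool,
    splittable_into_stories: bool,
) -> str:
    """Walk the issue-classification decision tree described in the article."""
    if not relates_to_feature:
        return "task"          # needed for the project, but not a feature
    if not details_known:
        return "spike"         # time-limited research before implementing
    if feature_exists and not user_detectable:
        return "improvement"   # e.g. refactoring or tests: invisible to users
    if splittable_into_stories:
        return "epic"          # a high-level requirement, split into stories
    return "story"             # new, user-detectable value
```

For example, changing the label on a button relates to an existing feature but is detectable by the user, so the function returns `"story"`, while a pure refactoring of the same feature returns `"improvement"`.

```python
print(classify_issue(True, True, True, True, False))   # prints story
print(classify_issue(True, True, True, False, False))  # prints improvement
```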
These definitions plus the decision tree removed the confusion from our planning and estimation meetings, clarified velocity calculations, and helped us scope and coordinate work. We could control time spent on improvements and spikes, and improve our projections for feature delivery against time and budget. It had a major impact on estimation meetings, focusing discussion on the key decision points, especially: “Will this issue deliver value to the user?” That is always a valuable point to clarify.
Most importantly, this framework gave team members clarity when discussing their work. That had a big impact on improving morale and clearing up disagreements and misunderstandings. There are objective standards that explain how issues are classed, so now we can all literally work “from the same page” when classifying work.
The issue-definition decision tree and definitions work for us at Arcbees, and they fit the realities and constraints of our work. We believe every methodology should evolve to fit its environment. If you do adopt and adapt these standards to fit your context, please let us know what you did. We are always learning and adapting our own processes, so any new insights you want to share are deeply welcome. Until then, happy sprinting!