By holding a retrospective about regression testing, we improved how we work with quality during the sprints. Analyzing and discussing the bugs we had found during the regression let us identify some things we could easily improve with test automation, some process improvements, and some patterns in the bugs we had missed.
I believe that all of us who have ever had to do manual regression testing can relate to the agony of it. I’ve seen different flavours, but in each case it has felt like monkey work: checking things that could be checked by a computer. Nevertheless, this is the reality for a lot of teams, and change can be slow.
In the meantime, this is no excuse not to improve our ways of working, so we decided to hold a regression test retrospective. In particular, I wanted to do a root cause analysis for one or more bugs. The goal was twofold: to work more efficiently during regression and during the sprints.
I asked the Agile Testing mailing list for advice on how to do a root cause analysis and got some good ideas from there. Here is how we proceeded.
Preparation
In preparation for the retro, we used Jira to print out all of the 50 or so bugs we had found during this period, each on its own piece of paper. We used a conference room with a big table and removed the chairs. During the regression test period, we had asked everyone who submitted a bug to mark it with “test specification” if it had been found while following the steps of a test specification, or with “exploratory” if it had been found during exploratory testing, so that we would know which technique had been used to find each bug.
Sorting by time
I asked the participants, programmers and testers from our teams, to sort the bugs by date. We then had a timeline showing the number of bugs per day. We decided to mark the major bugs so that we could distinguish them more easily. We talked a bit about what we saw and realised that there was nothing surprising: on the days when a lot of people were involved in testing, we found more bugs. The major bugs were evenly distributed in time. This could seem like a waste of time since there was nothing surprising, but I don’t regret spending time on it, because there could have been something interesting there. Had we found all of our major bugs on the last day, for example, we might have wanted to do something about that.
We talked a bit about how we would handle this in the future, when the sales process forces us to release even less often and thus increases the time between regression test periods, and thought of trying bug hunts a few times.
Sorting by category : old/new functionality
After a break, I asked the participants to sort the bugs according to categories. The categories we decided on were “bugs in functionality which is new with this release” and “bugs in old functionality”.
We decided to keep the timeline and put the new bugs at the top and the old bugs at the bottom. We could tell that about two thirds of the bugs were related to old functionality.
We then decided to sort those into “old functionality affected by new functionality” and “just plain old bugs”.
Here, things got interesting. We could see, for example, that we had bugs in new functionality that had been found using a test spec. Did this mean that the test spec hadn’t been run when the functionality was implemented, or that it had already become obsolete?
We could also reflect on the fact that we already knew about some of the old bugs, but none of us had bothered to file them until a new tester joined the team.
During the discussions we identified that we wanted to do more pairing in the teams, that we wanted to brainstorm a test plan together at the start of each user story and that we needed to talk more about the value of our test specs.
Root cause analysis of one bug
Finally, I wanted to do a root cause analysis on one of the bugs to see if there was any specific action we could take.
I chose the “5 Whys” to do the analysis, mostly just because I had to pick something. This was my first time doing any kind of root cause analysis.
We looked first at why the bug had been introduced, and then at why we hadn’t found it earlier in testing. We realized that this was a very complex part to test because of the immense number of possible combinations. The programmers said that this particular bug could have been found using unit tests, and we decided that we would implement those. This made me very happy, because this part of the application in particular is one that feels very tedious to test manually.
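The post doesn’t describe the actual feature, but as a hypothetical sketch, this is the kind of combinatorial check that is tedious to click through manually and cheap as a unit test: a loop over every combination of inputs against a property that must always hold. The `apply_discount` function and its rules are made up for illustration.

```python
# Hypothetical example: a toy pricing rule with several boolean inputs,
# standing in for a feature with many possible combinations.
from itertools import product

def apply_discount(price, is_member, has_coupon):
    """Toy rule: members get 10% off, coupons a further 5% off."""
    if is_member:
        price *= 0.90
    if has_coupon:
        price *= 0.95
    return round(price, 2)

def test_discount_never_increases_price():
    # Exercise every combination of the boolean inputs in one loop,
    # instead of checking each case by hand in the UI.
    for is_member, has_coupon in product([True, False], repeat=2):
        assert apply_discount(100.0, is_member, has_coupon) <= 100.0
```

A test like this runs in milliseconds on every build, which is exactly where a combinatorial feature belongs instead of in a manual regression pass.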
Conclusion
For me, doing this kind of workshop meant that we worked together as a team to talk about testing. We identified at least one area where we could automate tests. I also believe that we testers realized we could get some help from our programmers with these kinds of problems, and that the programmers better understood that we need their help.
Talking about a regression was also a way to discuss how quality work and test work can be done during the sprints, in order not to create these bugs in the first place, or to catch them earlier.
So, I can recommend doing this kind of workshop even if (or especially when?) you’re stuck in a low-release-frequency, manual-regression-only organisation, because there are benefits to talking about testing together.
If you’re interested in testing in an agile context, you should check out my upcoming full-day class on agile testing. Only in Swedish, for now.
If we say that trees have “roots” (plural), why do we talk about “root cause analysis” (singular)?
Hm… It seems more likely to me that there isn’t a single cause to a problem.
And when I read about the “5 Whys”, there was some criticism of the fact that it is possible to come up with different roots for the same problem.
Maybe there is a point in looking at one root at a time, and that’s why we use the singular?
Maybe we should say route cause analysis? 😉