One of the action points from our regression test retrospective was to hold a Bug Hunt. We wanted to find some bugs, enhance the team spirit and learn from each other.
The results were mixed. The event itself was really fun, but the bugs that we found weren’t particularly interesting. Almost half of them were rejected because we weren’t able to reproduce them or they weren’t real bugs (working as intended). Of those that made it into Jira, most were trivial or “cosmetic”. As a team activity, though, it was a lot of fun!
We held a retrospective afterwards where we first collected the different strategies that each pair had used, making an inventory on the whiteboard. We then talked a bit about the hunt in general. Everyone seemed to have enjoyed it, and the discussion soon turned to “when we do it again then…”, which is nice.
We then grouped the bugs (printed out on sheets of paper) into categories and read them out loud to each other. This led us to discuss why there were so many trivial bugs: people felt stressed by the lack of time. We discussed point systems, allowing more time, and so on.
We then opened up for a freer discussion, which I recorded as a mind map on the whiteboard.
- One interesting semi-related proposal was to arrange cross team testing at the end of the sprint. Since we are two teams, this would increase our knowledge of the other team’s features and also help us look at things from another perspective.
- Our new Scrum Master was of the opinion that the competition part of the activity had to go. At first I disagreed, but thinking about it, there could be ways to work together instead. For example, we could collaborate as one group to try to find 4 major bugs in one hour. What do you think?
There were 7 pairs and two judges during the hunt. The pairs were formed to get a good mix of people from the two teams, scrum masters and support. The stakeholders were invited but did not participate.
The event took 1½ hours in total, including the introduction, jury deliberation and reward ceremony.
The hunt went on for 1 hour.
We had 44 bugs posted on the whiteboard at the end of the hunt.
This resulted in 26 bugs entered into Jira (we expect some of these to be rejected when we start working on them).
The rejected bugs did not make it into Jira because:
– we were not able to reproduce them
– functionality was working as intended
2 thoughts on “Bug hunt experiment”
Nice report. What I noticed at the XING test competition is that it was not only about finding bugs but also about reporting usability problems, since the participants didn’t know the product beforehand.
Did you repeat these bug hunts? If so, how was the participation/motivation? At our second test competition we already had fewer participants than at our first, and from fewer departments.
I wonder whether something competition-like is really suitable for building long-term awareness of quality, or whether it’s only short-term fun without a lasting effect.
I haven’t repeated it with those teams since I got a new assignment. In our case participation was mandatory, so we wouldn’t have seen a drop-off in participants. But I did wonder whether competing against each other was the best approach. Maybe competing as a team against your previous result would be better.
My experience is that it increases awareness of testing on the programmers’ side. They got curious about how to do it well. I believe sharing strategies also brings a bit of inspiration and new ideas that you want to try.
At another client (without the competition part), one guy was rather unimpressed at having to spend an hour testing, and let us know as much. Afterwards he said that this was the most fun he had ever had testing and that we should do it for every release.
I think it’s time to try it again with this new client and see how it turns out. I believe it will have long-term effects, but I can’t be sure yet.
Let’s hope your third time is a great success as well!