Regression test analysis – second experience

In January I wrote about what you can do to start crawling out of your manual regression testing hole. Since then I've started working with a new team, and I've tried a similar approach once more.

This time, we were two teams working together to release a more modern and responsive booking site for a travel agent. We weren't using any bug tracking software; instead we had post-it notes for each issue. Other than that the approach was the same: gather the teams, sort the bugs into categories that felt appropriate, discuss the results and pick one issue to analyze.

Grouped post-its

We first sorted the bugs into categories such as IE, mobile, appearance, infrastructure and translations, and then used dots to mark the ones we found most severe and/or important. After talking freely about what we saw when we looked at these bugs and how it made us feel, we selected one to do a root cause analysis on using 5 Whys.
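
For anyone who hasn't seen the technique: 5 Whys means asking "why?" repeatedly, letting each answer become the subject of the next question, until you arrive at something that looks like a root cause. An invented chain (not the one from our session) could look like this:

  • The confirmation page showed the wrong price. Why?
  • The bug slipped through because the test environment had stale price data. Why?
  • Nobody refreshed the data after the last import. Why?
  • There is no routine for refreshing test data. Why?
  • Nobody owns the test environments. ← a candidate root cause to act on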

We finished by identifying one concrete action which would help us improve: making sure that we can trust our test environments. The action became a story card which will be prioritized among the others in the backlog.

What I learned

This exercise requires 1½-2 hours when the volume of bugs is around 50.

I felt that it was important that we wrote down one action to take. Even though the action only became "hold a meeting to discuss what we can do to improve our test environments", it is a promise that we will have the discussion.

Selecting only one issue to analyze and only one action to take felt like a drop of water in the sea of possible improvements. But I do believe it is better to commit to one action at a time than to do a lot of things simultaneously. If it turns out to be a small thing to fix, it will be done quickly and we can select a new one sooner.

I do believe that you should hold retrospectives like these whenever your team has completed a major effort. Retrospectives aren't just for the end of a sprint; they can be used in many other situations where you need to reflect on something that has happened in order to improve continuously.

(At Citerus we believe retrospectives to be so important that we even have a separate course on holding more efficient retrospectives.)

Checklist for planning

We’ve started using a checklist during sprint planning in order to keep the things that usually go wrong in mind while talking about how to solve the user stories.

Some items in the list are very specific to the product that we're developing and I'm leaving those out, but here are a few more generic ones:

  • Should a log event be generated when the function is performed?
  • Is the function affected by time zones, and if so, which time zone shall be used? (See the sketch after this list.)
  • Should the function be accessible via web services?
  • What rights are required to perform the function?
  • Can there be any concurrency issues?
  • Do we need any special test data?
  • What views are affected? Would mockups be helpful?
  • What about performance?
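
To make the time zone question concrete, here is a minimal Java sketch (the zone names are examples I picked for illustration, not anything from our product). One and the same instant reads very differently depending on which zone the function decides to use:

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TimeZoneQuestion {
    public static void main(String[] args) {
        // The same instant, rendered in two different zones. The checklist
        // question is which of these the function should store and show.
        ZonedDateTime inStockholm = ZonedDateTime.now(ZoneId.of("Europe/Stockholm"));
        ZonedDateTime inNewYork = inStockholm.withZoneSameInstant(ZoneId.of("America/New_York"));

        System.out.println("Stockholm: " + inStockholm);
        System.out.println("New York:  " + inNewYork);
    }
}
```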

My favorite is of course "Do we need any special test data?". Setting up tests can be costly and it needs to be thought of during planning. We might need to get information from customers, order hardware, or change the test environment to have data that will allow us to test the feature. But also, thinking about how the testing will work might have an impact on how the feature will be designed. Its design can make testing harder or easier.
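
As an illustration of what "special test data" can mean in code, here is a small JUnit sketch with a toy domain I made up for the example: a loyalty discount that only kicks in after ten bookings, so the test has to arrange that history first. Against a real system, arranging the equivalent data might mean collecting information from customers or changing the test environment, which is exactly why it belongs in planning.

```java
import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// A toy domain, invented for this sketch: a discount that only kicks
// in after ten bookings, so testing it requires special test data.
class BookingService {
    private final Map<String, Integer> bookingsPerCustomer = new HashMap<>();

    void book(String customer) {
        bookingsPerCustomer.merge(customer, 1, Integer::sum);
    }

    boolean qualifiesForDiscount(String customer) {
        return bookingsPerCustomer.getOrDefault(customer, 0) >= 10;
    }
}

class DiscountTest {
    private BookingService service;

    @BeforeEach
    void setUpTestData() {
        // Arrange the "special test data": a customer with a booking
        // history long enough to qualify for the discount.
        service = new BookingService();
        for (int i = 0; i < 10; i++) {
            service.book("frequent traveller");
        }
    }

    @Test
    void frequentTravellerGetsLoyaltyDiscount() {
        assertTrue(service.qualifiesForDiscount("frequent traveller"));
    }
}
```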

Words of wisdom

In my Software Testing and Metrics course we had a guest lecture this week. The lecturer was a tester at Nasdaq OMX, and he told us a little about how testing works in "real life".

He started off by reminding us how the price tag on a defect rises dramatically with time, and spent a great deal of time on the fact that a lot of defects can be found in a matter of seconds or minutes.

A few techniques enabled this. The first he brought up was pair programming. By stimulating different sides of the brain in the two programmers, pair programming has (possibly) a positive effect on the correctness of the code, both for small mistakes such as uninitialized variables and for larger ones such as design flaws. Of course, pair programming isn't a simple technique; it requires a lot of factors to click for it to be as efficient as it is on paper.

A great deal of time was spent discussing Test-Driven Development. He talked about a study done at Microsoft and IBM where two teams were given the same task: one worked with TDD and the other didn't. It turned out that the team using TDD worked 15-30% slower, BUT after a time of use their code had 40-90% fewer defects. 90%! Amazing results.
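
For readers who haven't seen TDD in practice, here is the cycle in miniature as JUnit code; the domain and class names are made up by me, not taken from the lecture. First a failing test, then just enough production code to make it pass, then refactoring with the test as a safety net.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1 (red): the test is written first, and fails because
// PriceCalculator doesn't exist (or doesn't behave correctly) yet.
class PriceCalculatorTest {
    @Test
    void childTicketIsHalfPrice() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(50, calculator.priceFor(100, true));
    }
}

// Step 2 (green): the simplest implementation that makes the test pass.
// Step 3 (refactor) can now happen with the test acting as a safety net.
class PriceCalculator {
    int priceFor(int adultPrice, boolean isChild) {
        return isChild ? adultPrice / 2 : adultPrice;
    }
}
```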

Something as simple as keeping the developers and the testers in the same room also helped uncover defects quickly. Making it easy for testers to discuss the code with the developers meant that interpretation problems could be discovered and solved, making both the software and the tests better.

And then he talked a lot about tools. How Continuous Integration could help you make sure that the code you checked in really worked and didn’t break anything else. How Static Analysis could give you warnings about possible defects. How code coverage helped you make sure that there isn’t any untested or dead code.

He finished the lecture by giving us a demo of what a test case at OMX can look like. We looked at almost 2000 lines of test code for a single requirement… Using this he stressed the importance of code conventions and test case traceability: it was imperative that each test case contained a reference to the requirement it tested.
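
The demo didn't show how OMX implements that reference, but as an assumption on my part, one lightweight way to get this kind of traceability in JUnit is a custom annotation carrying the requirement id, which a simple report script can then collect:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical annotation for requirement traceability; retained at
// runtime so a report can list which requirement each test covers.
@Retention(RetentionPolicy.RUNTIME)
@interface Requirement {
    String value();
}

class MatchingTest {
    @Test
    @Requirement("REQ-042") // made-up requirement id, for illustration only
    void bestPriceWins() {
        // The real assertions for the requirement would go here.
        assertEquals(2, 1 + 1);
    }
}
```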

I asked some questions about the size of their company. They had about 8 developers and 10 testers, and were trying different project processes without yet finding one that suited them. They made sure that testers and developers worked on the same features.

Exciting stuff for a wannabe-tester. 🙂