Use Brain Writing to make your retrospectives more equal

If you are struggling with quiet team members who never say a word during retrospectives, or, on the contrary, loud team members who don’t know when to let others speak, then brain writing could be brilliant for you!

Brain writing is a technique for brainstorming in a group. It can be used during the “generate insights” phase of the retrospective.

Everyone starts with a stack of blank papers and a pen. They get 3 minutes to write down ideas, one per paper. When the timer rings, everyone passes their papers to the right and receives papers from the person to the left.

Now you have new ideas to use for inspiration. You can either add to a paper, if the idea on it sparks related ones, or start a new paper if you come up with something entirely different.

The exercise is over when the first papers come back to their original author. Now put the papers up on a wall or lay them out on the table, have everyone read through them, and be awed by the number of ideas generated!

We used this template for writing down ideas:
“To be better at __________
We need to ________________
Every _____________________”

For example we had:
“To be better at knowledge sharing
we need to do more mob programming
every now and then”

I love brain writing because:

  • it is a silent activity, so everyone is on equal terms: the loud person doesn’t get too much space and the quiet person gets their ideas heard as well
  • building upon other people’s ideas is inspiring and creates a lot of “yes, and …!” moments
  • the template made us focus on the purpose of suggested actions

Finish with a round of silent prioritization for “Decide what to do”, pick your top 1-3 and you’re ready to start working on some inspiring improvements!


Experience report: testing dojo with entire dev team

Last spring I worked as a test lead/quality coach for 3 teams that did their own testing. I experimented with different techniques to help them further improve their testing skills. I wrote this experience report in March but didn’t get around to publishing it until now.

I want to share with you another way of combining testing, learning and fun.

At the Agile Testing Days in Potsdam/Berlin I accidentally ended up in a testing dojo session. For an hour, 4 pairs of testers tried their skills at a buggy piece of software and received feedback about their testing. It became immediately clear to me that this was a great opportunity to improve testing skills and I decided to try it at home with my teams.

I work as the sort of test lead who provides inspiration and encouragement for 3 teams of programmers who do their testing themselves. For our domain, web development, this works well. We have developed a testing strategy together and I also help them improve their testing skills. They are awesome, committed to continuously delivering value to our customers and eager to do a good job.

I planned an hour-long testing dojo and promised candy and laughs. The response was, to my relief, positive; I wasn’t sure that they would want to spend an hour of precious programming time doing testing. I chose an hour so that the session wouldn’t be too long and so that it would be easier to find a room and a time slot.

The preparations took a while because I needed to decide on a suitable piece of software and read up on dos and don’ts for testing dojos.

Finally, the software I picked was the one I had tested in Potsdam. It was crawling with bugs and this meant that everyone would find some. I thought this would be good for a first session to make everyone comfortable. It was also small enough to be constraining but big enough to allow people to try different areas. I also wanted to have something which no one in the teams had written themselves so that there wouldn’t be any awkward situations. This meant finding external software.

The parking rate calculator – object under test http://adam.goucher.ca/parkcalc/index.php
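If you want to rehearse against the same test object, a simple oracle can help a pair cross-check the calculator’s answers. Here is a minimal sketch; the rates and rules in it are placeholders I made up, so read the actual rates off the parkcalc page before trusting the comparison:

```python
# A sketch of a simple oracle for cross-checking the parking calculator.
# The rates and rules below are placeholders, NOT the real ones: read
# the actual rates off the parkcalc page before comparing its output.
import math

HOURLY_RATE = 2.00   # placeholder: fee per started hour
DAILY_MAX = 13.00    # placeholder: cap per started calendar day

def expected_short_term_fee(minutes: int) -> float:
    """Fee for a short-term stay: charged per started hour, capped per day."""
    full_days, remainder = divmod(minutes, 24 * 60)
    started_hours = math.ceil(remainder / 60)
    return full_days * DAILY_MAX + min(started_hours * HOURLY_RATE, DAILY_MAX)

# Compare a few durations against what the calculator shows:
for minutes in (30, 61, 24 * 60, 24 * 60 + 1):
    print(f"{minutes:5d} min -> expected ${expected_short_term_fee(minutes):.2f}")
```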

Format for the dojo
We had the 3 roles described in the testing dojo document (http://www.testingdojo.org/tiki-index.php?page=Roles). Every 5 minutes we rotated. We ended with a 10 minute debrief to discuss what we observed and what was good.
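If you want to prepare the rotation in advance, here is a throwaway sketch that prints a 5-minute schedule. The names are invented, and “driver”/“navigator” are my shorthand for the testing pair rather than the exact role names from testingdojo.org:

```python
# A throwaway helper that prints a 5-minute rotation schedule.
# Names are invented; "driver"/"navigator" stand in for the testing
# pair, with everyone else acting as observers.
participants = ["Ann", "Ben", "Cia", "Dan", "Eve"]

n = len(participants)
for slot in range(n):
    driver = participants[slot % n]
    navigator = participants[(slot + 1) % n]
    start = slot * 5
    print(f"{start:02d}-{start + 5:02d} min: driver={driver}, "
          f"navigator={navigator}, the rest observe")
```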

Setting up the environment
I created a few posters which I put up on the walls. They detailed the format of the dojo and the parking rates for everyone to see.

I carefully explained the purpose of the dojo, emphasizing that it was to learn from each other. Both observers and testers learn, so we should be gentle with each other. It’s not easy to sit down at a new computer and start testing in front of everyone; the audience needs humility for this. And on the other hand, active and good observers are key for learning.

How was it?
First of all, we had fun! The overwhelming bugginess of the software created a lot of reactions: surprise, confusion, entertainment, frustration and joy.

The programmers were a bit overwhelmed by the amount of bugs. This is the downside of using this test object. In a normal situation I would just send it back after 2 minutes, but this isn’t a normal situation. I encourage splitting the debrief into two parts: “what did you think of the software?” and “what did you observe about the testing that we did?” or even say “let’s agree that the software is crap, but what did you observe about the testing?”.

It was clear that this was an ad hoc session. There was no plan and a lot of randomness. A few people started trying to be systematic but bugs caused them to lose focus. We tried a bit of everything, here and there.

This was a good thing though. It was interesting for the group to observe this randomness: it shows how easily you can spend an hour testing and still not know much more than when you started. When answering the question “what would you do differently if you did it again?” the group said they would be more systematic or organized. We also tried to highlight some of the things that the participants had done successfully.

What now?
We will do it again. This time I want to start with creating a plan together and see the difference in an organized approach. After this I think we’re ready for code closer to our domain or maybe even our own code.

Conclusion
I strongly recommend doing this kind of exercise with your team or your peers. It’s fun, interesting and a great opportunity to pick up new skills.

Book club suggestion: What to do with bugs?

In my old team we had a discussion about how we should handle the bugs we found. There are a few ways to handle them:

  • fix them
  • prioritize them among other items in the backlog
  • leave them to die in a bug reporting system

Would you like to have that discussion with your team? Hold a book club (blog post club?) over lunch to get the discussion going.

I suggest reading both Elisabeth Hendrickson’s “Bugs spread disease” and Jeff Atwood’s “Not all bugs are worth fixing” and discussing them together. Talk about how the articles make you feel, what advantages you see with each approach, and what long term effects you think they have. Also talk about how it applies to your team, and get extra credit if you devise an experiment to try in your team during the coming x weeks.

Book clubs are great for many reasons, but their main disadvantage is that books are long. People usually read half a book but rarely finish it. That’s why articles or blog posts are a better fit for a book club.

Happy reading!


Regression test analysis – second experience

In January I wrote about what you can do to start crawling out of your manual regression testing hole. Since then I have started working with a new team and have tried a similar approach once more.

This time, we were two teams working together to release a more modern and responsive booking site for a travel agent. We weren’t using any bug tracking software; instead we had a post-it note for each issue. Other than that the approach was the same: gather the teams, sort the bugs into categories that feel appropriate, discuss the results and pick one issue to analyze.

Grouped post-its

We sorted the bugs first into categories such as IE, mobile, appearance, infrastructure, translations, etc., and then we used dots to mark the ones we found to be the most severe and/or important. After talking freely about what we saw when we looked at these bugs and how it made us feel, we selected one to do a root cause analysis on using 5 Whys.

We finished by identifying one concrete action that would help us improve: making sure that we could trust our test environments. The action became a story card which will be prioritized among the others in the backlog.

What I learned

This exercise requires 1½-2 hours when the volume of bugs is around 50.

I felt that it was important that we wrote down one action to take. Even though the action only became “hold a meeting to discuss what we can do to improve our test environments”, it is a promise that we will have the discussion.

Selecting only one issue to analyze and only one action to take felt like a drop of water in the sea of possible improvements. But I do believe it is better to commit to one action at a time than to do a lot of things simultaneously. If it turns out to be a small thing to fix, it will be done quickly and we can select a new one sooner.

I do believe that you should hold retrospectives like these whenever you have done a major effort in your team. Retrospectives aren’t just for the end of a sprint, they can be used in many other situations where you need to reflect on something that has happened in order to improve continuously.

(At Citerus we believe retrospectives are so important that we even have a separate course on holding more efficient retrospectives.)

Checklist for planning

We’ve started using a checklist during sprint planning to keep the things that usually go wrong in mind while talking about how to solve the user stories.

Some items in the list are very specific to the product we’re developing and I’m leaving those out, but here are a few more generic ones:

  • Should a log event be generated when the function is performed?
  • Is the function affected by time zones, and if so, which time zone shall be used?
  • Should the function be accessible via web services?
  • What rights are required to perform the function?
  • Can there be any concurrency issues?
  • Do we need any special test data?
  • What views are affected? Would mockups be helpful?
  • What about performance?

My favorite is of course “Do we need any special test data?”. Setting up tests can be costly and needs to be thought of during planning. We might need to get information from customers, order hardware, or change the test environment to have data that will allow us to test the feature. But also, thinking about how the testing will work might have an impact on how the feature is designed: its design can make testing harder or easier.
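To make the test data question concrete, here is a minimal sketch of how a planning decision like “we need a booking that crosses a day boundary” could turn into a reusable pytest fixture. Everything in it (the Booking class and its fields) is invented for illustration, not taken from any real product:

```python
# A minimal sketch of planning-driven test data, using pytest.
# Booking and its fields are hypothetical, not from any real product.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

import pytest

@dataclass
class Booking:
    customer: str
    starts_at: datetime  # stored in UTC, displayed in the user's time zone

@pytest.fixture
def cross_midnight_booking():
    """Special test data: a booking that starts just before midnight UTC,
    so it falls on different calendar days in different time zones."""
    return Booking(
        customer="test-customer",
        starts_at=datetime(2015, 6, 30, 23, 30, tzinfo=timezone.utc),
    )

def test_booking_day_depends_on_time_zone(cross_midnight_booking):
    cest = timezone(timedelta(hours=2))  # fixed offset stands in for CEST
    local = cross_midnight_booking.starts_at.astimezone(cest)
    assert local.day == 1  # July 1 locally, even though it is June 30 in UTC
```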

Words of wisdom

This week we had a guest lecture on my Software Testing and Metrics course. The lecturer was a tester at Nasdaq OMX and he told us a little about how testing works in “real life”.

He started off by reminding us how the price tag on a defect rises dramatically with time, and spent a great deal of time on the fact that a lot of defects can be found in a matter of seconds or minutes.

A few techniques enabled this. The first he brought up was pair programming. By stimulating different sides of the brain in the two programmers, pair programming has (possibly) a positive effect on the correctness of the code, both for small mistakes such as uninitialized variables and for larger ones such as design flaws. Of course, pair programming isn’t a simple technique; a lot of factors have to click for it to be as efficient as it is on paper.

A great deal of time was spent discussing test-driven development (TDD). He talked about a study done at Microsoft and IBM where two teams were given the same task; one worked test-first and the other didn’t. It turned out that the team using TDD worked 15-30% slower BUT, after a period of use, their code turned out to have 40-90% fewer defects. 90%! Amazing results.
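For anyone who hasn’t seen TDD in action, here is a tiny invented illustration of the rhythm: the tests are written first and fail (“red”), then just enough code is added to make them pass (“green”). The parking-fee rules are made up for the example:

```python
# An invented illustration of the TDD rhythm. In practice the two tests
# are written first and fail; the code below them is then added to pass.

def test_short_stays_are_free():
    assert parking_fee(minutes=20) == 0

def test_fee_is_charged_per_started_hour():
    assert parking_fee(minutes=90) == 2 * HOURLY_RATE  # 90 min = 2 started hours

# The simplest implementation that makes both tests pass:
HOURLY_RATE = 30

def parking_fee(minutes: int) -> int:
    if minutes <= 30:  # made-up rule: the first half hour is free
        return 0
    started_hours = -(-minutes // 60)  # ceiling division
    return started_hours * HOURLY_RATE
```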

Something as simple as keeping the developers and the testers in the same room also helped uncover defects quickly. Making it easy for testers to discuss the code with the developers meant that interpretation problems could be discovered and solved, making both the software and the tests better.

And then he talked a lot about tools. How Continuous Integration could help you make sure that the code you checked in really worked and didn’t break anything else. How Static Analysis could give you warnings about possible defects. How code coverage helped you make sure that there isn’t any untested or dead code.

He finished the lecture by giving us a demo of what a test case at OMX could look like. We looked at almost 2000 lines of test code for a single requirement… Using this he stressed the importance of code conventions and test case traceability. It was imperative that each test case contained a reference to the requirement it tested.

I asked some questions about the size of their company. They had about 8 developers and 10 testers and were trying different project processes without yet finding one that suited them. They made sure that testers and developers worked on the same features.

Exciting stuff for a wannabe-tester. 🙂