Use Brain Writing to make your retrospectives more equal

If you are struggling with quiet team members who never say a word during retrospectives, or, on the contrary, loud team members who don’t know when to let others speak once in a while, then brain writing could be brilliant for you!

Brain writing is a technique for brainstorming in a group. It can be used during the “generate insight” phase of the retrospective.

Everyone starts with a stack of blank papers and a pen. They get 3 minutes to write down ideas, one per paper. When the timer rings, everyone passes their papers to the right and receives papers from the person to the left.

Now you have new ideas to use for inspiration. You can either add to a paper, if its idea sparks new related ones, or start a new paper if you come up with something entirely different.

The exercise is over when the first papers come back to their original author. Now put the papers up on a wall or lay them out on the table, have everyone read through them, and be awed by the number of ideas generated!

We used this template for writing down ideas:
“To be better at __________
We need to ________________
Every _____________________”

For example, we had:
“To be better at knowledge sharing
we need to do more mob programming
every now and then”

I love brain writing because:

  • it is a silent activity, so everyone is on equal terms: the loud person doesn’t get too much space and the quiet person gets their ideas heard as well
  • building upon other people’s ideas is inspiring and creates a lot of “yes, and …!” moments
  • the template made us focus on the purpose of the suggested actions.

Finish with a round of silent prioritization for “Decide what to do”, pick your top 1-3 and you’re ready to start working on some inspiring improvements!


Experience report: testing dojo with entire dev team

Last spring I worked as a test lead/quality coach for 3 teams that did their own testing. I experimented with different techniques to help them further improve their testing skills. I wrote up this experience report in March but didn’t get around to publishing it then, so I’m doing that now.

I want to share with you another way of combining testing, learning and fun.

At the Agile Testing Days in Potsdam/Berlin I accidentally ended up in a testing dojo session. For an hour, 4 pairs of testers tried their skills at a buggy piece of software and received feedback about their testing. It became immediately clear to me that this was a great opportunity to improve testing skills and I decided to try it at home with my teams.

I work as the sort of test lead who provides inspiration and encouragement for 3 teams of programmers who do their testing themselves. For our domain, web development, this works well. We have developed a testing strategy together and I also help them improve their testing skills. They are awesome, committed to continuously delivering value to our customers and eager to do a good job.

I planned a testing dojo of an hour and promised candy and laughs. The response was, to my relief, positive. I wasn’t sure that they would want to spend an hour of precious programming time doing testing. I chose an hour so that it wouldn’t be too long and so that it would be easier to find a room and a time slot.

The preparations took a while because I needed to decide on a suitable piece of software and read up on dos and don’ts for testing dojos.

Finally, the software I picked was the one I had tested in Potsdam. It was crawling with bugs and this meant that everyone would find some. I thought this would be good for a first session to make everyone comfortable. It was also small enough to be constraining but big enough to allow people to try different areas. I also wanted to have something which no one in the teams had written themselves so that there wouldn’t be any awkward situations. This meant finding external software.

The parking rate calculator – object under test: http://adam.goucher.ca/parkcalc/index.php

Format for the dojo
We had the 3 roles described in the testing dojo document (http://www.testingdojo.org/tiki-index.php?page=Roles). Every 5 minutes we rotated. We ended with a 10-minute debrief to discuss what we observed and what was good.

Setting up the environment
I created a few posters which I put up on the walls. They detailed the format of the dojo and the parking rates for everyone to see.
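
If you want to give the pairs something concrete to compare against, it can also help to prepare a small “expected fee” oracle before the session. The sketch below is only an illustration with made-up rates (the real rates are the ones on the ParkCalc page and our posters), not the calculator’s actual pricing logic:

    # A minimal oracle sketch for the dojo: given an entry and exit time, compute
    # the short-term parking fee we would *expect*, so a tester can compare it
    # with what the calculator reports. The rates are hypothetical placeholders:
    # $2 per started half hour, capped at $24 per day.
    from datetime import datetime


    def expected_short_term_fee(entry_time: datetime, exit_time: datetime) -> float:
        if exit_time < entry_time:
            raise ValueError("exit_time must not be before entry_time")
        minutes = (exit_time - entry_time).total_seconds() / 60
        full_days, remainder = divmod(minutes, 24 * 60)
        started_half_hours = -(-remainder // 30)  # round up to started half hours
        return full_days * 24.0 + min(started_half_hours * 2.0, 24.0)


    # Example: parking from 08:00 to 10:15 is 5 started half hours * $2 = $10.
    print(expected_short_term_fee(datetime(2013, 5, 1, 8, 0),
                                  datetime(2013, 5, 1, 10, 15)))  # 10.0

Whether or not you automate anything, agreeing on an expected result up front gives the debrief something to anchor on.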

I carefully explained the purpose of the dojo, emphasizing that the point was to learn from each other. Both observers and testers learn, and we should be gentle with each other. It’s not easy to sit down at a new computer and start testing in front of everyone, so the audience needs to show some humility. On the other hand, active and engaged observers are key for learning.

How was it?
First of all, we had fun! The overwhelming bugginess of the software created a lot of reactions: surprise, confusion, entertainment, frustration and joy.

The programmers were a bit overwhelmed by the number of bugs. This is the downside of using this test object. In a normal situation I would just send it back after 2 minutes, but this wasn’t a normal situation. I recommend splitting the debrief into two parts: “what did you think of the software?” and “what did you observe about the testing that we did?”, or even saying “let’s agree that the software is crap, but what did you observe about the testing?”.

It was clear that this was an ad hoc session. There was no plan and a lot of randomness. A few people started trying to be systematic but bugs caused them to lose focus. We tried a bit of everything, here and there.

This was a good thing though. For the group to observe this randomness was interesting. It shows how you can spend an hour testing without knowing much more than when you started. When answering the question “what would you do differently if you did it again?”, the group answered that they would be more systematic or organized. We also tried to highlight some of the things that the participants had done successfully.

What now?
We will do it again. This time I want to start with creating a plan together and see the difference in an organized approach. After this I think we’re ready for code closer to our domain or maybe even our own code.

Conclusion
I strongly recommend doing this kind of exercise with your team or your peers. It’s fun, interesting and a great opportunity to pick up new skills.

Book club suggestion: What to do with bugs?

In my old team we had a discussion about how we should handle the bugs we found. There are a few ways to handle them:

  • fix them
  • prioritize them among other items in the backlog
  • leave them to die in a bug reporting system

Would you like to have that discussion with your team? Hold a book club (blog post club?) over lunch to get the discussion going.

I suggest reading both Elisabeth Hendrickson’s “Bugs spread disease” and Jeff Atwood’s “Not all bugs are worth fixing” and discussing them together. Talk about how the articles make you feel, what advantages you see with each approach, and what long-term effects you think they have. Also talk about how it applies to your team, and get extra credit if you devise an experiment to try in your team during the coming x weeks.

Book clubs are great for many reasons, but their main disadvantage is that books are long. People usually read half a book but rarely finish it. That’s why articles or blog posts are a better fit for a book club.

Happy reading!

Talk about bug hunt

In May I spoke at the Smart Bear user conference MeetUI about “Committing to quality as a team”. In particular, I took the example of the bug hunt to describe how a team can kickstart its collaboration.

You can see the talk (about 15 minutes) here: http://www.soapui.org/Community/meetui-videos.html

And the slides can be found here: http://www.prezi.com/user/ulrikamalmgren

Smart Bear is the awesome company behind the open source software SoapUI and the load testing tool LoadImpact.

Checklist for planning

We’ve started using a checklist during sprint planning to keep the things that usually go wrong in mind while we talk about how to solve the user stories.

Some items in the list are very specific to the product that we’re developing, so I’m leaving those out, but here are a few more generic ones:

  • Should a log event be generated when the function is performed?
  • Is the function affected by time zones, and if so, which time zone shall be used?
  • Should the function be accessible via web services?
  • What rights are required to perform the function?
  • Can there be any concurrency issues?
  • Do we need any special test data?
  • What views are affected? Would mockups be helpful?
  • What about performance?

My favorite is of course “Do we need any special test data?”. Setting up tests can be costly and it needs to be thought of during planning. We might need to get information from customers, order hardware, or change the test environment to have data that will allow us to test the feature. But also, thinking about how the testing will work might have an impact on how the feature will be designed. Its design can make testing harder or easier.
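
To make that last point concrete, here is a small sketch of how design affects testability for the time-zone item on the checklist. The function and scenario are hypothetical, not from our product: if the code reads the server’s clock and local time zone directly, testing means fiddling with the environment, but passing them in as parameters turns the same question into an ordinary test.

    # Hypothetical example: which calendar date does a daily report belong to,
    # as seen in the user's time zone? Injecting "now" and the user's UTC offset
    # makes the "which time zone shall be used?" question easy to cover with tests.
    from datetime import datetime, timedelta, timezone


    def daily_report_date(now_utc: datetime, user_utc_offset_hours: int) -> str:
        user_zone = timezone(timedelta(hours=user_utc_offset_hours))
        return now_utc.astimezone(user_zone).date().isoformat()


    # 23:30 UTC on New Year's Eve is already January 1st for a user at UTC+2 ...
    assert daily_report_date(datetime(2013, 12, 31, 23, 30, tzinfo=timezone.utc), +2) == "2014-01-01"
    # ... but still December 31st for a user at UTC-5.
    assert daily_report_date(datetime(2013, 12, 31, 23, 30, tzinfo=timezone.utc), -5) == "2013-12-31"

Asking these questions during planning is cheap; retrofitting this kind of testability afterwards rarely is.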