Experience report: testing dojo with entire dev team

Last spring I worked as a test lead/quality coach for 3 teams that did their own testing. I experimented with different techniques to help them further improve their testing skills. I wrote up this experience in March but didn’t get around to publishing it until now.

I want to share with you another way of combining testing, learning and fun.

At the Agile Testing Days in Potsdam/Berlin I accidentally ended up in a testing dojo session. For an hour, 4 pairs of testers tried their skills at a buggy piece of software and received feedback about their testing. It became immediately clear to me that this was a great opportunity to improve testing skills and I decided to try it at home with my teams.

I work as the sort of test lead who provides inspiration and encouragement for 3 teams of programmers who do their testing themselves. For our domain, web development, this works well. We have developed a testing strategy together and I also help them improve their testing skills. They are awesome, committed to continuously delivering value to our customers and eager to do a good job.

I planned a testing dojo of an hour and promised candy and laughs. The response was, to my relief, positive. I wasn’t sure that they would want to spend an hour of precious programming time doing testing. I chose an hour so that the session wouldn’t be too long and so that it would be easier to find a room and a time slot.

The preparations took a while because I needed to decide on a suitable piece of software and read up on dos and don’ts for testing dojos.

Finally, the software I picked was the one I had tested in Potsdam. It was crawling with bugs and this meant that everyone would find some. I thought this would be good for a first session to make everyone comfortable. It was also small enough to be constraining but big enough to allow people to try different areas. I also wanted to have something which no one in the teams had written themselves so that there wouldn’t be any awkward situations. This meant finding external software.

The parking rate calculator (object under test): http://adam.goucher.ca/parkcalc/index.php

Format for the dojo
We used the 3 roles described in the testing dojo documentation (http://www.testingdojo.org/tiki-index.php?page=Roles). Every 5 minutes we rotated, and we ended with a 10-minute debrief to discuss what we observed and what was good.

Setting up the environment
I created a few posters which I put up on the walls. They detailed the format of the dojo and the parking rates for everyone to see.

I carefully explained the purpose of the dojo. I put the emphasis on the fact that the purpose was to learn from each other. This means that both observers and testers learn and that we should be gentle with each other. It’s not easy to sit down at a new computer and start testing in front of everyone; the audience needs to show humility for this to work. On the other hand, active and good observers are key for learning.

How was it?
First of all, we had fun! The overwhelming bugginess of the software created a lot of reactions: surprise, confusion, entertainment, frustration and joy.

The programmers were a bit overwhelmed by the number of bugs. This is the downside of using this test object. In a normal situation I would just send the software back after 2 minutes, but this wasn’t a normal situation. I encourage splitting the debrief into two parts: “what did you think of the software?” and “what did you observe about the testing that we did?”, or even saying “let’s agree that the software is crap, but what did you observe about the testing?”.

It was clear that this was an ad hoc session. There was no plan and a lot of randomness. A few people started trying to be systematic but bugs caused them to lose focus. We tried a bit of everything, here and there.

This was a good thing though. It was interesting for the group to observe this randomness. It shows well how you can spend an hour testing without knowing much more than when you started. When answering the question “what would you do differently if you did it again?”, the group said that they would be more systematic or organized. We also tried to highlight some of the things that the participants had done successfully.
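As a side note, here is a minimal sketch of what “more systematic” could look like in practice: a handful of boundary-value checks decided on up front instead of random poking around. The calculate_rate function, the parking module, the lot name and the expected fees below are hypothetical stand-ins for illustration, not the real ParkCalc rules.

    # A sketch of boundary-value tests, assuming a hypothetical
    # parking.calculate_rate(lot, entry, exit) standing in for the
    # parking calculator's logic. Lot names, rates and expected fees
    # are made up for illustration.
    from datetime import datetime

    import pytest

    from parking import calculate_rate  # hypothetical module under test


    @pytest.mark.parametrize(
        "lot, entry, exit, expected",
        [
            # Exactly one hour: only the first-hour rate should apply.
            ("short-term", datetime(2014, 3, 1, 10, 0), datetime(2014, 3, 1, 11, 0), 2.00),
            # One minute over the hour: does the next half hour get charged?
            ("short-term", datetime(2014, 3, 1, 10, 0), datetime(2014, 3, 1, 11, 1), 3.00),
            # Exit before entry: should be rejected, not charged a negative fee.
            ("short-term", datetime(2014, 3, 1, 11, 0), datetime(2014, 3, 1, 10, 0), None),
        ],
    )
    def test_parking_rate_boundaries(lot, entry, exit, expected):
        assert calculate_rate(lot, entry, exit) == expected

Even three rows like these give the observers something concrete to discuss in the debrief: which boundaries were covered, and which were skipped.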

What now?
We will do it again. This time I want to start by creating a plan together and see what difference an organized approach makes. After that I think we’re ready for code closer to our domain, or maybe even our own code.

Conclusion
I strongly recommend doing this kind of exercise with your team or your peers. It’s fun, interesting and a great opportunity to pick up new skills.

Book club suggestion: What to do with bugs?

In my old team we had a discussion about how we should handle the bugs we found. There are a few ways to handle them:

  • fix them
  • prioritize them among other items in the backlog
  • leave them to die in a bug reporting system

Would you like to have that discussion with your team? Hold a book club (blog post club?) over lunch to get the discussion going.

I suggest reading both Elisabeth Hendrickson’s “Bugs spread disease” and Jeff Atwood’s “Not all bugs are worth fixing” and discussing them together. Talk about how the articles make you feel, what advantages you see with each approach, and what long-term effects you think they have. Also talk about how it applies to your team, and get extra credit if you devise an experiment to try in your team during the coming x weeks.

Book clubs are great for many reasons, but their main disadvantage is that books are long. People usually read half a book but rarely finish it. That’s why articles or blog posts are a better fit for a book club.

Happy reading!

From insecure teenager to appreciated consultant

A fixed mindset almost made me quit computer science

Video of the talk (in Swedish)

For me, university was hard. I guess in a way it is for most people, but the reasons for each person are different.

Here’s my reason: I was stuck in a fixed mindset. It wasn’t until I heard Linda Rising at the Agile Turku Day that I realized this had been the problem. Linda explained the work of Carol Dweck, a professor of psychology and the author of Mindset.

In short, you can either see your abilities as fixed and static, the same way as your height for example, or you can see them as muscles, able to grow. If you see them as fixed, you either have an ability or you don’t.

So this is what happened to me in a programming class: I felt like all of the others understood things faster than me, which meant their ability for learning programming was better than mine. This also meant that I would not be able to learn as much programming as them. And the worst part: there was simply nothing I could do about it!

It got so bad that I almost gave up. I thought “I apparently don’t have any talent for this. What’s the point in even trying?!”.

Now, I don’t even believe in talent. Perhaps the very best in the world have something special with regard to the topic they excel at. But then again, the very best in the world have spent an incredible amount of time practicing that particular topic…

This means I live by a growth mindset nowadays; given enough time, I could learn anything. My biggest issue now is prioritizing all of the topics I want to learn and making sure I pick one at a time to focus on. Otherwise my stress level goes up.

Yesterday I gave a talk about this at my old university, the Royal Institute of Technology in Stockholm, at an event called Future Friday. The goal of the event was to attract students to apply to the master of science program in information technology. The attendees were 18-19 years old and trying to decide what to study.

The title of my talk was “From insecure teenager to appreciated consultant”. About 100 people came to listen and 95% of them were girls. It makes me wonder: is this something we need to talk about more? Not the success in your career, but the struggle in your career. What was hard for you when you worked your way to where you are now?

Either way, I think everyone should know about the concept of a fixed or growth mindset, because everyone I talk to has a story related to it, and as long as the concept of “talent” is out there, people are going to believe that abilities are static. So, read Mindset, and spread the word!

The Leprechauns of Software Engineering

Did you know that urban legends are a part of software engineering?

Book review: The Leprechauns of Software Engineering by Laurent Bossavit

In the same way that we have heard stories of kidney theft and rats in pizzas, we are surrounded by stories and factoids in our work.

Laurent Bossavit takes on a few of these bogus facts and debunks the myths. For example, the (in)famous Waterfall paper and people’s different opinions regarding it get analyzed. What did the paper really say? Was it an instant hit?

Some topics analyzed:
+ the cone of uncertainty
+ some programmers are 10 times more productive than others
+ the waterfall method was a thing
+ finding defects early is less expensive than finding them late

Despite the populist title and simple topic, Bossavit takes a more scientific tone in his writing. In order to really grasp some of the stories, a little knowledge of statistics is required. When Bossavit debunks a myth, he has done thorough research into the scientific papers, and like a detective he follows the trail of evidence to see if there is any truth to the claims. I fear that for some readers this academic approach might not be as exciting as for others. However, it is also possible to skim through it and still get the main message.

I feel that the book is much needed. While reading it, I tried to remember when I had last heard someone make the claims being addressed. Often, it was recently, perhaps a couple of weeks prior. Once it was even from my own mouth.

Reading this book has made me a bit more sceptical when people state that something is a fact in software engineering. It has also made me rethink how I express myself, talking more about my experience rather than stating things as facts. Bossavit also ends the book with a discussion: science doesn’t seem to be appropriate for analyzing methods in software engineering, so how can we work with it instead?

I really enjoyed reading The Leprechauns of Software Engineering. For me, it was exciting to follow the trail of evidence (or lack thereof) and see the claims debunked. I also think it could be good book club material, with interesting discussions to follow.

Agile Testing Days 2013

For the second year in a row, I take a taxi from Tegel airport in Berlin late in the evening and spend half an hour being transported through the dark towards the Dorint hotel in Potsdam. There is something odd about traveling in the dark, feeling the journey in your body but not seeing it with your eyes. I also know that the journey back will be the same: same darkness, same road, same weird sensation.

Maybe that’s why I can’t help but think that there might be something to it when Scott W. Ambler kept referring to “the real world” during his keynote last year (as opposed to us in the audience). Maybe Potsdam isn’t part of the real world? Do I really know that Potsdam exists if I don’t even know what the way there looks like? Later during the conference, a perfect walk in the sunshine and the cool air, with the trees wearing their autumn colours, passing by beautiful buildings which seem to be part of a film set, makes me wonder again: is this for real?

I also know that the people I met and bonded with so quickly are people I might never meet again. A transient and intense connection, because we are there and sharing the experience, but what do we have when the experience is over?

It could be that we’re all passing through a portal to this imaginary land where people are friendly, where ideas have worked out perfectly, where teams perform the best they can and where software is cheap, fast and helpful to all of its users. Maybe Scott W. Ambler is right: the Agile Testing Days isn’t the real world.

So what?

It’s the entire point of the conference.
To be inspired.

It might sound weird, but I do want to believe that there is a place for rainbows and unicorns out there. I want to hear from people excited about successful experiments in their teams. It’s fun to know that somewhere, some technique helped a team get closer to their product owner. To be honest, I also love listening to the person talking about his 145-person team with releases once every two years (they used to release once a year, but you know, educating the users takes so much time…). Because when I talk to him about my 4-person team with weekly releases, I’m the one in unicorn land.

So that’s what you get when you travel to another place and attend a conference like this one: an exceptional experience. But what do you make of that? Your conference budget for the year is all spent on a couple of days in the fairy-tale world. What I’m trying to do to bring the magic experience back home is to pick one (just one!) of those lollipop ideas and actually try it for real.

In the real world.

It might become my tale to tell next year.

(Speaking of bonding quickly, my new record is 2 minutes from striking up a conversation in the ladies’ room to handing out my business card, hoping intensely she would email me and not just throw it away.)

Regression test analysis – second experience

In January I wrote about what you can do to start crawling out of your manual regression testing hole. Since then I’ve started working with a new team and have tried a similar approach once more.

This time, we were two teams working together to release a more modern and responsive booking site for a travel agent. We don’t use any bug tracking software; instead we had post-it notes for each issue. Other than that, the approach was the same: gather the teams, sort the bugs into categories which felt appropriate, discuss the results and pick one issue to analyze.

Grouped post-its

We sorted the bugs first into categories such as IE, mobile, appearance, infrastructure, translations, etc., and then we used dots to mark the ones which we found the most severe and/or important. After talking freely about what we saw when we looked at these bugs and how it made us feel, we selected one on which we did a root cause analysis using 5 Whys.

We finished by identifying one concrete action which would help us improve: making sure that we could trust our test environments. The action became a story card which will be prioritized among the others in the backlog.

What I learned

This exercise requires 1½-2 hours when the volume of bugs is around 50.

I felt that it was important that we wrote down one action to take. Even though the action only became “hold a meeting to discuss what we can do to improve our test environments”, it is a promise that we will have the discussion.

Selecting only one issue to analyze and only one action to take felt like a drop of water in the sea of possible improvements. But I do believe it is better to commit to one action at a time rather than doing a lot of things simultaneously. If it turns out to be a small thing to fix, it will be done quickly and we can select a new one sooner.

I do believe that you should hold retrospectives like these whenever you have done a major effort in your team. Retrospectives aren’t just for the end of a sprint, they can be used in many other situations where you need to reflect on something that has happened in order to improve continuously.

(At Citerus we believe retrospectives to be so important that we even have a separate course on holding more efficient retrospectives.)

Talk about bug hunt

In May I spoke at the Smart Bear user conference MeetUI about “Committing to quality as a team”. In particular, I used the example of the bug hunt to describe how a team can kickstart its collaboration.

You can see the talk (about 15 minutes) here: http://www.soapui.org/Community/meetui-videos.html

And the slides can be found here: http://www.prezi.com/user/ulrikamalmgren

Smart Bear is the awesome company behind the open source software SoapUI and the load testing tool LoadImpact.