At work I was asked, together with a colleague, to come up with a plan on how to improve/introduce automated testing on our two systems.
It didn’t take us long to identify integration testing as the way to go. We presented that track, and we were met with questions such as “And how are you going to measure the number of defects we have?”
Well, would the quality of our systems improve if we measured the number of defects they had? Would it tell us anything more than how many faults they used to have? (Or still have, if the mentality is to tolerate bugs rather than fix them.)
Our plan was completely different.
Quality is something that you build into the system and testing is a tool that can help keep focus on it.
People are fond of TDD because the quality of the code can improve when you use it. It helps you keep track of the intent of your code, makes it more structured and hopefully keeps the complexity down.
In the same way, by using BDD and integration testing, you can keep track of the goal of the system. What business value does your code add? Who will use it? What is the desired effect?
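A BDD-style test phrases exactly those questions as an executable Given/When/Then scenario. Here is a minimal sketch in plain Python; the domain (a loyalty discount on an order) and all names are invented purely to illustrate the style, not taken from our systems:

```python
# Hypothetical business rule, stated up front so the test reads as a spec.
def apply_loyalty_discount(price, is_returning_customer):
    """Returning customers get 10% off."""
    return price * 0.9 if is_returning_customer else price

def test_returning_customer_gets_discount():
    # Given a returning customer with a 100 kr order
    price, returning = 100.0, True
    # When the discount rules are applied
    final = apply_loyalty_discount(price, returning)
    # Then they pay 90 kr
    assert final == 90.0

test_returning_customer_gets_discount()
```

The point isn’t the framework (tools like Cucumber or SpecFlow dress this up further), it’s that the test names the business value and the user, not just the code path.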
Even though they are testing techniques, they are also development tools. And an interesting part of introducing quality.
I’ve been attending a workshop with Andreas Brink from Factor 10. This is what I learned:
– most of the developing time is spent reading old code
– old code can be really annoying to read
– it’s completely ok to have code that basically looks like “showDialog(); calculatePrice()”
– Resharper is cool
– unit tests give you a safety harness when refactoring
– how to change code so that dependencies that make it harder to write tests are disarmed
– TDD can be powerful but also hard
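Two of those points — the unit-test safety harness and top-level code that reads like “showDialog(); calculatePrice()” — can be sketched together. This is a toy example with invented names, not code from the workshop:

```python
# A readable top-level function: it just says what happens, in order.
def checkout(quantity, unit_price):
    total = calculate_price(quantity, unit_price)
    return total

# The "ugly" version we might want to refactor later.
def calculate_price(quantity, unit_price):
    total = 0
    for _ in range(quantity):
        total = total + unit_price
    return total

# The safety harness: pin down current behaviour before touching anything.
def test_calculate_price():
    assert calculate_price(3, 25) == 75
    assert calculate_price(0, 25) == 0

test_calculate_price()
```

As long as those assertions keep passing, the body of `calculate_price` can be rewritten (say, to `quantity * unit_price`) without fear — that is what the harness buys you.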
I tried out Test Driven Development (TDD) this spring in a small course project. I read the famous “Test-Driven Development: By Example” while doing it and I felt pretty confident that I at least grasped the gist of it. Unfortunately I lack some knowledge in Object Oriented Design and some concepts were unfamiliar to me.
However, I loved it. It allowed me as a programmer with little experience to produce code that I was confident about. I could later refactor it and still be sure that it worked. It wasn’t all that different from how I usually work. I’m never the type who just throws myself into an assignment; I always sit down to think and research first. But here was a technique that allowed me to do that and produce code at the same time. Thinking and coding, awesome!
After my guest lecture the other day, I discussed it with my boyfriend. I brought up that study that showed that TDD can reduce the number of defects found by about 40-90%. My boyfriend was a bit sceptical, as he often is with “new revolutionary programming techniques”. He tried TDD and, like me, he enjoyed it. He says that he uses it sometimes in his work and that it helps him structure his thoughts. He points out that there are many factors we don’t know about — was this a highly motivated team, for example?
We discussed it further and talked about why it works and where it fails. He brought up the fact that even though you might start with the best intentions, after a while the code gets too complicated. Of course, at that point it’s probably bad code, he adds, but either way you don’t have time to go back and do it right by then.
We agreed that it does help you build more component-oriented code. Code that is easier to test and maintain. Code that probably isn’t spaghetti code.
I think it’s the time factor that makes it hard. Even in my small school-projects I know how the quality of the code decreases the closer to deadline you come. Would I still be writing tests first the day before delivery? I’m not sure.
On my Software Testing and Metrics course we had a guest lecture this week. The speaker was a tester at Nasdaq OMX and he told us a little about how testing works in “real life”.
He started off by reminding us how the price tag on a defect rises dramatically with time, and spent a great deal of time on the fact that a lot of defects can be found in a matter of seconds or minutes.
A few techniques enabled this. The first he brought up was pair programming. By stimulating different sides of the brain in the two programmers, pair programming has (possibly) a positive effect on the correctness of the code — both for small mistakes such as uninitialized variables and larger ones such as design flaws. Of course, pair programming isn’t a simple technique; it requires a lot of factors to click for it to be as efficient as it is on paper.
A great deal of time was spent discussing Test Driven Development. He talked about a study done at Microsoft and IBM where two teams were given the same task; one worked with TDD and the other didn’t. It turned out that the team using TDD worked 15-30% slower BUT, after a period of use, their code turned out to have 40-90% fewer defects. 90%! Amazing results.
Also, something as simple as keeping the developers and the testers in the same room helped uncover defects quickly. Making it easy for testers to discuss the code with the developers meant that interpretation problems could be discovered and solved, making both the software and the tests better.
And then he talked a lot about tools. How Continuous Integration could help you make sure that the code you checked in really worked and didn’t break anything else. How Static Analysis could give you warnings about possible defects. How code coverage helped you make sure that there isn’t any untested or dead code.
He finished the lecture by giving us a demo of what a test case at OMX could look like. We looked at almost 2000 lines of test code for a single requirement… Using this he stressed the importance of using code conventions and test case traceability. It was imperative that each test case contained a reference to the requirement tested.
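A lightweight way to get that kind of traceability is to record the requirement ID on every test. This is only a sketch of the idea — the requirement ID `REQ-1234`, the rule, and `validate_order_size` are all hypothetical, not anything from the OMX demo:

```python
# Toy rule standing in for real matching-engine logic.
def validate_order_size(size, minimum):
    return size >= minimum

def test_order_rejected_below_minimum():
    """REQ-1234: orders below the minimum size must be rejected."""
    assert validate_order_size(5, minimum=10) is False

def test_order_accepted_at_minimum():
    """REQ-1234: orders at or above the minimum are accepted."""
    assert validate_order_size(10, minimum=10) is True

# Because the ID sits in the docstring, a simple script can map every
# test back to the requirement it verifies.
for t in (test_order_rejected_below_minimum, test_order_accepted_at_minimum):
    t()
    print(t.__doc__.split(":")[0])  # prints the requirement ID
```

With a convention like this, a failing test immediately tells you which requirement is at risk, and a coverage report can be grouped per requirement instead of per file.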
I asked some questions about the size of their company. They had about 8 developers and 10 testers, and were still trying out different project processes without having found one that suited them. They made sure that testers and developers worked on the same features.
Exciting stuff for a wannabe-tester. 🙂