At work I was asked, together with a colleague, to come up with a plan on how to improve/introduce automated testing on our two systems.
It didn’t take us long to identify integration testing as the way to go. We presented that approach, and we were met with questions such as “And how are you going to measure the amount of defects we have?”
Well, would the quality of our systems improve if we measured the number of defects they had? Would it tell us anything more than the number of faults they used to have? (Or still have, if the mentality is to tolerate bugs rather than fix them.)
Our plan was completely different.
Quality is something that you build into the system and testing is a tool that can help keep focus on it.
People are fond of TDD because the quality of the code can improve when you use it. It helps you keep track of the intent of your code, makes it more structured, and hopefully keeps the complexity down.
In the same way, by using BDD and integration testing, you can keep track of the goal of the system. What business value does your code add? Who will use it? What is the desired effect?
Even though they are testing techniques, they are also development tools. And an interesting part of introducing quality.
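To make the contrast concrete, here is a minimal sketch in Python (not the stack from these posts, and `apply_discount` is invented purely for illustration): a TDD-style test pins down the intent of one unit of code, while a BDD-style test reads as the business behaviour the system should deliver.

```python
# Hypothetical example: apply_discount is invented for illustration only.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return round(price * (1 - percent / 100), 2)

# TDD-style check: documents the intent of a single unit of code.
def test_ten_percent_off():
    assert apply_discount(100.0, 10) == 90.0

# BDD-style check: named after the business behaviour, not the implementation.
def test_loyal_customer_pays_discounted_total():
    # Given a loyal customer entitled to a 10% discount
    discount = 10
    # When they check out an item costing 100
    total = apply_discount(100.0, discount)
    # Then they pay 90
    assert total == 90.0

test_ten_percent_off()
test_loyal_customer_pays_discounted_total()
```

The unit test would break if the implementation changed, even for a valid reason; the behaviour test only breaks if the customer stops getting their discount.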
I wrote a while back about how I managed to add logging with log4net to my ProjectWhite tests in Visual Studio. However, after being forced to move my source code it suddenly didn’t work anymore, even though I followed my own amazing instructions. After trying a whole bunch of things, one of them worked.
For log4net to work, the log4net.config file needs to be copied into the TestResults folder created by Visual Studio’s test runner. To achieve this, open the “Test” menu in VS, select “Edit Test Run Configurations”, then “Deployment”, and add your log4net.config file there.
This made things work for me.
Note: I’m using the built-in test runner in Visual Studio to drive my tests right now, not NUnit.
Note 2: The first thing I did was to set it up as in my previous post. I haven’t tried whether just adding the file to deployment is enough; it might not be.
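For reference, a minimal standalone log4net.config looks something like the sketch below. This is a generic example, not the exact file from my setup; the appender name and log file name are my own choices.

```xml
<log4net>
  <!-- Write log messages to a file next to the test binaries -->
  <appender name="FileAppender" type="log4net.Appender.FileAppender">
    <file value="tests.log" />
    <appendToFile value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="FileAppender" />
  </root>
</log4net>
```

The point of the deployment step above is simply that this file ends up beside the test assemblies, where log4net can find it at runtime.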
James A. Whittaker tells us about the 7 plagues of Software Testing on the Google Testing Blog.
- The plague of Aimlessness – The lack of communication between testers means that we keep repeating each other, creating the same tests for the same things. An exchange of knowledge, of lore, is necessary.
- The plague of Repetitiveness – Tests need to be diverse, even the automated ones.
- The plague of Amnesia – The problem you are working on has probably been solved before.
- The plague of Boredom – When the tactical aspects of testing, such as writing test cases, are forgotten, testing becomes boring and uncreative.
- The plague of Homelessness – You need to live in a house for a while before you realize that the dishwasher should have been placed a bit more to the right, or that there is a leak somewhere. A tester cannot do this.
- The plague of Blindness – Software is intangible, so we must rely on our less concrete senses for any feedback about our effort.
- To be announced.
Once again I find myself faced with the giant that is Visual Studio. I’ve used it before, but in short bursts: long enough to get familiar with it, and then summer was over, a baby was born, or similar, and I went back to the Open Source world of the university. Each time it takes me a while to get back on track.
At this job, I’ve been asked to use White for some UI testing. White is a tool that allows you to find and reference UI components from code instead of recording a sequence of clicks. It’s designed for Windows applications and is open source.
Feeling like a noob, I hoped for a newbie guide, which I didn’t find. So here is mine.
Download and unzip White. The location doesn’t matter.
Open Visual Studio, create a new project with some sort of form you would like to test. Add a Test Project to that solution. In the project tree for the Test Project, right click References and select “Add reference”. There you can add references to the White dlls, the ones you need are “White.Core”, “White.NUnit” and “nunit.framework”.
That should be enough to get you started and to follow Ben Hall’s guide. Remember: if you find yourself with words underlined in red and a mouse-over complaint about a missing reference, right-click and select “Resolve”; the correct reference will be added to the “using” list.
There, this might help some other summer job tester out there.
The evidence against the number of bugs as a metric is massive. Not only does it tell us nothing about the remaining number of bugs in a system, or about the severity of the resolved bugs, but it also threatens the testers’ work.
If managers or the team are concerned about following the classical S-curve for bug finds during project development, testers will be concerned with fulfilling the model. The quest for an early peak might lead to superficial testing and a lack of incentive for finding the bugs that are more difficult to discover. As the project moves along, testers aren’t expected to find as many bugs anymore, and it is now OK for them to do other activities or work with less effort.
The model holds because work is done in a way that confirms it.
On the other hand, if the model is set aside, a tester who sees the number of bugs decreasing could take it as a sign that it is time to change methods. By applying other tools, other approaches, bringing in new people, focusing on different areas, and so on, the number of bugs found can go up again. A new decrease leads to another period of search for other methods.
I was invited to join a 1½-day conference with the company that is kind enough to let me do my master’s thesis with them.
There I had the pleasure to discuss testing with some bright people who have much more experience than me. One of them promoted Behavior Driven Development and the other talked warmly about writing code with no bugs. And when he says “no bugs” he means “no bugs”.
I’ve got some reading to do…
On my Software Testing and Metrics course we had a guest lecture this week. The lecturer was a tester at Nasdaq OMX, and he told us a little about how testing works in “real life”.
He started off by reminding us how the price tag on a defect rises dramatically with time, and spent a great deal of time on the fact that a lot of defects can be found in a matter of seconds or minutes.
A few techniques enabled this. The first he brought up was pair programming. By stimulating different sides of the brain in the two programmers, pair programming has (possibly) a positive effect on the correctness of the code, both for small mistakes such as uninitialized variables and for larger ones such as design flaws. Of course, pair programming isn’t a simple technique; it requires a lot of factors to click for it to be as efficient as it is on paper.
A great deal of time was spent discussing Test Driven Development. He talked about a study done at Microsoft and IBM where two teams were given the same task; one worked with TDD and the other didn’t. It turned out that the team using TDD worked 15–30% slower BUT, after a period of use, their code had 40–90% fewer defects. 90%! Amazing results.
Also, something as simple as keeping the developers and the testers in the same room helped uncover defects quickly. Making it easy for testers to discuss the code with the developers meant that interpretation problems could be discovered and solved, making both the software and the tests better.
And then he talked a lot about tools. How Continuous Integration could help you make sure that the code you checked in really worked and didn’t break anything else. How Static Analysis could give you warnings about possible defects. How code coverage helped you make sure that there wasn’t any untested or dead code.
He finished the lecture by giving us a demo of what a test case at OMX could look like. We looked at almost 2000 lines of test code for a single requirement… Using this, he stressed the importance of code conventions and test case traceability. It was imperative that each test case contained a reference to the requirement tested.
I asked some questions about the size of their company. They had about 8 developers and 10 testers and were trying different project processes without yet finding one that suited them. They made sure that testers and developers worked on the same features.
Exciting stuff for a wannabe-tester. 🙂