There’s a saying that “you can’t test quality into a product.” That may be true, but it is very difficult to assess the quality of a product without testing it. I find that one of the problems with quality is the way we talk about it. There are many categories of testing: regression, black-box, white-box, exploratory, automated, manual, and so on. Each of these categories serves a useful purpose, but what is the overall purpose of testing?

Some people argue that testing is how you verify that the product works. Others argue that the purpose of testing is to uncover what does not work. My view is that the purpose of testing is to show that the software works, where “works” includes correctly handling bad input, bad data, and diabolical usage. If you use the product in a way that was never intended and it crashes or gives no result at all, then you have found a way in which it does not work.
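To make that point concrete, here is a minimal sketch of what testing for “works” can look like when bad input is part of the definition. The `parse_port` function and its contract are hypothetical, invented purely for illustration; the point is that the tests assert a defined, graceful response to invalid input rather than only exercising the happy path.

```python
import pytest

# Hypothetical function under test: parses a TCP port number from user input.
# Both the function and its contract are invented for illustration.
def parse_port(text):
    """Return the port as an int, or raise ValueError for anything invalid."""
    try:
        port = int(text)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {text!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_valid_input_works():
    assert parse_port("8080") == 8080

@pytest.mark.parametrize("bad", ["", "abc", "-1", "70000", None])
def test_bad_input_also_works(bad):
    # "Works" here means a defined, graceful failure -- not a crash
    # and not a silent wrong answer.
    with pytest.raises(ValueError):
        parse_port(bad)
```

A product that raises a clear error on “70000” is working; one that crashes or quietly returns garbage is not, even though neither input was ever “intended.”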

Quality Assurance and testing are not the same thing. You can implement all of the QA you want: good process, code reviews, coding standards, coding for testability, and whatever else you can do to assure quality short of actual testing. But without testing, you have no idea how the product will actually perform. Suppose you produce a product with no test plan at all, or with a great test plan that you never execute. It is possible that the quality your customers experience will be exactly the same as if you had written and executed a test plan. The difference is that your knowledge of the product’s quality will come from your customers, after you have already shipped. The purpose of testing is to gain knowledge about the quality of the product before shipping it, so that you can make good decisions based on that knowledge.

So testing itself does not produce higher quality; it only produces knowledge of current quality. Only two things produce higher quality: changes to the process you use to create the product (not including testing), and the responses you take based on the knowledge gained from testing. Testing can influence quality, but it does not affect it directly.

Product quality is an ongoing effort. No matter how much testing you do, users will find ways to use your product that expose holes in your test plan. For any change, whether it is a bug fix or a new feature, you don’t really know its impact until customers start using it. That means the maturation of a change doesn’t really begin until the change ships. If you have a long release cycle, changes made early basically gather dust until you ship. You can’t really call them tested until you test the whole product as it will ship, so again, true maturation doesn’t start until you ship.

The moral of this story? Release as often as you reasonably can. That may be once a week, once a month, or once a year, but always strive to release often.