I always cringe when I hear about "bugs" in software. I know that the term refers to an early computer problem that was traced to a moth trapped between two electrical relays, but it still bugs me. This aversion is aggravated by test product and service advertisements that feature insects or adopt cute acronyms that spell out DDT or the like. As a software tester, it makes me feel uneasy, even embarrassed.

At first I thought it was just an inherent dislike of insects and the obvious association. However, I have since come to realize that there is another, more subtle but perhaps more important reason: Testing and debugging are not the same thing.
While this may sound like splitting hairs, consider a situation in which I could have bug-free software that utterly fails to meet any useful requirement. Imagine a software module that does absolutely nothing: it comprises only nonexecutable comments. It has no bugs! Is this a worthwhile accomplishment? Obviously this is an absurd case, but it illustrates an underlying point: The purpose of software development is not to produce bug-free code but to satisfy user requirements. We develop systems because we need them to do something for us, not because we want them to be bug-free. Granted, the presence of bugs may prevent the software from meeting our needs, but their absence does not guarantee that the software satisfies those needs.

It may not seem so, but this distinction does matter. Who cares about bugs? Many of us know testers who take exceptional pride in being able to ferret out truly obscure ones. For instance: "Did you know that if you keep deleting the first item in a product order, then add it back, then delete it again, over and over, after 32 times it will lock up the computer?" While the testers are dramatically denouncing this as a "showstopper" (after all, it brought down the system), the developers are just as emphatically dismissing it as a nonevent: "Who cares? No one would do that anyway." Testers end up believing that the developers are in denial, and developers become convinced that the testers are obsessive-compulsive.

Who is right? At the risk of being ostracized by my own profession, I have to take the developers' side on this one. Unless there is a business reason or foreseeable circumstance that would cause a user to add and delete the same item more than 32 times, this bug is not worth identifying, let alone resolving. Not that I don't think perfect software is a worthy goal, only that it is economically impractical to achieve.
The harsh reality is that, with few exceptions (nuclear and medical devices come to mind), we simply can't afford to produce perfect products. It may not even be possible, given the infinite variations of interactions among person, software, and computer, to rule out an adverse result from any particular set of circumstances. But even if it were possible, it would take so long and cost so much that no one would wait for the software, nor could anyone afford it when it was finally ready.

For this reason, I think testers should focus on the value the software brings to the customer, who is ultimately footing the bill. Let programmers worry about bugs; testers need to worry about requirements. This means focusing on the functionality that is most important and most likely to be used, not on the most unusual and bizarre set of circumstances imaginable. If customers, who are often represented internally by product managers, perceive that the test organization is assuring their success by verifying that the software will meet their needs, then they will be willing to invest the time and resources to support testing efforts. If, on the other hand, customers view testers as anal-retentive perfectionists who obsess over trivia and generally impede the timely enjoyment of their investment, then they will treat them accordingly.

So the next time you uncover a bug, ask yourself whether it is really an issue. Would the customer be willing to incur additional expense and delay their use of the product, or forgo it altogether, if this bug goes uncorrected? Or, knowing about it, would they be willing to live with it as long as it doesn't prevent them from getting their jobs done? Said another way, which is the more compelling status report from the test group: "We have two high-severity bugs," or "The product does not correctly calculate the invoice item price, and it does not remove shipped items from inventory"? Get the picture?
If we testers want the time, resources, and budget we need to succeed, then we have to position ourselves as an asset to the business, not a drain on development. This means we are not in the bug business at all; we're in the requirements business. So lose the insects, please.

Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at email@example.com.