Quality Quest: Mission Impossible – Meeting Software Testing Objectives

Linda G. Hayes

I was rendered speechless when a fellow professional said, in all seriousness, that she was going to discard the majority of her regression tests because they had failed to find errors. After I recovered my composure – and my voice – I asked why she was considering such a thing, to which she confidently replied, “Well, so-and-so says tests that don’t find problems aren’t worthwhile.”

As it happens, this crazy claim turns out to be based on the earliest and most commonly quoted definition of software testing. Published in Glenford Myers’ 1979 book, The Art of Software Testing, the definition states: “The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product.”

Based on this definition, I can see where my colleague and her informant got the idea that tests that find no errors have no value. I can also see why software testers might rival dentists for the highest depression and suicide rates of any profession.

Proving a Negative

Simply finding errors is an unacceptable purpose for software testing. The approach requires software testers to prove a negative: that there are no more errors to find. To demonstrate this, they would have to know how many errors there were to begin with and where those errors are. If we knew that, we would not need to test; we would just fix the errors.

Furthermore, if you don’t know how many errors exist, how do you know when you are finished testing? How can you measure your tests’ effectiveness? And does this mean that, as the development process improves and fewer errors are introduced, your measured effectiveness as a tester declines as well?

Proving the Pointless

Another reason this “no errors, no value” definition is dangerous is that it lends credence to the idea that all software errors are created equal. It presumes that finding an error, regardless of what or where it is, is valuable. This belief leads testers to invest valuable time and resources creating obscure, random, and meaningless situations in the hopes of catching the programmer off guard. All the while, the testers are eschewing the most basic and obvious tests, assuming they will work. But what if they don’t?

Ironically, the true meaning of the term regression testing is to look for software functionality that used to work but no longer does; i.e., the software has regressed. But, based on Myers’ definition, there is no point in running a test that has found no errors, so once a software function works it is immune from further testing. Yet existing functionality that stops working after a change poses the greatest risk, since it is already in use. New functionality that doesn’t work may be irritating, but it is probably not devastating.
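
To make the point concrete, here is a minimal sketch of a regression test in Python with pytest (the discount_total function and its pricing rule are hypothetical, invented for illustration). These tests find no errors today, yet they are precisely what will catch a regression the day a change breaks behavior customers already rely on:

    # regression_sketch.py -- a hedged illustration, not production code.
    # Run with: pytest regression_sketch.py
    import pytest

    # A hypothetical order-pricing function that has worked for years.
    def discount_total(subtotal: float, is_member: bool) -> float:
        """Apply the long-standing 10% member discount."""
        return subtotal * 0.9 if is_member else subtotal

    # Under Myers' definition these passing tests look worthless; under
    # the regression view they are the safety net for working features.
    def test_member_discount_still_applies():
        assert discount_total(100.0, is_member=True) == pytest.approx(90.0)

    def test_non_members_pay_full_price():
        assert discount_total(100.0, is_member=False) == pytest.approx(100.0)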

Proving Progress

To give credit where credit is due, more recent authors have improved upon the “no errors, no value” testing definition. In Software Test Automation (1999), Mark Fewster and Dorothy Graham write that the purpose of software testing is “to give increased confidence in those areas of the product that work and to document issues with those areas of the product that do not work.” Notice this terminology introduces the value of establishing what does work as well as what doesn’t.

Similarly, the most recent glossary of standards from the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST) defines testing as “the process of exercising software to verify that it satisfies specified requirements and to detect errors.” Ah, now we’re getting somewhere. The concept of requirements – you know, the reason we developed the software in the first place – is finally becoming part of the definition.
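
As a sketch of what that requirements-driven view looks like in practice (the REQ-042 password rule and the function below are hypothetical), a single test can both verify a specified requirement and probe the boundary where errors tend to hide:

    # requirement_sketch.py -- a hedged illustration, not production code.
    # Hypothetical requirement REQ-042: passwords must be >= 8 characters.
    def meets_password_policy(password: str) -> bool:
        return len(password) >= 8

    def test_req_042_minimum_password_length():
        # Verify the specified requirement (a pass builds confidence)...
        assert meets_password_policy("12345678")      # exactly the minimum
        # ...and try to detect errors at the boundary, per the definition.
        assert not meets_password_policy("1234567")   # one character short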

I wonder how significant it is that Mr. Fewster and Ms. Graham both hail from the United Kingdom, as, of course, does the British Computer Society. Perhaps we can persuade them to colonize the software testing industry here in the United States?

While it may seem academic to obsess over how software testing is defined, the impact is highly practical. Well-meaning experts – those who espouse definitions that lead testers to discard tests that pass – are setting the testers (and their companies) up for failure. If software isn’t proven to do the basics, who cares whether it fails to do the obscure?

Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at linda@worksoft.com.
