The pain of platform possibilities

While component-based architectures allow software developers to create applications that support many different databases, servers, and operating environments, they create a quality quagmire of nightmarish proportions for software testers.

The reason? It may take the same effort to develop an application for any ODBC-compliant database as it does for just one, but it takes a geometric multiple of that effort to test it, because each and every database–in each and every potential platform configuration–must be tested. Different databases may have different reserved keywords, different sub- or supersets of ODBC support, or different constraints in different environments. Thus, every combination of these elements must be tested together to truly assure quality.
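
To make this concrete, consider reserved keywords alone. Below is a minimal sketch, in Python, of the kind of dialect check a cross-database application ends up needing; the keyword sets are illustrative assumptions, not complete lists for any vendor.

    # Hypothetical, abbreviated reserved-word lists per database.
    RESERVED_WORDS = {
        "oracle":    {"user", "level", "comment"},
        "sqlserver": {"user", "key", "identity"},
        "db2":       {"user", "comment", "label"},
        "sybase":    {"user", "key", "identity"},
    }

    def unsafe_identifiers(column_names, databases):
        """Return column names that collide with a reserved word on at
        least one target database, mapped to the offending databases."""
        problems = {}
        for name in column_names:
            hits = [db for db in databases if name.lower() in RESERVED_WORDS[db]]
            if hits:
                problems[name] = hits
        return problems

    # A schema that is legal on one back end can fail on another:
    print(unsafe_identifiers(["user", "amount", "level"],
                             ["oracle", "sqlserver", "db2", "sybase"]))

And a static check like this catches only one class of difference; behavioral differences still demand testing on the real combination.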

Do the math. If your application supports four different databases on six different hardware platforms under three different operating systems, you are looking at testing the same application 72 times! Throw in other variations, like middleware or network protocols, and you are in the stratosphere for test time.
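
The arithmetic is easy to script, which makes it a handy exhibit for planning meetings. A minimal sketch in Python, with placeholder platform names:

    from itertools import product

    databases = ["db_a", "db_b", "db_c", "db_d"]                  # 4 databases
    hardware  = ["hw_1", "hw_2", "hw_3", "hw_4", "hw_5", "hw_6"]  # 6 platforms
    systems   = ["os_x", "os_y", "os_z"]                          # 3 operating systems

    configs = list(product(databases, hardware, systems))
    print(len(configs))   # 72 -- one full test pass per combination

    # Every new axis multiplies, never adds. Two network protocols:
    protocols = ["proto_1", "proto_2"]
    print(len(configs) * len(protocols))   # 144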

Under such circumstances, any competent, thorough software tester who takes pride in shipping a quality product is doomed to be frustrated no matter how hard he tries. Not only is it impossible to test every single configuration possibility with the time and resources available, but I have also worked with several companies where the test group doesn’t even have access to all of the supposedly supported platforms. As a result, customers uncover critical issues in the field, which is the most expensive place to fix them.

The odds are against reining in marketing or sales by limiting the platforms, since that’s where the money is. What to do?

Define your terms

To defend its borders, the test group must define them. This means the company must clearly state, agree on, and communicate to all concerned, both internally and to customers, which configurations are, in fact, tested and which are not. This frees the test group from spending all of its time explaining why–out of the dozens of configurations it did test–it did not test the exact one the customer is screaming about.

I recommend organizing around the concept of “certified” versus “supported” configurations. A “certified” configuration is one that is actually tested, while a “supported” configuration is one the company has not tested but agrees to take responsibility for fixing if it fails. This distinction is important for three key reasons: it defines the platforms within the scope of the test effort; it identifies the potential risk of those outside that scope; and it enables a mitigation strategy for those risks.
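
One lightweight way to make the distinction operational is a registry that classifies any requested configuration. A hedged sketch in Python follows; the configuration tuples are made up for illustration.

    # Hypothetical registry of (database, hardware, operating system) tuples.
    CERTIFIED = {
        ("oracle", "sun_sparc", "solaris"),
        ("sqlserver", "intel", "windows_nt"),
    }
    SUPPORTED = {
        ("db2", "rs6000", "aix"),
    }

    def classify(config):
        """Certified: actually tested. Supported: responsibility accepted,
        but not tested. Anything else is explicitly out of scope."""
        if config in CERTIFIED:
            return "certified"
        if config in SUPPORTED:
            return "supported"
        return "unsupported"

    print(classify(("oracle", "sun_sparc", "solaris")))  # certified
    print(classify(("db2", "rs6000", "aix")))            # supported
    print(classify(("informix", "hp9000", "hpux")))      # unsupported

Publishing a list like this, in whatever form, is what lets sales, support, and customers all argue from the same facts.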

Certified configurations

The beauty of precisely defining which configurations the test group will actually test, or certify, is that it reveals to the rest of the organization the cold realities of what testing is up against. For example, I was reviewing the certified configuration list with the sales VP of a financial services software company when he was shocked to discover that the test group was not going to test a particular customer’s server platform. “You mean to tell me you aren’t even going to test the server platform that is used by the customer who signed our largest deal last quarter?” he bellowed. “Why the —- not?”

I smiled. “Because our purchase request for a test server was denied.” He looked astounded, then promised to get us access to one, somehow.

“Great,” I said. I was now on a roll: “But there’s one more thing. I either need two more people, or two more weeks in every test cycle to cover this additional platform.”

Having just been the victor in a bloody battle to get development to agree to support this very server so he could book the sale, he was furious. “Why didn’t someone tell me this before?” Again, I smiled. “No one asked the test group.”

I am confident this scene plays out every day in many corporate enterprises. Adding support for a new platform is not just a development issue; it’s a testing and support issue. In fact, you can compare the development effort to having a child: it may take nine months to develop the product, but testing and support have to raise it for the next 18 or more years as it goes through every patch, release, and version of its life.

Supported configurations

Once you have identified the certified configurations, anything else marketing wants to sell becomes “supported.” This means that although the company doesn’t represent that it has, in fact, tested that precise configuration, it agrees to accept responsibility for resolving any issues that arise.

At first this may sound like a PR problem, but in reality it’s a plus. If the customer has a problem with a supported environment, it doesn’t automatically raise the question of whether the company tests anything at all. Without this distinction–and we’ve all heard it before–when a customer uncovers a major problem with an obscure combination, he immediately freaks out and questions the competency of the test organization altogether. For most battle-weary testers, this can be a morale killer.

With supported environments, at least there’s notice up front about what the company is committing to test versus what it’s willing to take responsibility for. As a result, the customer realizes the risk it’s assuming in adopting this configuration.

Mitigating risk

The ultimate benefit of clarifying what the company will test and what it won’t is that it gives everyone a chance to mitigate risk. If company officials are concerned a key account is at risk because its configuration is not certified, they can mitigate that risk in one of two ways: One, they can invest in additional time and resources to cover that configuration; or, two, they can include the account in the beta program.

In fact, customers with supported configurations would be ideal candidates for early release. This would allow the test organization to extend its coverage without bloating staff or costs, and it would allow customers whose configurations are at risk to participate in the test process.
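
Continuing the registry sketch from earlier, flagging those beta candidates is a one-line filter over the customer base; the customer data here is, again, invented.

    CERTIFIED = {("oracle", "sun_sparc", "solaris")}
    SUPPORTED = {("db2", "rs6000", "aix")}

    customers = {
        "BigBank":  ("oracle", "sun_sparc", "solaris"),  # certified
        "AcmeCorp": ("db2", "rs6000", "aix"),            # supported only
    }

    beta_candidates = [name for name, cfg in customers.items()
                       if cfg in SUPPORTED and cfg not in CERTIFIED]
    print(beta_candidates)   # ['AcmeCorp'] -- ideal early-release participants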

The point is, it’s wrong to simply assume that test organizations can fully test every platform and every possible configuration the software can potentially run on. Both the company and its customers should be fully informed–and the test group fully equipped–to deal with the risks posed by the many potential platform possibilities. That way, testers have a hope of meeting quality expectations, because they will be both clearly defined and, more importantly, actually achievable in the real world.

Linda Hayes is CEO of WorkSoft Inc. and was one of the founders of AutoTester. She can be reached at linda@worksoft.com.

