Maximizing customer coverage


So the sane approach to test coverage is to get the most return on the time you are able to invest. That means reducing the most risk, not achieving the most coverage.

The next question is, how?

Analyze it

I'm not against full coverage or high quality, but I am against setting unrealistic expectations of test coverage that lead to turmoil and turnover in the test department.

Start with the most basic of all questions: Who are your customers? That is, who will be using this system, and what will they be doing with it? Most likely your customer support area can tell you; if not, ask the sales and marketing department. If you have to, review the sales contracts to see what has been sold.

For example, an electronic commerce company discovered that most of its customers were financial institutions, and that the overwhelming majority of them (85%) ran mainframe platforms on OS/390 using the SNA protocol. Further research found that a single file format and encryption option (an industry standard) accounted for 90% of all file transfers.

This is a different kind of coverage. Instead of code or feature coverage, we'll call it "customer coverage."

Customer coverage

Customer coverage means discovering how your software is actually used and testing it that way. The easiest way to define this coverage is to build user or customer "profiles" that describe a particular customer configuration and its typical activities. Profiles might be organized around industries, geographies, or other attributes that shape who the customer is and how they use the software.
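
To make this concrete, a profile can start out as nothing more than a small structured record. A minimal sketch in Python--the field names and values are invented, echoing the e-commerce example above:

    from dataclasses import dataclass

    @dataclass
    class CustomerProfile:
        """One profile: a customer configuration plus typical activities."""
        name: str                  # segment label, e.g. an industry
        platform: str              # operating environment
        protocol: str              # network protocol in use
        activities: list[str]      # what these customers actually do
        customer_share: float      # fraction of the customer base covered

    # Hypothetical profile modeled on the financial-institution example.
    mainframe_fi = CustomerProfile(
        name="financial institution, mainframe",
        platform="OS/390",
        protocol="SNA",
        activities=["file transfer, standard format and encryption"],
        customer_share=0.85,
    )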

This has some interesting benefits. The first and most obvious is that it forms a natural basis for prioritization: you know what to do first and how to allocate your time and resources. It keeps you from wasting resources trying--and failing--to test everything in every possible way. Instead, you make sure the activities you know are critical are, in fact, thoroughly tested.
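
One simple way to act on that prioritization is to weight a fixed test budget by each profile's share of the customer base. A rough sketch, with invented profile names and numbers:

    # Hypothetical profiles and the fraction of customers each represents.
    profiles = {
        "mainframe financial institution": 0.85,
        "midrange/distributed": 0.10,
        "everything else": 0.05,
    }

    def allocate_test_hours(shares, total_hours):
        """Split a fixed test budget across profiles in proportion to
        the share of the customer base each one represents."""
        total = sum(shares.values())
        return {name: total_hours * share / total
                for name, share in shares.items()}

    print(allocate_test_hours(profiles, total_hours=200))
    # roughly 170 / 20 / 10 hours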

The second benefit is a little more subtle. Let's say you are running out of time to complete the test effort and it's critical to make the release date. If you have prioritized your test effort around customer profiles, you could do a "rolling release"--ship only to those customers whose profiles have been tested. That way, it's not all or nothing. If most of your customers fit into a particular profile, you can ship to most of them on time and only delay shipping to the minority of customers who fall outside the tested profile.
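
A rolling release implies a simple gate: ship only to customers whose profile has passed its test cycle. A minimal sketch, with invented customer and profile names:

    # Profiles whose test pass is complete so far (hypothetical).
    tested_profiles = {"mainframe financial institution"}

    customers = [
        ("First National", "mainframe financial institution"),
        ("Acme Trading", "midrange/distributed"),
    ]

    ship = [name for name, profile in customers if profile in tested_profiles]
    hold = [name for name, profile in customers if profile not in tested_profiles]

    print("ship on time:", ship)  # profile fully tested
    print("delay:", hold)         # wait until their profile is tested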

The third benefit is longer term but potentially very valuable. If you adopt this practice, it should lead to better information collection about your customers and how they use the software. Once you understand your users better, you can prioritize enhancement requests, new features, even bug fixes the same way--allocating development and test resources where they return the most in lower support costs and higher customer satisfaction.

This entire approach also gives you a framework for incorporating reported problems into your test plan. Instead of letting a "bug" multiply into new tests across all of your test plans, you determine which customer reported it and what profile they fit into--or whether you need a new profile--and add the test there.
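
In code terms, that amounts to keying regression tests by profile instead of copying them into every plan. A minimal sketch--the bug ID and profile name are invented:

    from collections import defaultdict

    # Hypothetical map from profile name to its regression tests.
    profile_tests = defaultdict(list)

    def add_regression(bug_id, reporting_profile):
        """File the regression test for a reported bug under the profile
        of the customer who reported it, not under every test plan."""
        profile_tests[reporting_profile].append(f"regression-{bug_id}")

    add_regression("4711", "mainframe financial institution")
    print(dict(profile_tests))
    # {'mainframe financial institution': ['regression-4711']}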

The goal is to continue to define and refine your test process so you know what to test and why, as well as what not to test and why. This is the only way to establish a basis for measuring, managing, and improving your test effort in a realistic--and achievable--way.

Linda Hayes is CEO of WorkSoft Inc. and was one of the founders of AutoTester.
