Planning for full software test coverage usually sounds like preparing only one strategy for a championship chess match. The assumption that you will have the opportunity to completely cover all of the features and functions--both new and old--of the next software release sounds like planning to win a game without considering your opponent's moves. It's wishful, if not downright naïve, thinking.
So why plan full-coverage software testing? For the simple but compelling reason that you can't assess a risk you haven't identified.
Setting up the board
A large West Coast bank was developing a new version of the system that supports its loan officers in more than 600 branch banks. This system calculates effective interest rates, interest payments, and maturity dates, as well as produces the host of documents and disclosures required to initiate loans and collateralize them.
Because loans are the lifeblood of banking, this system is crucial, and a senior manager was appointed to oversee the test effort. With extensive lending experience as well as development savvy, she was in a position to understand the inherent risks in both activities.
What she wasn't familiar with, however, was software testing, and so she proceeded in blissful ignorance to plan for 100% test coverage of all types of loans, collateral, interest levels, and payment types. She even brought her technical background to the fore by acquiring a test automation tool and developing a complete repository of test cases that exercised the entire system, as she understood it. When the software arrived, she was ready.
The first build had all of the warts to be expected: the installation process left out some critical files, the configuration was incompatible with the platform, and there were protection faults from time to time. But subsequent builds made steady progress and eventually the software was stable enough to withstand the execution of the complete test suite, revealing more subtle problems.
Because the bulk of the tests were automated, it was possible to establish a very crisp schedule: the complete test process required approximately 76 hours to run if the software was properly installed and configured, and if there were no hard crashes. Reviewing the printed output--the actual loan documents--took another two days of manual effort.
The field test uncovered yet more issues, as was to be expected, and these were corrected as found. Weekly builds and test iterations kept the 40 branches busy, but their backup processes protected them from any undue consequences. That is, until the final week before the scheduled go-live date.
During this week the system was to produce calculations and reports for the month-end posting of accrued interest from the new loan system. The reports and their calculations were erroneous, but the cause was not immediately clear. As the deadline approached, the development team became more and more frantic to uncover the problem and get it fixed. Finally, late that week, the culprit was found and fixed. It was time to test and then to ship.
All of the usual suggestions were trotted out. Add more people! No need, she said, it's mostly automated. Work overtime! The 76 hours are around the clock already, she pointed out. Automate more! The documents must be personally verified because the scanned images were too sensitive to slight variations, she explained.
And so it went, until the final, inevitable option made its appearance: test less.
Ah, she said, no problem at all. If making the date is more important than making sure the system is ready, we can certainly do that. Here's how: we will leave off testing all car loans, since they are the most complex and require the most time. That will shave off the extra time and we can make the date, just barely.
The reaction was swift: What? You can't skip car loans--they are a huge part of our loan volume, they need special documentation, the collateral has to be properly secured, the title restricted, and so on and so on. It's just too risky!
Hmm, she said. Then we'll have to forget about second mortgages. That's the only other class of loan that has enough time in the test schedule to make the difference we need.
The test manager got the same reaction: Are you crazy? Mortgages represent huge amounts of money, even more documents, and besides, we're marketing them like mad right now. You can't possibly gamble with those!
And on it went. By the time the dust had settled, guess what? The go-live date was delayed.
So what made the difference? Simple: it was an assessment of risk, and the lesson is deceptively straightforward. If you don't plan to test the entire system, you don't know what you are leaving out; if you don't know what you are leaving out, you don't know what's at risk. And if you don't know what's at risk, you can't weigh it against the perennial risk of missing the schedule.
For the bank, the risk of delaying the roll-out of its new loan processing system was weighed against the risk of failed system functionality. The result was that a fully tested--and consequently a fully operational--system was more important than a brief delay in deployment.
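The bank's trade-off can be sketched as a simple expected-loss comparison. This is a hypothetical illustration only: the area names, probabilities, and dollar figures below are invented for the sake of the sketch, not taken from the bank's actual assessment.

```python
# Hypothetical risk assessment: weigh the expected loss of skipping
# each test area against the estimated cost of slipping the go-live date.
# All figures are invented for illustration.

test_areas = {
    # area: (hours saved if skipped, est. failure probability, est. loss if it fails)
    "car loans":        (20, 0.10, 5_000_000),
    "second mortgages": (18, 0.08, 8_000_000),
    "personal loans":   (10, 0.05, 5_000_000),
}

COST_OF_DELAY = 200_000  # est. cost of delaying the roll-out

for area, (hours_saved, p_fail, loss) in test_areas.items():
    expected_loss = p_fail * loss
    verdict = "skip" if expected_loss < COST_OF_DELAY else "keep testing"
    print(f"{area}: saves {hours_saved}h, "
          f"expected loss ${expected_loss:,.0f} -> {verdict}")
```

With numbers like these, every candidate for cutting carries an expected loss larger than the cost of the delay, which is exactly the conclusion the bank reached: delay the date rather than gamble on untested loan types. The point is not the arithmetic but that full-coverage planning makes the comparison possible at all.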
In most cases, the test manager would have insisted that they could not test the entire system, and management would have tut-tutted but proceeded to cut testing in order to meet the deadline. So when critical functionality went awry, the test manager would have been denounced for allowing something so important to go by the wayside.
The key in the bank's case was that the test manager knew enough about the business to put the risk assessment in terms that management would relate to: money. If loans are processed incorrectly, business is lost.
The bottom line? Full-coverage software testing is possible; you just have to know which moves to make. Test cases may have no meaning to a bank, but loans do.