We've all been there. We receive a software drop from development that is ostensibly ready for testing... then we spend the next three weeks trying to install that software and run it. The reasons why we can't install, let alone run, the software run the gamut from missing files and rogue DLLs to database structure incompatibilities and out-and-out bugs. It's a battle that not only tries our patience but also burns up our test time: you know, the measly few weeks they give you for testing when you should have months.
We've all heard of the various types of testing: unit, integration, system, compatibility, regression, acceptance, and so on. It is not, however, always clear which of these belong in development and which belong in testing. Even more to the point, it is not clear which test determines whether the software is ready to graduate from development and enter the test organization.
That's what the Build Verification Test (BVT) is for. If you don't have one, get one. It will make the difference between doing your job and doing someone else's.
The BVT establishes that the software has sufficient integrity to be tested in the first place. That is, it confirms that the following are true: The software has all its required components, those components are properly linked together, they are accessible, and they are functional. The BVT should be designed to cover the system from beginning to end, but not in depth. When designing a BVT, think "inch deep, mile wide," and you get the idea.
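To make "inch deep, mile wide" concrete, here is a minimal sketch of such a suite in Python with pytest. The AppClient harness, its functional areas, the URL, and the credentials are all hypothetical stand-ins for your own system; the point is the shape of the suite, one shallow check per major area and no depth anywhere.

    # bvt_smoke.py -- one shallow check per major functional area, none of them deep.
    # AppClient, its modules, and the URL are hypothetical stand-ins for your system.
    import pytest
    from app_client import AppClient

    @pytest.fixture(scope="module")
    def app():
        client = AppClient(base_url="http://localhost:8080")  # can we reach it at all?
        client.login("bvt_user", "bvt_password")              # are the pieces linked?
        yield client
        client.logout()

    def test_orders_respond(app):
        assert app.orders.list() is not None      # orders area is present and accessible

    def test_billing_responds(app):
        assert app.billing.list() is not None     # billing area is present and accessible

    def test_reports_run(app):
        assert app.reports.run("daily_summary")   # one canned report actually functions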
The BVT is sometimes referred to as a "smoke test," a nickname that has its roots in the early days of hardware testing. Back then, testers applied electric power to a hardware device to see if it would smoke, spark, or burst into flames. With a BVT, testers are not looking for smoke but for fundamental problems that prevent the software from operating or from performing a critical task. In many organizations, the development team creates smoke tests to ensure that new code has not broken the daily or weekly build.
In my experience, though, the BVT should be something more than a test for dramatic failure. It should ensure that major functions are present and operating as expected. Just because the software doesn't blow up doesn't mean it is ready to go; sometimes the most dangerous errors occur quietly, without fanfare, such as a failure to make critical calculations or update the database. Moreover, the test organization should develop the BVT, not development, for the same reason you can't issue your own diploma: If you don't know what you don't know, you won't know it.
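The difference shows up plainly in code. A hedged illustration, assuming a hypothetical calc_interest function under test: the first check passes as long as nothing blows up, while the second catches the quiet failure where the calculation runs but returns the wrong number.

    import pytest
    from interest import calc_interest  # hypothetical module under test

    # Weak check: passes as long as nothing raises an exception.
    def test_interest_does_not_crash():
        calc_interest(principal=1000.00, rate=0.05, years=1)

    # Stronger BVT check: a calculation that quietly returns the wrong
    # number is exactly the failure a crash-only test misses.
    def test_interest_is_correct():
        result = calc_interest(principal=1000.00, rate=0.05, years=1)
        assert result == pytest.approx(1050.00)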
The test organization has to determine what constitutes an acceptable level of software quality to support the test process. This should not be an exhaustive test, but neither should it be cursory. The best approach is to select one test case out of each category or functional area, usually a positive case that will cause the program to create, retrieve, or update data. This focus on data will ensure that the database or other data sources are in sync with the software, and that the test has touched most, if not all, of the underlying tables and files. This process should also exercise middleware and other components, all of which have to work in harmony.
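Continuing the earlier sketch, one such positive case might round-trip a record through create, retrieve, and update, so the database, middleware, and application layers all have to cooperate for it to pass. The customers area and its field names are, again, hypothetical, and the test reuses the app fixture from the first example.

    # One positive case for the customers area: create, retrieve, update.
    # Passing means the schema, middleware, and application code are in sync.
    def test_customer_data_round_trip(app):
        new_id = app.customers.create(name="BVT Test Customer")
        record = app.customers.get(new_id)        # retrieve what we just created
        assert record.name == "BVT Test Customer"
        app.customers.update(new_id, name="BVT Customer (updated)")
        assert app.customers.get(new_id).name == "BVT Customer (updated)"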
In addition to making sure you have a test-worthy build, however, the BVT can serve another purpose, one that in some cases is even more critical. It is embarrassing but undeniable: so-called "hot fixes" are a fact of life in some companies. A hot fix is a correction for a defect so severe that it cannot wait for a regular release cycle, which usually also means there is no time to test it. In that case, the BVT may be all that stands between correcting and exacerbating an already bad situation.
When used for hot fixes, though, the BVT should be applied to an installed system, not a new installation.
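In practice, that can be as simple as pointing the same BVT suite at the live, installed environment rather than at a fresh build. A minimal sketch, assuming the pytest setup from the earlier examples; the BVT_TARGET_URL variable is a hypothetical convention of this sketch, not a standard.

    # conftest.py -- aim the same BVT at an installed system instead of a new build.
    # BVT_TARGET_URL is a hypothetical convention, not a pytest built-in.
    import os
    import pytest
    from app_client import AppClient  # hypothetical harness from the earlier sketch

    @pytest.fixture(scope="module")
    def app():
        # Default to a local build, but let a hot fix be verified in place.
        base_url = os.environ.get("BVT_TARGET_URL", "http://localhost:8080")
        client = AppClient(base_url=base_url)
        client.login("bvt_user", "bvt_password")
        yield client
        client.logout()

Running the suite as BVT_TARGET_URL=https://orders.example.com pytest bvt_smoke.py then exercises the system the hot fix actually landed on, rather than a pristine installation that may bear little resemblance to it.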