In last month’s column, I railed against writing programs to test programs in an attempt to cope with an unpredictable automated test environment. If you proceed along these lines, the only thing that lies ahead is madness.
The first step to successful test automation, as I explained, is to get a grip on your test environment. It must be stable and predictable: you must know the state of the data before you can predict the outcome of a test and therefore automate it. If you have tried to do this, you know it is far easier said than done, but it is nevertheless indispensable to an automated test process. If you can’t accomplish this step, then manual testing will be faster and easier in the long run.
Now let’s look at the second step: planning for the fact that the tests themselves will affect the state of the data. And it’s not just what the tests will do to the data, but also the order in which they will do it.
The test automation track
If you have a fresh database in a known state, as soon as you start executing tests, the data will change. You can’t predict the state of the data at any point after the beginning unless you know which tests have run, and in what order. For example, if you need to add, update, and delete an account, then you must run the tests in exactly that order. If you run the same three tests in any other order, they will fail.
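To see the ordering problem in miniature, consider this Python sketch; the AccountStore class is a hypothetical stand-in for a real database layer, not anyone’s actual test harness. The three tests pass only in add-update-delete order, because each one consumes the data state the previous one leaves behind.

# Hypothetical in-memory stand-in for the account database.
class AccountStore:
    def __init__(self):
        self.accounts = {}

    def add(self, acct_id, balance):
        self.accounts[acct_id] = balance

    def update(self, acct_id, balance):
        if acct_id not in self.accounts:
            raise KeyError(acct_id)       # can't update what was never added
        self.accounts[acct_id] = balance

    def delete(self, acct_id):
        del self.accounts[acct_id]        # KeyError if already deleted

store = AccountStore()

def test_add_account():
    store.add("A-100", 500)
    assert store.accounts["A-100"] == 500

def test_update_account():
    store.update("A-100", 750)    # depends on test_add_account having run
    assert store.accounts["A-100"] == 750

def test_delete_account():
    store.delete("A-100")         # depends on the account still existing
    assert "A-100" not in store.accounts

# In exactly this order, all three pass; any other order fails with KeyError.
test_add_account()
test_update_account()
test_delete_account()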
The difficulty expands exponentially with the number of people who contribute to the test library. Just because the test team has exclusive control of the test environment doesn’t automatically mean the environment is under control. Members of the same test team can trip each other up: each and every test that affects the state of the data can impact each and every other test.
Think of the test process like a train schedule: The timetable spells out exactly when the train will leave and when it will arrive at each stop, so passengers can make their travel plans accordingly and trains can avoid collisions. Imagine if the train left whenever the passengers wanted it to, stopped for new passengers whenever more wanted to board, and went wherever anyone wanted. It would be chaos. The same goes for automated tests. If each tester writes his or her own tests, with a self-determined data state and execution time, then the probability is high that the tests will collide in the database.
The only way to avoid disaster is to define what the timetable of tests will be and how the data will be affected as a result. Individual tests can be designed to fit into the overall schedule and rely on a known data state at any point.
Crossing state lines
The easiest way to define your test timetable is to model it, as above, on a train schedule. Think of your test data as existing in multiple states, each a point along the rail line, and your execution schedule as the times of travel between those points. The first execution pass starts with the initial data state; when it completes, the second state is reached. Then the next execution pass occurs, and the third state is reached. And so on.
Depending on your application, this type of schedule may lend itself easily to calendar equivalents. For example, the initial data state would be Day 0, the first data state Day 1, the second Day 2, and so on. If you have regular processing cycles each week, month, or quarter, then these would also be states. The year-end state could be the terminus of the schedule.
In this context, a trading system might have all of its customers, stocks, prices, and other necessary supporting data in place on Day 0. On Day 1, certain trades occur; on Day 2 those trades are confirmed; on Day 3 they are settled; and on Day 4 the statements are printed. This schedule tells a tester who needs to verify account activity exactly what transactions to expect on any given day.
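One way to make such a timetable explicit is a simple day-to-state table that every test can consult. Here is a minimal Python sketch; the state names are illustrative, not taken from any particular system.

# The trading-system timetable as an explicit schedule.
SCHEDULE = {
    0: "reference data loaded",   # customers, stocks, prices in place
    1: "trades entered",
    2: "trades confirmed",
    3: "trades settled",
    4: "statements printed",
}

def expected_state(day):
    """The data state every test may rely on at the end of a given day."""
    return SCHEDULE[day]

# A tester verifying account activity on Day 3 knows to expect
# settled trades, and nothing later, on the account.
assert expected_state(3) == "trades settled"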
Other applications may need more definition. Let’s say a bank-processing system test must verify that a check received for deposit at 2 P.M. is posted by midnight on Day 1; another test might verify that a check received at 4 P.M. is not posted until midnight on Day 2.
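A cutoff rule like that is easy to pin down as an executable expectation. Here is a minimal sketch, assuming a hypothetical 3 P.M. cutoff sitting between the two examples; the real cutoff would come from the application’s business rules.

from datetime import time

CUTOFF = time(15, 0)   # hypothetical 3 P.M. cutoff implied by the example

def posting_day(received_at):
    """Day on whose midnight run a deposit posts, relative to receipt."""
    return 1 if received_at <= CUTOFF else 2

assert posting_day(time(14, 0)) == 1   # 2 P.M. check posts Day 1 at midnight
assert posting_day(time(16, 0)) == 2   # 4 P.M. check waits for Day 2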
The point is to establish a set of data states and an execution schedule that tests must follow. Like train passengers, they have to board at a defined location and depart at a specific time.
Protecting state borders
This scheduling approach leads to another advantage: Once your data states are known, you can archive not just the initial state but also intermediate ones, allowing you to restore your data to any desired point. This is especially useful for working around tests that are blocked by failures or defects.
For example, let’s say that on Day 3 you expect a set of trades to be settled and posted to the customer accounts so you can print statements on Day 4. Unfortunately, due to problems with the pricing tables, the trades are posted with incorrect amounts. Because this will affect all downstream tests that depend on these values, you have a dilemma: Do you wait for the problem to be fixed and then start over at Day 0? If so, any problems with the statements won’t surface until much later. On the other hand, if you don’t start over, you will have a “domino” failure effect as earlier problems cause later tests to fail.
The ideal solution, if you have archived the expected Day 3 data state, is to simply restore it and proceed with Day 4. That way you can proceed with your test schedule while the earlier problems are being resolved.
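The mechanics can be as simple as saving a copy of the data at each state boundary and reloading it on demand. A minimal sketch follows, using an in-memory copy where a real test environment would use a database backup or export.

import copy

snapshots = {}   # day number -> archived data state

def archive(day, data):
    """Save the expected data state at the end of a day."""
    snapshots[day] = copy.deepcopy(data)

def restore(day):
    """Return the environment to a known state so later tests can proceed."""
    return copy.deepcopy(snapshots[day])

accounts = {"A-100": {"settled_trades": 12}}
archive(3, accounts)                        # expected Day 3 state, saved earlier

accounts["A-100"]["settled_trades"] = -5    # bad pricing corrupts the Day 3 run

accounts = restore(3)                       # skip the rerun; go straight to Day 4
assert accounts["A-100"]["settled_trades"] == 12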
The benefit of establishing data states and maintaining control of the execution schedule is this: You gain the level of predictability that is essential for successful test automation. Manual testing can be free of structure and control. In contrast, if you want to take advantage of the speed and economics of mass data transportation, and the productivity that test automation offers, then you have to play by the rules.
Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at linda@worksoft.com.