The most persistent advocates of this self-testing software theory try to work around this problem by firing random events or data at the system to exercise its different states and pathways. This is akin to the idea that enough monkeys, banging on enough typewriters, given enough time, would eventually produce Shakespeare's body of work.
While I am highly skeptical of whether self-testing software can ever be a reliable testing method, given the complex interrelationships that govern its behavior, let's assume for the sake of analysis that the method works. What does it tell you? For any given set of actions, you have a set of results. Are these results valid or invalid? How do you know? If they are truly random, you will have to trace every action and reaction to decide whether it was handled correctly. How do you know if it was handled correctly? Gee, sounds like you might have to decide on some application requirements.
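To make the problem concrete, here is a minimal sketch of this kind of random "monkey" testing. The function under test, `process_order`, is purely hypothetical, invented for illustration; the point is that without an oracle derived from the requirements, the harness can only detect crashes, and says nothing about whether any result was handled correctly.

```python
import random

def process_order(quantity, price):
    # Hypothetical system under test: computes an order total
    # with a 10% discount on bulk orders (over 100 units).
    total = quantity * price
    if quantity > 100:
        total *= 0.9
    return round(total, 2)

def monkey_test(runs=1000, seed=42):
    """Fire random inputs at the system, monkeys-at-typewriters style.

    Without requirements telling us what each total *should* be, the
    only failure we can detect automatically is a crash. Every other
    result -- right or wrong -- passes silently.
    """
    rng = random.Random(seed)
    crashes = 0
    for _ in range(runs):
        quantity = rng.randint(-10, 1000)   # random, including invalid values
        price = rng.uniform(-5.0, 500.0)    # negative prices slip through too
        try:
            process_order(quantity, price)
        except Exception:
            crashes += 1
    return crashes

print(monkey_test())
```

Note that the harness happily accepts negative quantities and prices; only the requirements could tell us those inputs should be rejected, which is exactly the gap random testing cannot close on its own.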
It's the Requirements, Stupid
The bottom line is that software testing requires not just effort, but skill as well. While test automation tools can remove the drudgery of manually executing or designing tests, there is still a need to understand what should be tested.
Gathering and defining requirements remains the single most challenging aspect of software development and testing. It is the determining factor, and it cannot be automated or avoided.
Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at email@example.com.
Additional resources

- Writing Programs to Test Programs
- The Test Automation Timetable: Altered States. Replacing manual testing with automation won't produce the test results you want--or expect. To keep the test automation train on track, you need to establish data states, set execution schedules, and stay within the borders.