Automatic software testing takes one of two forms: generating test cases from the requirements, or generating the requirements from the software itself. The first is possible, though not effortless. The second is not possible, and even if it were, it would be pointless.
What a great idea: a tool that can generate all the necessary test cases from the application's requirements. Note the word "necessary," as opposed to "possible." If you generate all possible test cases, you will end up with an unmanageable, effectively infinite number of combinations for even a straightforward application. Generating only the necessary test cases means selectively identifying the test cases that are unique; this is the only way to achieve complete coverage with manageable volumes. Caliber RBT, a testing system developed by Atlanta's Technology Builders Inc., can generate all the necessary test cases to assure complete coverage when given the application requirements. It employs a proven testing model known as cause-effect graphing, which has its roots in hardware testing.
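The distinction between "necessary" and "possible" can be sketched in a few lines of code. This is an illustrative toy, not Caliber RBT's actual algorithm: given a simple cause-effect model (each effect is a boolean function of the causes), it enumerates every possible combination of causes, then keeps only one representative per distinct effect outcome. The login-style causes and effects here are invented for the example.

```python
from itertools import product

# Hypothetical cause-effect model for a login screen: each effect is a
# boolean function of the causes. Illustrative only -- not the model or
# algorithm used by any real tool.
causes = ["valid_user", "valid_password", "account_locked"]

effects = {
    "login_ok": lambda c: c["valid_user"] and c["valid_password"]
                          and not c["account_locked"],
    "locked_msg": lambda c: c["valid_user"] and c["account_locked"],
}

def necessary_cases(causes, effects):
    """Enumerate all possible cause combinations, then keep one
    representative per distinct effect outcome -- the 'necessary'
    rather than the 'possible' test cases."""
    seen = {}
    for values in product([False, True], repeat=len(causes)):
        case = dict(zip(causes, values))
        outcome = tuple(fn(case) for fn in effects.values())
        seen.setdefault(outcome, case)  # first representative wins
    return list(seen.values())

cases = necessary_cases(causes, effects)
print(len(cases), "of", 2 ** len(causes), "combinations are necessary")
# -> 3 of 8 combinations are necessary
```

Even in this tiny model, three cases cover every distinct behavior the eight possible combinations can produce; with dozens of causes, the gap between "possible" and "necessary" is what keeps the volume manageable.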
For this approach to work, however, you need to do two things. First, you must commit to specifying your application's requirements with enough mathematical precision and internal integrity to support the testing model. Second, you must develop the test scenarios, or pathways through the application, that execute the test cases. Each scenario must include the steps needed to navigate through the application, supply the inputs, and verify the outputs that comprise the test case.
Although this is a worthy and achievable goal, it is not "automatic" software testing.
What an even better idea: self-testing software that creates its own requirements and then generates test cases accordingly. Such a tool would examine the software's source code, divine what it is doing, and then generate the test cases necessary to exercise every pathway and every input and output, including boundaries, ranges, types, and even random values. If this idea sounds too good to be true, that's because it probably is.
At the most basic level, requirements define what the application should do, not what it does. So, if you derive the requirements from the software itself, then you have a completely self-referencing model. Obviously, this means that if the software is missing features or has implemented them improperly, then any self-generated requirements will likewise be incomplete or incorrect. Said another way, if the students write the test, they will certainly pass it. But that doesn't mean they have mastered the subject.
Even if you dismiss these testing inconsistencies by saying all you want to do is make sure the software does what it's designed to do without causing errors, you then have a different problem. The vast majority of an application's functionality does not exist in the source code as static information. Given that most modern applications rely heavily on multiple interfaces, components, and objects, and may take advantage of concepts like inheritance (where objects are reused and also extended or modified), it is impossible to determine what the actual behavior will be until execution occurs. And, although there are ways of tracing through the source code during execution, to effect execution you have to drive the system. This means you must supply some form of input actions or data. So you are back to the original testing problem--you must know what test cases you want to execute before you can trace the code.
The most persistent advocates of this self-testing software theory try to work around this problem by firing random events or data at the system to cause the different states and pathways to occur. This is akin to the idea that enough monkeys, banging on enough typewriters, given enough time, would eventually produce Shakespeare's body of work.
While I am highly skeptical of whether self-testing software can ever be a reliable testing method, given the complex interrelationships that govern its behavior, let's assume for the sake of analysis that the method works. What does it tell you? For any given set of actions, you have a set of results. Are these results valid or invalid? How do you know? If they are truly random, you will have to trace every action and reaction to decide whether it was handled correctly. How do you know if it was handled correctly? Gee, sounds like you might have to decide on some application requirements.
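The oracle problem described above can be sketched as well. The `shipping_cost` function below is a hypothetical implementation under test, seeded with a deliberate bug: random "monkey" inputs exercise it without a single crash, and only an explicitly stated requirement (here, "cost is never negative") turns the log of results into a verdict.

```python
import random

# Hypothetical function under test, with a deliberate bug for
# zero- or negative-weight orders. Invented for illustration.
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        return -5.0  # bug: a negative cost, but no exception is raised
    return 4.0 + 1.5 * weight_kg

# "Monkey testing": fire random inputs and record what comes back.
random.seed(42)
results = [(w, shipping_cost(w))
           for w in (random.randint(-2, 10) for _ in range(20))]

# Every run "succeeded" -- no crash, no error. Only a requirement,
# stated by a human, lets us judge the results:
violations = [(w, c) for w, c in results if c < 0]
print(f"{len(results)} random runs, {len(violations)} violations "
      "of the 'cost is never negative' requirement")
```

Without that final check, the random runs prove only that the code does what the code does; deciding whether the results are valid still requires someone to have decided what valid means.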
It's the Requirements, Stupid
The bottom line is that software testing requires not just effort, but skill as well. While test automation tools can remove the drudgery of manually executing or designing tests, there is still a need to understand what should be tested.
Gathering and defining requirements remains the single most challenging aspect of software development and testing, and it can be neither automated nor avoided.
Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at firstname.lastname@example.org.