Stressed out from stress testing: Page 2

Posted October 26, 1999
By Rich Levin


(Page 2 of 5)


Critical mission, critical testing

Lutz's team decided the shift from mission-critical vendor to mission-critical ISV called for mission-critical testing. It was a prescient call: had the company not applied automated stress-testing technologies, the system would have collapsed on its first day online.

"The application just wouldn't work under load," Lutz recounts. "It ran great with a small team of QA testers exercising it, but when we applied the load-testing software, it wouldn't work."

Using the LoadRunner load-testing software from Mercury Interactive Corp. of Sunnyvale, Calif., Lutz's team was able to simulate 10,000 concurrent users banging away on the app. The problem was traced to a bug in the ColdFusion app server from Allaire Corp. of Cambridge, Mass.

After Allaire issued a patch, the system was again subjected to a round of load testing. This time, it passed. As development progressed, Lutz's team brought more customer data online, eventually exposing Avis' entire 600-gigabyte data warehouse to the Web.

Each step of the way, the team repeatedly threw app modules into a load-testing pressure cooker. Lutz says that, at the time, without Mercury's LoadRunner product, this extreme level of load testing would have been beyond the realm of possibility.

That's because most load-testing products required pools of PCs. The server-based LoadRunner required only one central server.

"Typically with client/server or Web stress testing, you have to drive it from multiple PCs," Lutz explains. "I would have had to commandeer entire buildings of PCs. It would have been impossible to test the kinds of numbers we're talking about."
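The trick Lutz describes, many virtual users driven concurrently from a single machine rather than from rooms full of PCs, is the core idea behind tools like LoadRunner. A minimal sketch of the concept in Python (illustrative only, not how LoadRunner works internally; `simulated_request` is a hypothetical stand-in for a real HTTP call, and a real tool would also record response times and ramp users up gradually):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(user_id):
    """Stand-in for a real HTTP request to the app under test."""
    time.sleep(0.001)  # pretend the server took ~1ms to respond
    return 200         # pretend the server returned HTTP 200 OK

def run_load_test(num_users, requests_per_user):
    """Drive num_users concurrent 'virtual users', each issuing
    requests_per_user requests, and collect every response code."""
    results = []
    lock = threading.Lock()  # guard the shared results list

    def virtual_user(uid):
        for _ in range(requests_per_user):
            status = simulated_request(uid)
            with lock:
                results.append(status)

    # One worker thread per virtual user, all hammering the app at once.
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        list(pool.map(virtual_user, range(num_users)))
    return results

if __name__ == "__main__":
    results = run_load_test(num_users=50, requests_per_user=4)
    print(f"{len(results)} requests, all OK: {all(s == 200 for s in results)}")
```

Scaling this toy to the 10,000 simulated users Lutz cites would require an event-driven design rather than one OS thread per user, which is precisely the kind of engineering the commercial tools package up.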

In terms of predictability, the testing results have been "right on," Lutz says. The Avis site zoomed to an average of 90,000 hits per day soon after it was deployed, with a one-day peak of 141,000.

"We've had no problems due to load," Lutz says. Now, as the site gets set to scale again with a new development phase--opening the app up to more online customers, i.e., "scaling up"--Avis' PHH group has purchased an additional LoadRunner license to enable scalability testing beyond 30,000 simulated users.

"No architecture can scale infinitely," says Lutz. "Every time you add or change something, you have to test. We want to take it up further, and stay ahead of the curve as far as numbers of users. Load testing lets us do that."

On the razor's edge

Staying ahead of the curve means relentlessly updating, upgrading, revving, improving, adding features, embracing trends, and innovating, all within the software development process and under extreme market pressure.

It's the hallmark of any successful ISV that needs to ship shrink-wrapped product to a fickle enterprise marketplace--the aggressive release mentality one would expect to find at Microsoft, Sun, or Red Hat Inc.

Not the kind of thinking normally associated with IT organizations, which historically deal with internally driven business requirements that take months to refine, development cycles that are measured in years, and application systems that are built to span decades.

Examine the cycle time of any e-business shop, though, and you'll find development lifecycles pegged to monthly, weekly, daily, even hourly release builds.

"IT used to operate under the notion that you write perfect requirements, and everything flows from there," says Sam Guckenheimer, the Lexington, Mass.-based senior director of automated testing products for Rational Software Corp., in Cupertino, Calif. "But the Web is iterative. The requirements change daily. We've seen some dot-com organizations with six-hour release cycles."

E-business developers agree, and add that compressed application lifecycles aren't the only challenge in maintaining high-quality customer-facing e-apps. The architecture of Web-based systems is inherently complex, heterogeneous, and fragile.

"We might do five releases in three months," says William Flow, software quality manager at Frontier Corp., in Rochester, N.Y. "And it's not just the Web client we're revving. We have to test all the stuff--integration, databases, other Web apps we hit, e-mail systems--it can be hundreds of things. It's become impossible to do manually."

Frontier personifies the diverse community of platforms many enterprise organizations struggle to integrate as they fight their way to the Web. The company's architecture is a mix of mainframes, with Solaris UNIX and Linux IP servers running on Intel, SPARC, and UltraSPARC machines.

