How much has the Web changed the face of enterprise application development? Let us count the ways. Start with HTML, pervasive clients, fat servers, distributed architectures, and application servers.
Move to object messaging, server-side components, decoupled frameworks, and enterprise information portals. And let’s not forget the year’s big buzzes: eXtensible Markup Language (XML), enterprise application integration (EAI), electronic data interchange (EDI), and global supply chain optimization.
And that’s just scratching the surface. Call it what you will: e-business, e-commerce, e-service, e-tailing, e-tc.–from IT’s perspective, it’s all e-ssential. Indeed, to say the Web has redefined IT, and done so overnight, would be no overstatement.
Yet for all its impact, the Web’s most fundamental change to the fabric and essence of enterprise development remains largely unnoticed. That change is the transformation of IT from a plodding corporate function to a fast-running independent software vendor–an ISV.
“The moment you put a dynamic application on the Net, you’ve shifted from an IT organization that builds solutions for internal consumption, to a company that develops software for external consumers,” says Larry Freed, director of the e-commerce practice at Compuware Corp., in Farmington Hills, Mich.
This shift also requires adjusting application development priorities. Technologies many IT organizations have long ignored, such as automated graphical user interface (GUI), function, and load-testing tools, suddenly acquire strategic importance.
The reason: When an application system is pointed at the Web, the world is your enterprise stage. Bugs, slow performance, or a collapse under heavy Web loads can mean the difference between a brilliant success and a bankrupt e-business effort, Freed says.
Walk like an ISV
Growing numbers of IT leaders, consultants, and vendors agree with Freed’s assessment: IT is taking on the attributes of ISVs. Succeeding today takes more than migrating sales, marketing, purchasing, and customer service to the Net, and more than adopting new architectures and platforms to keep pace technologically.
According to experts, success requires thinking and acting like an ISV, and adopting proven practices and development methodologies that have long separated successful vendors of shrink-wrapped apps from abject market failures.
In fact, IT organizations that ignore the best practices employed by leading ISVs do so at their own peril. Consider the stunning prediction issued in September 1999 by analysts at the Gartner Group Inc. of Stamford, Conn.: 75 percent of e-business efforts will fail, due in large part to a lack of adherence to best development practices and methodologies.
“You need to look at some of the best development practices that [ISVs] have used for years to be sure their products make the grade,” says Mickey Lutz, VP of IT at PHH Vehicle Management Services, in Hunt Valley, Md. “For most IT [shops], that usually means one thing: They have to get serious about software testing for the first time.”
Lutz should know. As head of the information technology arm of Avis Rent-A-Car Systems Inc., Lutz and his PHH group were charged with rapidly evolving Avis’ UNIX-based client/server environment into a bleeding-edge distributed architecture.
The new requirements called for transforming Avis’ legacy enterprise architecture from one that served a handful of internal agents, to a publicly accessible fleet management system that thousands of Avis clients could access live, direct from their desktop PCs.
Ambitious from the start, Avis aimed to do more than just surface marketing and reporting schemes on the Web. The new architecture would touch literally every customer, every driver, and every one of the 350,000 vehicles in the company’s corporate rental fleet.
The first application targeted was a huge vehicle fleet maintenance solution, where clients could go online and manage vehicle histories, repair costs and operational expenses, driver profiles, and safety training reports, as well as analyze accident records.
It was a high-velocity U-turn for Avis. The company historically relied on its customer call centers to support client queries by telephone, with monthly reports generated by computer and delivered to fleet managers by snail mail.
“When we started this effort, we were a principal vendor, but not a principal app on the fleet manager’s desktop,” Lutz says. “Today when they manage their fleet, we are the principal application they use. We’re no different from Microsoft [Corp.] or Sun [Microsystems Inc.] in that regard. We have become a mission-critical software provider.”
Critical mission, critical testing
Lutz’s team decided the shift from mission-critical vendor to mission-critical ISV called for mission-critical testing. It was a prescient call: had the company not applied automated stress-testing technologies, the system would have collapsed on its first day online.
“The application just wouldn’t work under load,” Lutz recounts. “It ran great with a small team of QA testers exercising it, but when we applied the load-testing software, it wouldn’t work.”
Using the LoadRunner load-testing software from Mercury Interactive Corp. of Sunnyvale, Calif., Lutz’s team was able to simulate 10,000 concurrent users banging away on the app. The problem was traced to a bug in the ColdFusion app server from Allaire Corp. of Cambridge, Mass.
After Allaire issued a patch, the system was subjected to another round of load testing. This time, it passed. As development proceeded, Lutz’s team brought more customer data online, eventually exposing Avis’ entire 600GB data warehouse to the Web.
Each step of the way, the team repeatedly threw app modules into a load-testing pressure cooker. Lutz says that, at the time, without Mercury’s LoadRunner product, this extreme level of load testing would have been beyond the realm of possibility.
That’s because most load-testing products required pools of PCs. The server-based LoadRunner required only one central server.
“Typically with client/server or Web stress testing, you have to drive it from multiple PCs,” Lutz explains. “I would have had to commandeer entire buildings of PCs. It would have been impossible to test the kinds of numbers we’re talking about.”
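For readers who haven’t watched one run, the core of a load driver is simple: spin up many simulated users from a single machine and tally latencies and failures as they pound an endpoint. The listing below is a minimal sketch of that idea in Python; the URL, user counts, and function names are hypothetical stand-ins, not LoadRunner scripts, and a commercial tool layers on session handling, think times, ramp-up profiles, and far larger user populations.

# Illustrative only: a bare-bones concurrent-load driver, not LoadRunner.
# TARGET_URL, VIRTUAL_USERS, and REQUESTS_PER_USER are placeholder values.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://example.com/fleet/report"   # hypothetical endpoint
VIRTUAL_USERS = 100                              # scale up to approximate heavier load
REQUESTS_PER_USER = 10

def virtual_user(user_id):
    """Simulate one user issuing a series of requests, recording each latency."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        try:
            with urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            latencies.append(time.time() - start)
        except OSError:                          # connection errors, HTTP errors, timeouts
            latencies.append(None)               # record the failure
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    timings = [t for user in results for t in user if t is not None]
    failures = sum(1 for user in results for t in user if t is None)
    print(f"requests: {VIRTUAL_USERS * REQUESTS_PER_USER}, failures: {failures}")
    if timings:
        print(f"average latency: {sum(timings) / len(timings):.3f}s")

Pushing such a driver to thousands of simulated users from one box is largely a matter of swapping threads for an asynchronous event loop and aggregating the counters, the kind of heavy lifting the commercial products handle.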
In terms of predictability, the testing results have been “right on,” Lutz says. The Avis site zoomed to an average of 90,000 hits per day soon after it was deployed, with a one-day peak of 141,000.
“We’ve had no problems due to load,” Lutz says. As PHH gets set to scale the site again with a new development phase to open the app to more online customers, the group has purchased an additional LoadRunner license to enable scalability testing beyond 30,000 simulated users.
“No architecture can scale infinitely,” says Lutz. “Every time you add or change something, you have to test. We want to take it up further, and stay ahead of the curve as far as numbers of users. Load testing lets us do that.”
On the razor’s edge
Spurred on by extreme market pressure, companies are forced to stay ahead of the curve: they must relentlessly update, upgrade, rev, improve, add features, embrace trends, and innovate technologically, all within the software development process.
The hallmark of any successful ISV that needs to ship shrink-wrapped product to a fickle enterprise marketplace is the aggressive release mentality one would expect to find at Microsoft, Sun, or Red Hat Inc.
It’s not the kind of thinking normally associated with IT organizations, which historically deal with internally driven business requirements that take months to refine, development cycles that are measured in years, and application systems that are built to span decades.
Examine the cycle time of any e-business shop, though, and you’ll find development lifecycles pegged to monthly, weekly, daily, even hourly release builds.
“IT used to operate under the notion that you write perfect requirements, and everything flows from there,” says Sam Guckenheimer, the Lexington, Mass.-based senior director of automated testing products for Rational Software Corp., in Cupertino, Calif. “But the Web is iterative. The requirements change daily. We’ve seen some dot-com organizations with six-hour release cycles.”
E-business developers agree, and add that compressed application lifecycles aren’t the only challenge in maintaining high-quality customer-facing e-apps. The architecture of Web-based systems is inherently complex, heterogeneous, and fragile.
“We might do five releases in three months,” says William Flow, manager of software quality assurance for Frontier Corp., in Rochester, N.Y. “And it’s not just the Web client we’re revving. We have to test all the stuff–integration, databases, other Web apps we hit, e-mail systems–it can be hundreds of things. It’s become impossible to do it manually.”
Frontier exemplifies the diverse mix of platforms many enterprise organizations struggle to integrate as they fight their way to the Web. The company’s architecture blends mainframes with Solaris UNIX and Linux IP servers running on Intel, SPARC, and UltraSPARC machines.
Automating quality
Born over 100 years ago as Rochester Telephone, a small local telco, the company began buying up smaller carriers after the AT&T breakup. A long string of acquisitions later, Frontier emerged as the country’s fifth-largest domestic long-distance carrier.
For the past two years, Frontier’s IT organization, under the leadership of CEO Joe Clayton, has been charged with aggressively moving the firm’s entire computing infrastructure to the Web. The initiative is dubbed TMN, for Telco Management Network.
The e-business programming team at Frontier is turning to automated testing technologies to maintain the highest service levels for the firm’s customer-facing apps. The reason: With virtually all of the company’s systems headed for the Web, a failure in any of them translates directly into lost revenue.
“There was a time when application downtime didn’t directly impact revenue,” Flow says. “Today the revenue for our company is based on these Web-based apps. If any aspect is down or not doing its job properly, I’m losing revenue.”
The laundry list of Web apps under development at Frontier runs the gamut from order-entry systems, to inventory control, to customer care and billing, and everything in between. To cope with the varied testing requirements demanded by these increasingly interdependent Web systems, Flow says he’s had to redefine the role of QA engineer.
“My QA engineers have a dual role,” Flow explains. “They play unit testing, and they play integration QA.” Flow says he integrates QA engineers directly into the development process from day one. This way they can understand the user requirements and application specs, and ensure the code delivers.
Once the development team declares a unit “code complete,” Flow has his QA testers switch gears. “When [the developers] say something’s complete, my QA engineer has to change his hat from a unit tester to an integration tester. That’s where the automated tools come in.” (See: Stressed out from stress testing.)
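The hat-switch Flow describes maps directly onto test code. Below is a small, hypothetical sketch written for a pytest-style runner: the first test exercises one function in isolation, the unit tester’s hat, while the second drives the same logic through a stand-in for a neighboring system, the kind of check that gets handed to the automated tools once a unit is declared code complete. None of the names correspond to Frontier’s actual systems.

# Hypothetical example of the "two hats": a unit test and an integration test.

def order_total(line_items):
    """Pure business logic: total an order from (item, quantity, price) tuples."""
    return sum(qty * price for _item, qty, price in line_items)

def place_order(line_items, inventory):
    """Reserve stock for each line item, then return the order total."""
    for item, qty, _price in line_items:
        inventory.reserve(item, qty)
    return order_total(line_items)

class FakeInventoryService:
    """Stand-in for a neighboring system the application depends on."""
    def __init__(self):
        self.stock = {"widget": 5}

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise RuntimeError("insufficient stock")
        self.stock[item] -= qty

def test_order_total_unit():
    # Unit tester's hat: one function, no outside systems involved.
    assert order_total([("widget", 2, 10.0)]) == 20.0

def test_place_order_integration():
    # Integration tester's hat: the same logic exercised through the dependency.
    inventory = FakeInventoryService()
    assert place_order([("widget", 2, 10.0)], inventory) == 20.0
    assert inventory.stock["widget"] == 3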
Multiple points of failure
For example, Frontier’s Inventory Management System (IMS) is the needle’s eye through which five other enterprise Web systems are threaded. Virtually every conceivable interaction between these business-critical, codependent application systems must be rigorously tested.
IMS manages the company’s total inventory and, as such, is depended upon by virtually all the other major application systems. Five different Web-based apps rely on it, among them the company’s product configurator, order-entry system, workflow engines, and billing system.
“We used to just test the GUI,” Flow says. “Does the app ask the right questions, do the forms work, is the data saved, and so on. But now we have to make sure every app hits IMS, and that the data interacts properly with other apps in the dependency chain. This is where integration testing takes over, and why we had to find an automated tool.”
Because the complexity of the integration testing was beyond human means, Flow’s group turned to automation–specifically, Compuware’s QA Director. The product allows multiple applications and databases to be scripted and executed simultaneously–a key feature for integration testing.
Flow’s team uses QA Director to exercise all the applications and their databases and to generate reports that flag errors in system interactions. “Application A touches application B, and B hits C and D, while E might hit A,” Flow says. “QA Director can actually manage this kind of elaborate test.”
Now, whenever Frontier prepares to issue a new release, the build is first subjected to a full integration regression test suite, managed by scripts running under QA Director. This ensures previously tested functionality is unchanged and validates the accuracy of new features.
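To make the dependency chain concrete, here is a rough sketch of what one such cross-application regression check might look like, written in Python rather than in QA Director’s own scripting. The endpoints and record fields are invented for illustration; the shape of the test is the point: read from each system in the chain, then assert that the upstream record surfaces correctly downstream.

# Hypothetical cross-application regression check; the URLs and fields below
# are illustrative stand-ins, not Frontier's actual interfaces.
import json
from urllib.request import urlopen

ORDER_ENTRY_URL = "http://order-entry.example.internal/api/orders"   # "app A"
IMS_URL = "http://ims.example.internal/api/items"                    # shared inventory system
BILLING_URL = "http://billing.example.internal/api/invoices"         # downstream "app C"

def fetch_json(url):
    """Pull a JSON document from one of the systems under test."""
    with urlopen(url, timeout=15) as resp:
        return json.load(resp)

def test_order_flows_through_dependency_chain():
    """A record created upstream must be visible and consistent downstream."""
    orders = fetch_json(ORDER_ENTRY_URL)
    items = {item["sku"] for item in fetch_json(IMS_URL)}
    billed = {invoice["order_id"] for invoice in fetch_json(BILLING_URL)}

    for order in orders:
        # Every ordered SKU must exist in the inventory system...
        assert order["sku"] in items, f"order {order['id']} references an unknown SKU"
        # ...and every order must eventually surface as an invoice.
        assert order["id"] in billed, f"order {order['id']} was never billed"

A release candidate moves forward only when an entire suite of such checks runs clean, the same gate Flow applies with QA Director.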
Risk avoidance
Flow says that, without the availability of Web-savvy integration testing tools, Frontier’s entire application portfolio would be at risk. “If we didn’t have these automated tools, we simply couldn’t do the testing,” he says. “We’d be in a world of hurt right now.”
Certainly automated testing tools can ease the pain of integration testing and help ensure a site’s ability to withstand heavy user loads, the likes of which no legacy IT app has ever been asked to sustain. But no automated testing tool available today can replace beta testing with qualified users culled from the application’s target audience. It remains the only engineering process that can isolate a bad user interface.
“As a [testing tool] vendor, I hate to say that our tools can’t perform a certain function, but the truth is, usability testing is the one thing no automated tool can do,” says Diane Hagglund, senior manager for e-business product marketing at Mercury Interactive.
Hagglund says usability testing might never be automated, because it has to do with responding to human emotions–something that has yet to be computerized. “We’re seeing more and more traditional IT shops doing what ISVs would call beta testing, under the guise of usability testing,” she says.
That’s exactly what’s happening at Acentris Wireless Communications, a telco services reseller in Seattle. There the beta test process has been integrated into the overall development lifecycle, with a core group of developers, internal users, and customers comprising Acentris’ beta test team.
The company recently migrated from its legacy Microsoft Visual Basic 4 (VB4) client/server system to a fully distributed platform. The new system is built in VB6 and leverages several beta technologies itself, including a COM+ framework and Windows 2000 Beta 3 RC1 servers.
“We prototyped the Web UI first, and sent it out to a small group of customers and internal users for beta testing,” says Acentris VP Darren Lang. “That gave us a huge head start, because we were able to fine-tune the user experience and hand the UI off to the programmers early in the development process.”
Acentris’ development team was then free to focus on the migration’s nuts and bolts, and use automated tools to stress and regression test the application architecture, knowing usability was already in hand.
“The reputation of the IT department no longer rides on how well they manage the printers, back up the servers, or get a new PC on your desk,” says Michael Marquardt, president of Internet Operations Center Inc., an e-commerce application hosting company in Southfield, Mich. “It’s now the software development arm of the business, and that means we need to think and act more like ISVs, and less like islands of technology.”
Rich Levin covers IT for CBS Radio and the Coast to Coast Radio Network. He can be reached at RBLevin@RBLevin.net.