
Stress for sale


Linda Hayes

One of the biggest trends in software lately is the shift toward application service providers (ASPs)–vendors that deliver their applications over the Internet. That is, instead of investing in the infrastructure needed to run a system–servers, databases, networks, and so forth–customers simply pay a fee to access the same software over the Internet. It’s a great idea, really, because it allows customers to focus on their own core business instead of dealing with the IT demands of an internal infrastructure.

That same concept is now being applied to testing: specifically, to the stress testing of Web sites. For example, Mercury Interactive Corp., a Sunnyvale, Calif., test tools vendor, was the first to announce the availability of a remote-hosted traffic service to test the performance and capacity of a company’s Web site. While Mercury is currently the only company offering these services, others are poised to enter the market.

What does it all mean?

Stress circus

Stress testing has always been, well, stressful. The objective is to create enough user and transaction volume on a system to measure its capacity and performance and to determine if it will stand up to real-world demands. Many a company has launched a new application that was functionally robust, only to have it fail because of unacceptable response times, or because it simply could not support the number of users who needed access to it.

The brute-force approach to stress testing is to invite a large population of users in on a weekend or holiday and have them bang away at your site. This process is cumbersome and expensive for obvious reasons, not the least of which is the difficulty of coordinating user activity so that the actual breakpoints can be identified. The next-generation solution is software that creates virtual users by simulating the activity of a number of terminals or workstations. This process is much better because it reduces the required manpower and can be fine-tuned to measure gradual increments of traffic.
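To make the virtual-user idea concrete, here is a minimal sketch of a homegrown load generator, not any vendor's product: each thread plays one simulated user, and the user count ramps up in increments so the breakpoint becomes visible. The target URL, user counts, and request counts are all hypothetical placeholders.

    # A sketch of the virtual-user idea: each thread plays one simulated
    # user, repeatedly requesting a page and recording the response time.
    import threading
    import time
    from urllib.request import urlopen

    TARGET = "https://staging.example.com/"  # test site, never the live one
    results = []
    lock = threading.Lock()

    def virtual_user(requests_per_user):
        for _ in range(requests_per_user):
            start = time.monotonic()
            try:
                with urlopen(TARGET, timeout=10) as resp:
                    resp.read()
                with lock:
                    results.append(time.monotonic() - start)
            except Exception:
                with lock:
                    results.append(None)  # count the request as a failure

    # Ramp up in gradual increments so the breakpoint is visible.
    for users in (10, 50, 100):
        results.clear()
        threads = [threading.Thread(target=virtual_user, args=(5,))
                   for _ in range(users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        ok = [r for r in results if r is not None]
        avg = sum(ok) / len(ok) if ok else float("nan")
        print(f"{users} users: avg {avg:.3f}s, {len(results) - len(ok)} failures")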

But even the software approach is hard. The tools themselves are arcane, the scripts are complicated to write, and it takes a lot of hardware to generate enough traffic for a meaningful test. Many a company has made a major investment in software, servers, and consultants to set up and execute a stress test that runs for only a few hours. It is like staging a Broadway show that closes after one performance, then has to be restaged from scratch whenever something changes and the whole production must be re-tested.



Enter the Web. Whereas before, companies could reasonably predict the demands on their systems from the number of users, Internet access means that the whole world can visit a site without warning. There is no longer a way to predict or control demand and, to make matters worse, performance is now paramount. Internal users might gripe about response times, but they really have no other options. External users, however, are only a click away from a competitor. If your site slows down, customers might not wait–they might simply go elsewhere.

So, if your company is a hot Internet startup and your business model says your site is going to attract millions of daily hits, how do you make sure your infrastructure is really up to it?

Stress service

Using a remote-hosted stress service, you (or a consultant) develop a set of scripts that execute the most likely pathways through your Web pages. Then you schedule a date and time for the service to turn up the volume of users and transactions on your site while you monitor the supporting infrastructure to determine where the stress points are.
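As a rough illustration, a pathway script can be as simple as an ordered list of pages that one likely user journey visits, with the response time of each step recorded. The base URL and paths below are hypothetical; a real script would also submit form data and carry session state.

    # A pathway script: one likely route through a site, expressed as an
    # ordered list of pages, with the response time of each step recorded.
    import time
    from urllib.parse import urljoin
    from urllib.request import urlopen

    BASE = "https://staging.example.com"  # staging area, not the real site
    PATHWAY = ["/", "/catalog", "/catalog/item/42", "/cart", "/checkout"]

    def run_pathway():
        timings = []
        for path in PATHWAY:
            start = time.monotonic()
            with urlopen(urljoin(BASE, path), timeout=10) as resp:
                resp.read()
                status = resp.status
            timings.append((path, status, time.monotonic() - start))
        return timings

    for path, status, secs in run_pathway():
        print(f"{status} {secs:.3f}s {path}")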

Of course, you have to do the proper groundwork, making sure that the data being used is valid and that the service can access your test site or staging area so it doesn’t bring down the real site.

What this means is that you don’t have to invest in the software and servers needed to generate the potentially huge volumes you are hoping for. It also means, of course, that you must make an ongoing investment in keeping the scripts current as your site changes, as well as in executing them.

For many companies, site changes can be a daily occurrence, although changes to the scripts only matter when they are executed. So the next question is, how often must you stress your site?

Stress testing

The most obvious times to verify performance are when you change the underlying site infrastructure, such as by adding a new server, updating the database, or making significant software changes. Basically, anything that can impact your site’s performance should be verified before it goes live.
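One lightweight way to enforce that rule is a go/no-go gate: after any infrastructure change, rerun the stress scripts and block the release if response times or error rates slip past agreed limits. This is only a sketch; the thresholds and the sample numbers are hypothetical.

    # A go/no-go gate: fail the release if the latest stress-test run
    # regresses past agreed limits. Thresholds and inputs are made up.
    import statistics
    import sys

    MAX_AVG_SECONDS = 2.0    # target average response time
    MAX_FAILURE_RATE = 0.01  # tolerate at most 1% failed requests

    def gate(timings, failures, total):
        avg = statistics.mean(timings)
        rate = failures / total
        print(f"avg {avg:.2f}s, failure rate {rate:.1%}")
        return avg <= MAX_AVG_SECONDS and rate <= MAX_FAILURE_RATE

    # Placeholder numbers standing in for a real test run's output.
    if not gate(timings=[0.8, 1.1, 1.7, 1.9], failures=1, total=400):
        sys.exit("Performance regressed; do not go live.")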

The other implication, of course, is that this approach escalates traffic across the Internet infrastructure itself. Instead of just the usual and constantly expanding population of real users going about their daily cyber-travels, the mix now includes virtual users whose activity is simulated by software. And since there is only one Internet–there is no test or staging Internet–all of that traffic travels over the real thing.

From a test integrity point of view, the lack of a “test Internet” is both good and bad. It’s good because your site is receiving its simulated users from the same source as the real ones. It’s bad because at some point you may see performance degradation resulting from trying to be sure there’s no performance degradation. For this reason, many sites schedule their performance tests during off hours, such as the late night or early morning; so long as your customers are not global–thereby spanning the time zones–this may reduce the risk of impacting performance during the test.

Where will it all end? No doubt, the bandwidth and reliability of both Web sites and the Internet itself will continue to expand as more users come online and more capacity is added. Web technology is amazingly young for its pervasiveness, and as it matures the infrastructure for managing high volumes will stabilize. Until then, hackers will make the news by attacking sites with something as simple as too much traffic. Are you prepared? //

Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at linda@worksoft.com.
