
Capacity Planning: Art, Science or Magic?


Science fiction writer Arthur C. Clarke told us that “Any sufficiently advanced technology is indistinguishable from magic.”

Anyone who has worked in IT for long can tell of the miracles they have pulled off. But a manager’s job description doesn’t stop with technical wizardry. It also includes that other piece of the black arts: predicting the future.

No, not something as simple as who will win next spring’s Oscars, but how many bits users will send over the network in six months, or how many terabytes of storage they will consume.

“Capacity Planning as an art form has become more important since the dot.com meltdown, as it demonstrated the dangers of building networks to dubious over-rated forecasts or philosophical visions,” says Andy Bolton, Chief Executive Officer of Capacitas, Ltd., a capacity planning consultancy with offices in London and New York. “Networks should only be built to customer demand, which is difficult but achievable.”

How exactly, though, do you achieve it?

Crystal balls and tea leaves are notoriously inaccurate for predicting IT utilization. One needs the magic of capacity planning tools.

Following the Trendsetters

The simplest form of capacity planning is trending — taking past and current utilization figures, and using these to estimate future growth trends.

“Trending does enable reasonably accurate quick-and-dirty analyses to take place, with a good return on investment for this work,” says Bolton. “However, the capacity buffer needed, due to trending inaccuracies, is often greater than the cost of performing a more accurate capacity forecasting analysis.”

In many cases, it also can be achieved without having to buy software or learn new skills. If a company owns a network and systems management (NSM) package, for example, it probably already has the tools needed to gather the utilization figures required for trending. Examining the graphs of network traffic, storage or CPU utilization will give a rough idea of when to upgrade.
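To make the idea concrete, here is a minimal trending sketch in Python. The weekly utilization samples and the 80 percent upgrade threshold are hypothetical stand-ins for whatever an NSM tool exports; the point is simply that a straight-line fit over history yields a rough “upgrade by” date.

```python
# Minimal trending sketch: fit a straight line to past utilization samples
# and estimate when it crosses an upgrade threshold. All figures are
# hypothetical placeholders for data exported from an NSM tool.

from datetime import date, timedelta

# (day_index, utilization %) samples, e.g. weekly averages from an NSM package
samples = [(0, 42.0), (7, 44.5), (14, 45.1), (21, 47.8), (28, 49.0), (35, 51.2)]
threshold = 80.0  # % utilization at which the upgrade should already be in place

# Ordinary least-squares fit: utilization = slope * day + intercept
n = len(samples)
sum_x = sum(x for x, _ in samples)
sum_y = sum(y for _, y in samples)
sum_xy = sum(x * y for x, y in samples)
sum_xx = sum(x * x for x, _ in samples)
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

if slope <= 0:
    print("No upward trend; no upgrade date predicted.")
else:
    # Days from the first sample until the threshold is crossed,
    # then shifted so "today" is the date of the most recent sample.
    days_until_threshold = (threshold - intercept) / slope
    upgrade_by = date.today() + timedelta(days=days_until_threshold - samples[-1][0])
    print(f"Trend: +{slope * 7:.1f}% per week; plan the upgrade before {upgrade_by}.")
```

As Bolton notes, the buffer you must add to cover the inaccuracy of such a fit can cost more than doing a proper forecasting analysis in the first place.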

If the company doesn’t have an NSM in place, however, it lacks the utilization data needed to predict overloads and bottlenecks.

“We had issues where certain elements were becoming over-utilized or saturated,” says Benjamin Natal, network engineer for the Sheridan Press in Hanover, Penn. “We needed to implement a capacity planning process covering our critical systems — key servers, network segments, routers, switches, T-1 links — that would let us be proactive in managing our resources.”

There are a variety of trending tools on the market that can do the job. These include Trending Manager by Qualitech Solutions of Charlotte, N.C.; Orion by SolarWinds of Tulsa, Ok.; Denika by Somix Technologies of Sanford, Me., and Expert Observer by Operative Software Products of Los Angeles.

Sheridan chose Denika and got it up and running in a matter of hours. In addition to using it for the original purpose of capacity planning, Natal says he also has configured the software to identify and resolve network bottlenecks, and to keep a watch on server disk utilization so he can clean up a disk before it affects performance.

Blending Art and Science

While trending provides a basic form of capacity planning and is useful in certain situations, it has its limitations.

As British statesman Edmund Burke wrote in his 1791 Letter to a Member of the National Assembly, “You can never plan the future by the past.”

Trending does not take into account discrete events, such as layoffs, mergers, network upgrades or replacing direct attached storage with a storage area network (SAN). Nor does it take into account the characteristics of a given piece of equipment under different traffic loads. Going from a single-CPU server to a four-way box doesn’t result in four times the throughput.
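To illustrate that last point, here is a hedged sketch using an Amdahl’s-law-style scaling model. The 10 percent serial fraction is an assumption chosen for illustration, not a figure from any tool or vendor mentioned in this article.

```python
# Hedged illustration (Amdahl's-law-style scaling used as an assumption,
# not any particular tool's model) of why four CPUs rarely mean 4x throughput.

def relative_capacity(cpus: int, serial_fraction: float = 0.10) -> float:
    """Throughput relative to one CPU when a fraction of the work cannot be
    parallelized (lock contention, serialized I/O, OS overhead, etc.)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

for cpus in (1, 2, 4, 8):
    print(f"{cpus}-way: {relative_capacity(cpus):.2f}x a single-CPU box")
# With 10% of the work serialized, a four-way box delivers roughly 3.1x, not 4x.
```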

Accurate capacity predictions require the use of more sophisticated tools, which let the user model different scenarios and see the results. These tools do not model the entire infrastructure, but focus on a certain aspect of it. Here are some examples:

  • Server modeling tools are available from TeamQuest Corporation of Clear Lake, Iowa; BMC Software, Inc. of Houston, Texas, and SAS Institute, Inc. of Cary, N.C.;
  • Mainframe tools include Houston, Texas-based BMC Software, Inc.’s Perform & Predict Modules, and Dallas, Texas-based Merrill Consultants’ MXG;
  • Computer Associates, Inc. of Islandia, N.Y.; Hewlett-Packard Company of Palo Alto, Calif., and IBM, based in Armonk, N.Y., all include network capacity planning with their management suites. OPNET Technologies, Inc. of Bethesda, Md. has a standalone network modeler.

“To my knowledge, no tool currently plans all seven layers of the network stack, as most concentrate on closely linked network layers,” says Bolton. “Although this is a very fragmented approach, it fits well with most carriers and corporations that build networks, as their network teams normally only cover a limited number of layers.”

Sprint Communications Company LP in Overland Park, Kan. is one such company. Joel Allen, senior capacity planning manager, uses The Information Systems Manager, Inc.’s Perfman software to do CPU, tape and DASD (direct access storage device) capacity planning on the telecom’s IBM z900 series mainframes. But elsewhere in the company, TeamQuest software models usage on the mid-range servers. Allen says Perfman gives him much faster turnaround on CPU modeling, which allows him to perform more simulations.

“Previously, if we forecast an eight percent growth rate, but someone then wanted to see what it would look like with six percent growth, we would have to rerun the simulation. And we wouldn’t be able to give them an answer till the next day,” he explains. “With Perfman, it is just a matter of tweaking the number and we can give them an answer in about 15 minutes.”
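The arithmetic behind such a what-if is simple compound growth; the speed Allen describes comes from keeping the growth rate as a parameter that can be tweaked and re-run rather than rebuilding the whole simulation. A rough sketch with hypothetical figures (not Sprint’s):

```python
# Rough what-if sketch (hypothetical figures, not Sprint's): the same
# compound-growth projection re-run with the growth rate as a parameter.

def project_utilization(current_pct: float, annual_growth: float, months: int) -> float:
    """Project utilization forward assuming compound annual growth."""
    return current_pct * (1.0 + annual_growth) ** (months / 12.0)

current = 55.0  # % busy today (placeholder)
for growth in (0.08, 0.06):  # the 8% forecast vs. the 6% what-if
    in_a_year = project_utilization(current, growth, 12)
    print(f"{growth:.0%} growth -> {in_a_year:.1f}% busy in 12 months")
```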

Allen has used the modeling software in a variety of ways.

For example, Sprint has mainframes from several different vendors and used Perfman to model what would happen to workloads when switching from one platform to another. He also has run simulations to determine what would happen on a given CPU if it had extra memory or an additional engine to manage.

He says that simulations have come in within 10 percent of actual results, but that generating accurate predictions is a mix of art and science. The science comes in knowing which statistics to measure and model. The real work comes in getting others in the company to give you accurate forecasts of future business needs.

“It’s not just a straightforward scientific process. You have to convince people there is a benefit to letting you in on their plans,” Allen says. “Then, when I compare the forecast to the trends, I rely real heavily on my eyeballs and seeing if it really makes sense to me.”
