Most server rooms and data centers subscribe to the maxim that more is better: more servers, that is.
According to a survey by market research firm Novo1 Inc., the average data center has 230 servers. That number is rising 10 percent to 20 percent each year. Further, these data centers typically host a diverse mixture of boxes from HP, Dell, IBM and others. As a result, managing servers can be a time-consuming activity.
"Server management is currently done on a per-server basis, while IT services are offered across a constellation of servers," says John Humphreys, an analyst with IDC. "So there is an inherent disconnect between offering IT services and managing them. Standards become a way to tear down that wall."
Accordingly, the Distributed Management Task Force (DMTF) has issued a standard known as the Systems Management Architecture for Server Hardware (SMASH). Among the benefits of SMASH: reducing the management burden of server hardware, cutting the cost of server administration, extending administrators' reach to remotely located servers, and standardizing the management of heterogeneous environments.
In days gone by, hardware was the biggest data center cost. With the proliferation of small servers, the cost equation has gradually shifted. IDC's Humphreys goes so far as to say that server management costs have skyrocketed.
"Today we find the hardware cost versus management cost scale has tipped to the point where the majority of spending is on maintaining existing systems," says Humphreys. "That means less and less capital is available for investment areas like business process improvements or using technology to innovate."
SMASH's main value comes from its Command Line Protocol (CLP), which provides a way to manage any number of different types of servers with the same simple script. Instead of separate scripts for every vendor, and sometimes for each server, IT managers can use one script for the same function on every server in the data center. Some SMASH CLP scripts, in fact, are as short as two words.
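The one-script idea can be sketched in a few lines of Python. The verb-plus-target shape of the CLP command ("reset /system1") follows the standard's pattern, but the server inventory and the `run_clp` transport function below are hypothetical stand-ins, not part of any real product:

```python
# Illustrative sketch: one SMASH CLP command applied to every server,
# regardless of vendor. The inventory and run_clp() transport are
# made up for illustration.

servers = ["hp-web01", "dell-db02", "ibm-app03"]  # mixed-vendor inventory

def build_command(action: str, target: str) -> str:
    """A SMASH CLP command is just a verb plus a target -- often two words."""
    return f"{action} {target}"

def run_clp(host: str, command: str) -> str:
    # Placeholder: in practice the command would travel to the server's
    # management controller over a transport such as SSH or Telnet.
    return f"[{host}] {command}"

# The same two-word script reboots every box in the rack.
results = [run_clp(host, build_command("reset", "/system1")) for host in servers]
for line in results:
    print(line)
```

The point is that nothing in the loop is vendor-specific: the command string is identical for the HP, Dell and IBM boxes alike.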
The reason SMASH is so simple and straightforward lies in an earlier DMTF project, the Common Information Model (CIM), which is now built into most existing servers. SMASH leverages the richness of CIM to simplify what's needed in a SMASH script.
DMTF's Bumpus characterizes it this way: CIM supplies a lexicon of nouns, which SMASH harnesses to reduce its commands to a handful of verbs.
"System managers told us they wanted one command line protocol they could write scripts for and interact with in a consistent way," he says. "They were fed up with having to use scripts for each separate vendor. Even across product lines within a single vendor, the required scripts can differ."
Essentially, there are two types of protocols needed to manage servers: interactive tools such as SMASH for scripting and programmatic tools such as HP OpenView, Dell OpenManage, IBM Tivoli and CA Unicenter.
Programmatic tools are vital for correlation, the running of business systems, interacting with the network, and in particular, managing applications. These are functions SMASH cannot match.
On the downside, these heavy-duty management consoles can be performance hogs and are managed via a graphical user interface (GUI). They can be cumbersome and overkill for simple, repetitive management tasks, which system administrators prefer to execute with simple scripts at the command line.
Bumpus gives the example of upgrading the firmware on every server in the data center.
"Before, I would have to create and maintain a library of scripts for Dell, for HP, for IBM, and so on," says Bumpus. "Now all I need is one script; I change the target and use the firmware applicable to each type of server. This cuts the time requirement down considerably."
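Bumpus's firmware example can be sketched as a single loop in which only the target host and the firmware image vary. The vendor-to-image mapping, hostnames and the `-source` option syntax below are illustrative assumptions, not taken from the SMASH specification:

```python
# Sketch of the one-script firmware upgrade: the same command template
# serves every vendor, with only the firmware image swapped in.
# Hostnames and image file names are made up for illustration.

firmware_by_vendor = {
    "dell": "dell_bmc_v2.bin",
    "hp":   "hp_ilo_v7.bin",
    "ibm":  "ibm_imm_v3.bin",
}

inventory = [("db01", "dell"), ("web02", "hp"), ("app03", "ibm")]

def firmware_command(image: str) -> str:
    # "load" is a SMASH CLP verb; the -source option shown here is an
    # illustrative guess at the syntax, not quoted from the spec.
    return f"load -source {image} /system1"

commands = {host: firmware_command(firmware_by_vendor[vendor])
            for host, vendor in inventory}
for host, cmd in commands.items():
    print(host, "->", cmd)
```

Compare this with the pre-SMASH situation, where the `firmware_command` function itself would have to exist in three vendor-specific variants.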
As well as managing servers as a whole, SMASH addresses individual components. A server, for example, may have multiple processors, sensors, network cards, logical devices and cooling systems, and SMASH can be directed at specific processors, components and subcomponents. You can therefore set up a script that periodically checks the temperature sensors on all machines, showing how power and air conditioning can be adjusted through the day and night to keep equipment from overheating, or to safely reduce the electric bill.
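The temperature-check logic might look like the following sketch. In practice the readings would come from a CLP "show" against each sensor target; here they are hard-coded so the example is self-contained, and the threshold and hostnames are invented:

```python
# Sketch of a periodic temperature sweep across all machines. Real
# readings would come from querying each server's sensor targets via
# the CLP; these values are faked so the logic stands alone.

THRESHOLD_C = 45.0  # illustrative alarm threshold, degrees Celsius

readings = {
    "rack1-srv1": 38.5,
    "rack1-srv2": 47.2,  # running hot
    "rack2-srv1": 41.0,
}

def overheating(readings: dict[str, float], threshold: float) -> list[str]:
    """Return the hosts whose temperature exceeds the threshold, sorted."""
    return sorted(h for h, temp in readings.items() if temp > threshold)

for host in overheating(readings, THRESHOLD_C):
    print(f"ALERT: {host} above {THRESHOLD_C} C -- adjust cooling")
```

Run on a schedule, a check like this is what lets power and cooling be tuned to actual conditions rather than worst-case guesses.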
IPMI Baby One More Time
IPMI stands for Intelligent Platform Management Interface. It is a widely adopted and widely deployed standard that provides lights-out remote monitoring and recovery for servers and blades. According to Avocent Corp., a management software and appliance vendor from Huntsville, Ala., more than 70 percent of servers shipped last year included IPMI capabilities.
IPMI was designed to fill the holes that traditional agent-based software management tools (the programmatic ones covered above) cannot cover. When an OS has hung, for example, software tools are useless; similarly, when a server is turned off, there is nothing they can do about it. With IPMI a user can remotely recover a hung server or power one on. The downside: the command line interfaces of those IPMI servers vary from vendor to vendor.
"Dell and IBM IPMI servers have one version, HP another, and Avocent appliances yet another," says Steve Rokov, technical director of Avocent's manageability solutions group. "Being able to standardize on the same command line environment that all the major vendors agree on is a huge leap forward in usability on the IT side."
And that's exactly what SMASH has achieved: it gives IPMI a standardized, high-level method of scripting.
"If the machine is hung and needs new firmware, SMASH and IPMI let you add the firmware remotely instead of having to physically go to the machine," says Bumpus. "Even if it is powered off, you can still use these protocols via auxiliary power, regardless of the state of the OS."
According to Rokov, there are three main scenarios where SMASH comes into play: large clusters, branch offices, and mixed racks of 1U servers and blades. In clusters, SMASH and IPMI solve problems such as diagnosing or power cycling multiple servers when the OS has hung, or setting thresholds for potential heat and power issues and alerting the management consoles ahead of a meltdown.
Branch offices typically lack the expertise or personnel required to keep an eye on their systems 24/7. Similarly, security can be a big issue. So how do you fill the gap? By placing an appliance out at the branch, you can aggregate alerts and secure access to a single point. Receiving an IPMI "Security Alert" back at the data center, for instance, might indicate that someone just popped open a server chassis at a branch. If that appliance supports SMASH, then the same SMASH scripts can be run from the data center, irrespective of the model or vendor of those branch servers to perform a health check or identify whether changes have been made out in the field.
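The branch-office flow of events, aggregated at an appliance and triaged back at the data center, might look like the following sketch. The event names and site labels are invented for illustration and are not drawn from the IPMI specification:

```python
# Sketch of the branch-office scenario: raw IPMI events forwarded from
# branch appliances are filtered at the data center, flagging the
# security-relevant ones (such as a chassis-intrusion sensor tripping
# when someone pops open a server case). Event names are illustrative.

SECURITY_EVENTS = {"chassis_intrusion", "auth_failure"}

events = [
    ("branch-nyc", "fan_failure"),
    ("branch-sfo", "chassis_intrusion"),
    ("branch-nyc", "temperature_warning"),
]

def security_alerts(events):
    """Filter the forwarded event stream down to security-relevant alerts."""
    return [(site, ev) for site, ev in events if ev in SECURITY_EVENTS]

for site, ev in security_alerts(events):
    print(f"SECURITY ALERT from {site}: {ev}")
```

Because the appliance presents one aggregation point, the same filter works no matter which vendor's servers sit in each branch.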
Additionally, as blades invade the 1U rack, they too have to be managed. IPMI and SMASH don't care whether they run on the blade, on the motherboard, on a plug-in card or in the blade chassis manager. To the administrator (and his or her scripts) it all looks the same, which greatly simplifies system administration.
Here Come the Products
Bumpus reports that SMASH-based products are already in the works. The technology will be integrated into servers via a built-in management controller. With such a broad consensus among vendors, he predicts that the market will be seeing them by either the end of this year or early 2006.
This will include innovative tools from the likes of Avocent and others that will extend the capabilities of IT. For instance, he predicts we will soon see products built on proxy-type tools that let SMASH and legacy protocols talk to each other.
"Such proxies can be used to manage both old and new servers," says Bumpus. "That means you don't have to switch out all your old servers in order to take advantage of SMASH and IPMI."