Virtualization is catching on like never before. Just about every server vendor is advocating it heavily, and IT departments worldwide are buying into the technology in ever-increasing numbers.
“The use of virtualization in the mainstream is now relatively commonplace, rather than just in development and test,” said Clive Longbottom, an analyst at U.K.-based Quocirca. “In addition, business continuity based on long-distance virtualization is being seen more often.”
As a result, the time has come to more closely align hardware purchasing with virtualization deployment. So what are some of the important do’s and don’ts of buying servers and other hardware for a virtual data center infrastructure? What questions should IT managers ask before they make selection decisions on servers? And how should storage virtualization gear be integrated into the data center?
Do’s and Don’ts
There are, of course, plenty of ways to virtualize, depending on the applications being addressed. This article will focus on a typical case where infrastructure and business logic applications are the main targets.
With that in mind, one obvious target is memory. It is a smart policy to buy larger servers that hold more memory to get the best return on investment. While single- and dual-processor systems can host multiple applications under normal circumstances, problems arise when two or more hit peak usage periods.
“Our field experience has shown that you can host more VMs [virtual machines] per processor and drive higher overall utilization on the server if there are more resources within the physical system,” said Jay Bretzmann, worldwide marketing manager, System x at IBM (Armonk, N.Y.). “VMware’s code permits dynamic load balancing across the unused processor resources allocated to separate virtual machines.”
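The sizing logic behind "bigger hosts, more VMs" can be sketched as back-of-the-envelope arithmetic. The figures below (per-VM memory, peak CPU share, headroom ratios) are illustrative assumptions, not IBM or VMware guidance; real workloads vary widely.

```python
def vms_per_host(host_ram_gb, host_cores,
                 vm_ram_gb=4, vm_peak_cores=0.5,
                 ram_headroom=0.85, cpu_headroom=0.80):
    """Estimate how many VMs fit on one host, capped by the scarcer
    of memory and peak CPU, with headroom reserved for the hypervisor
    and usage spikes. All defaults are assumed example values."""
    by_ram = int(host_ram_gb * ram_headroom // vm_ram_gb)
    by_cpu = int(host_cores * cpu_headroom // vm_peak_cores)
    return min(by_ram, by_cpu)

# A larger 4-socket box (128 GB RAM, 16 cores) vs. a small
# 2-socket box (16 GB RAM, 4 cores):
print(vms_per_host(128, 16))  # CPU-bound: 25 VMs
print(vms_per_host(16, 4))    # memory-bound: 3 VMs
```

The point of the exercise: the larger host carries far more than four times the VMs of the small one, so per-VM overhead (power, cooling, management) drops, which is the return-on-investment argument for buying bigger, memory-rich servers.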
He advised buying servers with more reliability features, especially those that predict pending failures and send alerts to move the workloads before the system experiences a hard failure. Despite the added cost, organizations should bear in mind that such servers are the cornerstone of any virtualization solution. Therefore, they deserve the lion’s share of investment.
“Businesses will lose significant productivity if the consolidation server fails,” said Bretzmann. “A hard crash can lead to hours of downtime depending upon what failed.”
Longbottom, however, made the point that an organization need not spend an arm and a leg on virtualization hardware, as long as it doesn't go too low end.
“Cost of items should be low — these items may need swapping in and out as time goes on,” said Longbottom. “But don’t just go for cheapest kit around — make sure that you get what is needed.”
This is best achieved by looking for highly dense systems: either stackable within a 19-inch rack or deployed as a blade chassis system. Focusing on density helps contain overall cooling and power budgets. Remember, too, that not every server can be managed in a virtual environment, so all assets should be recognizable by standard systems management tools.
Just as there are things you must do, several key don'ts should be observed as well. One rule that is often violated: do not configure servers with large amounts of internal storage.
“Servers that load VMs from local storage don’t have the ability to use technologies like VMotion to move workloads from one server to another,” cautioned Bretzmann.
What about virtualizing everything? That’s a no-no, too. Although many applications benefit from this technology, in some cases, it actually makes things worse. For example, database servers should not be virtualized for performance reasons.
Support is another important issue to consider.
“Find out if the adoption of virtualization will cause any application support problems,” said Bretzmann. “Not all ISVs have tested their applications with VMware.”
Most of the provisos covered above also apply to purchasing gear for storage virtualization.
“Most of the same rules for classic physical environments still apply to virtual environments — it’s really a question of providing a robust environment for the application and its data,” said John Lallier, vice president of technology at FalconStor Software (Melville, N.Y.).
While virtual environments can shield users from hardware-specific dependencies, they can also introduce other issues. One concern when consolidating applications on a single virtualization server, for example, is over-consolidating to the detriment of performance while reintroducing a single point of failure: when one physical server fails, multiple virtual application servers go down with it.
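The single-point-of-failure concern can be made concrete with simple expected-downtime arithmetic. The failure rate and recovery time below are assumed example figures, not measured data.

```python
def outage_service_hours(vms_on_host, host_failures_per_year,
                         hours_to_recover):
    """Expected application service-hours lost per year when one
    physical host carries vms_on_host virtual servers: every host
    failure takes all of its guests down at once."""
    return vms_on_host * host_failures_per_year * hours_to_recover

# Assume one hard failure per year and four hours to recover:
# one app per host loses 4 service-hours a year, but ten apps
# consolidated on that host lose 40.
print(outage_service_hours(1, 1, 4))   # 4
print(outage_service_hours(10, 1, 4))  # 40
```

This is why the consolidation host's reliability features and data-protection layers deserve the extra investment: the blast radius of a failure scales with the number of guests.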
“Customers should look for systems that can provide the same level of data protection that they already enjoy in their physical environments,” said Lallier.
He believes, therefore, that storage purchasers should opt for resilient and highly available gear that will keep vital services active no matter what hardware problems arise. In addition, Lallier suggests investing in several layers of protection for large distributed applications that may span multiple application servers. This should include disaster recovery (DR) technology so operations can quickly resume at remote sites. To keep costs down, he said users should select DR solutions that do not require an enormous investment in bandwidth.
As a cost-cutting measure, Lallier advocates doubling up virtual environments. If the user is deploying a virtual environment to better manage application servers, for example, why not use the same virtualization environment to better manage the data protection servers? As an example, FalconStor has created virtual appliances for VMware Virtual Infrastructure that enable users to make use of its continuous data protection (CDP) or virtual tape library (VTL) systems that can be installed and managed as easily as application servers in this environment.
Of course, every vendor has a different take. Network Appliance, aka NetApp (Sunnyvale, Calif.), provides an alternative to FalconStor using the snapshot technology available in its StoreVault S500. This storage array handles instant backups and restores without disrupting the established IT environment.
“Useful products are able to host VMs over multiple protocols, and the StoreVault can do it via NFS, iSCSI or FCP — whatever your environment needs,” said Andrew Meyer, StoreVault product marketing manager at NetApp.
“Don’t get trapped into buying numerous products for each individual solution. One product that is flexible with multiple options (can handle VMs, create a SAN, handle NAS needs, provide snapshots and replication) may be a smarter investment as a piece of infrastructure.”
This article was first published on Serverwatch.com.