Virtualizing your environment involves more than simply taking a bunch of old servers (possibly lying around since the last boom of the 1990s) and partitioning them. In most cases, it means buying more hardware.
This makes virtualizing pretty much a no-brainer for a company starting out or one building a data center from the ground up. But most companies don’t have that kind of cash just sitting around.
“Don’t skimp on hardware,” Wes Noonan, Lead Technical Analyst at NetIQ, cautioned in a briefing with ServerWatch. “If you skimp on hardware it won’t necessarily cost you in capital. It will, however, cost you in other ways,” he elaborated, citing the human toll, personnel issues, and lost sales.
So how do you explain the need to invest heavily in hardware?
Quantify, Theodore Ritter, an analyst with Nemertes Research, told the audience in a session titled “Making the Case: Selling Virtualization When ROI Isn’t Enough.”
He said that enterprises typically go after the low-hanging fruit and assume all costs will go down. This is fine in the beginning, as costs generally do go down. It will not, however, hold in the long term, once enterprises move beyond low-performing servers and operations.
Oftentimes, the deployments themselves occur over several years. According to Ritter, a typical deployment process takes more than two years, and for a larger organization it can run on a five- to seven-year time frame. With such projects, you can see the initial benefits quickly, but as you move along into production and heavy database apps, the ROI is not as clear or as quickly forthcoming.
Enterprises, therefore, must take a long-term view. Ritter recommends finding a key metric to measure early in the process, one that takes flexibility and agility into account. Tracked consistently, such metrics become standard measures and, in time, can form the basis of a business case.
Ritter was emphatic about this, noting, “if you don’t put the metric in place early to measure the return, it’s going to bite you early.”
There are multiple ways to approach this kind of measurement. High availability and disaster recovery, for example, are critical issues, and in some cases virtualization makes it financially feasible for organizations to set up a failover site where it wasn’t before. Benefits such as these should be quantified and taken into account.
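As a purely illustrative sketch of that kind of quantification (every figure below is a hypothetical assumption, not a number from the session), the avoided-downtime math might look like this:

```python
# Hypothetical back-of-the-envelope model for valuing a virtualized
# failover site. Every input below is an illustrative assumption.

outage_hours_per_year = 8           # expected unplanned downtime without failover
cost_per_downtime_hour = 25_000     # what an hour of downtime costs the business
failover_coverage = 0.9             # share of downtime a failover site would avert

physical_dr_cost = 500_000          # annual cost of a duplicate physical site
virtual_dr_cost = 120_000           # annual cost of a virtualized failover site

avoided_loss = outage_hours_per_year * cost_per_downtime_hour * failover_coverage
print(f"Downtime loss averted per year: ${avoided_loss:,.0f}")
print(f"Net annual benefit of virtual DR: ${avoided_loss - virtual_dr_cost:,.0f}")
print(f"Savings vs. physical DR site: ${physical_dr_cost - virtual_dr_cost:,.0f}")
```

The point is not the specific numbers but that a failover capability that was previously unaffordable can now have a defensible dollar value attached to it.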
Cost reduction is another way to go. One company used cost prevention as justification for its initial investment. Other things to bring up include the following (a rough cost model follows the list):
- Shorter maintenance windows
- Better performance for some server workloads, such as Exchange, when virtualized
- Extending the life cycle of hardware by running it as a virtual machine (pretty much a no-brainer, as it takes advantage of hardware already in play and most likely already depreciated)
- The ability to start and stop machines on demand
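Here is a minimal sketch of such a consolidation cost model, with every number a hypothetical assumption rather than a figure from the article:

```python
# Hypothetical server-consolidation cost model; all figures are
# illustrative assumptions, not data from the session.

physical_servers = 40         # existing low-utilization boxes
consolidation_ratio = 10      # VMs per virtualization host
cost_per_server_year = 4_000  # power, cooling, space, support per box
cost_per_host_year = 9_000    # a beefier host costs more to run

hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division

before = physical_servers * cost_per_server_year
after = hosts_needed * cost_per_host_year
print(f"Annual operating cost before: ${before:,.0f}")
print(f"Annual operating cost after:  ${after:,.0f}")
print(f"Estimated annual reduction:   ${before - after:,.0f}")
```

Even a toy model like this turns a vague "costs go down" claim into a number a CFO can interrogate, which is exactly the quantification Ritter is arguing for.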
Ritter provided one big caveat: process and procedure must keep up with virtualization. Oftentimes, he explained, it’s not the hardware holding things up but the human processes around it.
At first blush, provisioning time drops from weeks to hours. However, most of the provisioning time organizations face lies outside the actual virtualization process (e.g., getting the purchase order in and the time spent getting the machine into the rack). The processes that come before the actual virtualization must be fixed, or the true impact of virtualization will not be felt, as the sketch below illustrates.
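A quick hypothetical breakdown (the day counts are illustrative assumptions) shows why shrinking only the build step barely moves the total:

```python
# Hypothetical provisioning timeline in business days; the values are
# illustrative assumptions used only to show where the time goes.

process_days = {"request and approval": 10, "procurement and racking": 12}
build_physical = 5.0    # imaging and configuring a physical server
build_virtual = 0.25    # cloning a VM from a template

total_physical = sum(process_days.values()) + build_physical
total_virtual = sum(process_days.values()) + build_virtual

print(f"Physical server, end to end: {total_physical} days")
print(f"Virtual machine, end to end: {total_virtual} days")
# The build step shrank by 95 percent, but the end-to-end time by only
# about 18 percent: the surrounding process is the real bottleneck.
```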
Amy Newman is the managing editor of ServerWatch. She has been following the virtualization space since 2001 and is currently attending VMworld.
This article was first published on ServerWatch.com.