For Starters: The Virtualization Performance Quandary

Virtualization can have a huge effect on productivity and cost savings when you consolidate wide-ranging workloads, but getting the best performance out of your software is something else entirely. Here are some of the things you’ll want to keep an eye on if you’re determined to get the most out of the technology.

Software Soft Spots

Those joyous testimonies of an effortless move to a virtualized platform are quite true for most small-scale projects. There isn’t much difficulty in running a few operating system images on relatively modern hardware, each machine handling a few tasks by its lonesome. Things tend to get a whole lot trickier when you scale up the workload.

Every technology has those tempting “what if” possibilities attached to it. The thought of quickly migrating everything you and your company have worked hard to build up over the years to newer methods of doing things always has pitfalls, even with the more mature products in the virtualization category.

The reality is that not all tasks are well suited to running under a virtual OS. The usual culprits are highly resource-intensive software and I/O-bound applications; similarly, there’s software that just doesn’t want to play nice for whatever reason and is best left to run on its own machine, where it can happily run its course.
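One rough way to spot an I/O-bound candidate before committing to a consolidation plan is to time a burst of synchronous writes on the physical box, then again inside a guest, and compare. The sketch below is a minimal illustration rather than a proper benchmark; the file name, block size, and iteration count are arbitrary choices.

```python
# io_probe.py -- a minimal sketch for comparing synchronous write latency
# natively vs. inside a guest. File name, block size, and iteration count
# are arbitrary illustration values, not benchmark-grade settings.
import os
import time

PATH = "io_probe.tmp"   # scratch file; point it at the disk you care about
WRITES = 200
BLOCK = 4096            # one 4 KiB block per write, a common I/O unit

def timed_sync_writes() -> float:
    """Return seconds spent on WRITES fsync'd writes (worst-case I/O path)."""
    data = os.urandom(BLOCK)
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.perf_counter()
    try:
        for _ in range(WRITES):
            os.write(fd, data)
            os.fsync(fd)    # force each write all the way to the device
    finally:
        os.close(fd)
        os.remove(PATH)
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = timed_sync_writes()
    print(f"{WRITES} fsync'd {BLOCK}-byte writes: {elapsed:.3f}s "
          f"({elapsed / WRITES * 1000:.2f} ms each)")
```

If the per-write latency balloons inside the guest, the workload is leaning on exactly the path where virtualization overhead bites hardest.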

Software response time under virtualization is also a key point to consider. If you start loading up a number of system images, each running a few tasks, things can get a little hairy. Some software isn’t necessarily resource intensive but requires snappy system response in order to perform at its best, so when users start leaning on the application you may find it performing worse than it did on minimally specced hardware.
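One way to put a number on that snappiness is to measure scheduling jitter: ask the OS to wake you at a fixed interval and record how late each wakeup actually is. On a contended hypervisor the tail of that distribution tends to grow. A minimal sketch, with an arbitrary 10 ms tick and sample count:

```python
# jitter_probe.py -- a minimal sketch measuring timer/scheduling jitter,
# a rough stand-in for the responsiveness latency-sensitive apps need.
# The 10 ms tick and 500 samples are arbitrary illustration values.
import time

TICK = 0.010      # ask to be woken every 10 ms
SAMPLES = 500

def measure_oversleep() -> list[float]:
    """Return per-tick oversleep (seconds) across SAMPLES wakeups."""
    oversleeps = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        time.sleep(TICK)
        oversleeps.append(time.perf_counter() - start - TICK)
    return oversleeps

if __name__ == "__main__":
    o = sorted(measure_oversleep())
    print(f"median oversleep: {o[len(o) // 2] * 1e6:.0f} us, "
          f"worst: {o[-1] * 1e6:.0f} us")
```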

The fact of the matter is that there’s a performance penalty attached to any action carried out on a virtualized system. An action that’s inconsequential on a low-powered machine can suddenly become burdensome on even the most powerful of servers. The prime candidates for consolidation are generally low-usage applications with modest resource demands.
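Identifying those low-usage candidates is mostly a matter of watching utilization over a representative window. A minimal, Linux-only sketch (the five-second window is an arbitrary choice; in practice you’d sample over days of real use):

```python
# usage_probe.py -- a minimal, Linux-only sketch: sample aggregate CPU
# utilization from /proc/stat to spot low-usage consolidation candidates.
# The 5-second window is an arbitrary illustration value.
import time

def cpu_times() -> tuple[int, int]:
    """Return (idle, total) jiffies from the aggregate 'cpu' line."""
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]     # idle + iowait columns
    return idle, sum(fields)

if __name__ == "__main__":
    idle1, total1 = cpu_times()
    time.sleep(5)
    idle2, total2 = cpu_times()
    busy = 1 - (idle2 - idle1) / (total2 - total1)
    print(f"CPU busy over window: {busy * 100:.1f}%")
```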

You’re more likely than not to go back to the drawing board with quite a few consolidation plans once things aren’t going as smoothly as you’d hoped. Virtualization overhead can get ugly, and it’s not going to be cleared up any time soon, but you can bet the large software and hardware vendors are working on this very problem.

On The Hardware Front

As with any software trend, supporting hardware tends to lag behind quite a bit. There’s not much blame you can pass on to the vendors, since it takes a while to engineer new features into silicon. But the good news is that your two favorite chip vendors have you in mind.

AMD and Intel’s rivalry extends to all things; even the hardware-accelerated virtualization field isn’t safe. The two companies have offered some basic hardware assists for various virtualization tasks over the past few years, but it has been a rather weak response to some of the serious shortcomings CPUs have when handling this demanding style of application.

The two companies have been making strides to reduce the performance penalty virtualization can impose on a system. The x86 instruction set their chips share is a particularly difficult one to virtualize. Their first, and most logical, step is to reduce the effect the hypervisor has on a system’s performance: AMD’s SVM and Intel’s VT-x add a hardware guest mode, so the hypervisor no longer has to rely on tricks like binary translation to catch sensitive instructions when a virtual machine accesses system resources.

Both companies also have another trick up their sleeves for the CPU performance penalties a system can encounter when managing the memory of multiple virtual machines. Without hardware help, the hypervisor has to maintain shadow page tables in software, trapping guest page-table updates as they happen. AMD’s Nested Page Tables, found on quad-core Opterons, and Intel’s somewhat similar EPT, soon to be found on its upcoming “Nehalem” CPUs, let the hardware walk both layers of address translation itself, which has a dramatic effect on limiting the performance hit of page-table access.
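If you want to know whether a given box advertises these assists, Linux exposes them as CPU feature flags. A minimal sketch; the flag names are the standard ones, though older kernels may not report the nested-paging sub-features:

```python
# vtcheck.py -- a minimal, Linux-only sketch: read /proc/cpuinfo feature
# flags to see whether the CPU advertises the hardware assists discussed
# above. Older kernels may omit the ept/npt sub-feature flags.
FLAGS_OF_INTEREST = {
    "vmx": "Intel VT-x",
    "svm": "AMD SVM (AMD-V)",
    "ept": "Intel Extended Page Tables",
    "npt": "AMD Nested Page Tables",
}

def cpu_flags() -> set[str]:
    """Return the feature-flag set from the first 'flags' line."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    present = cpu_flags()
    for flag, name in FLAGS_OF_INTEREST.items():
        print(f"{name:32s} ({flag}): {'yes' if flag in present else 'no'}")
```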

These hardware additions, along with the inevitable software tweaking over time, will lead to performance increases (or at least less of an overall performance loss) when running virtual machines.

The performance penalties currently incurred by systems running multiple virtual machines can cause quite a bit of frustration when you’re trying to cut costs while heading off any instabilities that might arise from loading up a server full of necessary applications. Thankfully, companies are improving their products in leaps and bounds. In a few years, close-to-native performance may even become the norm.

This article was first published on EnterpriseITPlanet.com.
