Five Private Cloud Pitfalls to Avoid

As organizations move more of their IT infrastructure and applications into private clouds, they are entering into uncharted territory. The roadmap isn’t always clear, and it’s easy to make mistakes – mistakes that in hindsight look obvious, but which can be hard to predict if you lack experience with cloud infrastructures.

However, this isn’t completely uncharted territory. Just as most experts will tell you to avoid the first version of any new gadget (let someone else work the kinks out), organizations that have moved slowly to the cloud can now benefit from the experiences of early adopters.

Here are five private cloud pitfalls you should avoid:

1. Believing that consolidating servers through virtualization will eliminate over-provisioning.

Apartments.com started to virtualize its development and testing servers back in 2007 using VMware. However, the company soon realized it needed to push even further in order to stay competitive in the increasingly cutthroat online housing market. Matt Stratton, Director of Technology Operations at Apartments.com, noted that even with a consolidated infrastructure, the team didn’t have the tools necessary to create an automated, self-service environment.

The IT staff also asked for tools that would help them better monitor server performance and catch potential health issues before they caused problems. The status quo was that when a server reached approximately 70 percent capacity, IT ordered more servers. Of course, in a highly cyclical market like apartment rentals, this left plenty of servers sitting idle after the peak apartment-hunting season was over.

To address this problem, Apartments.com switched from VMware to Microsoft’s Hyper-V technology, which is included in the Windows Server 2008 R2 operating system. They migrated several hundred VMware virtual machines (VMs) to Hyper-V in late 2011 and early 2012. Now, Apartments.com is upgrading to Microsoft System Center 2012, which is designed specifically for managing private cloud environments.

System Center 2012 includes Server App-V, a tool that allows IT teams to create virtual application packages that can be copied to any computer that has a Server App-V Agent on it, without requiring a local installation. This reduces the number of images IT has to manage, speeds software deployment and improves availability.

The goal, which Apartments.com expects to achieve soon, is to give developers the ability to provision their own VMs using a template and a self-service portal. By automating much of the process, IT can eliminate most of the manual work involved in provisioning and de-provisioning virtual resources, which means that if servers are sitting idle, they can be de-provisioned and freed up for something else, rather than just waiting for the next year’s peak season.
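
To make the self-service idea more concrete, here is a minimal sketch of what such an automated provision-and-reclaim loop can look like. It is illustrative only: the portal class, the hypervisor client methods and the 30-day idle window are assumptions for the example, not Apartments.com’s tooling or System Center’s actual API.

```python
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(days=30)  # assumed reclamation window, not a System Center default

class SelfServicePortal:
    """Hypothetical portal wrapping whatever hypervisor API is in use."""

    def __init__(self, hypervisor_client):
        self.hv = hypervisor_client

    def provision_from_template(self, developer, template_name):
        # Clone a pre-approved template so developers never wait on IT.
        vm = self.hv.clone_template(template_name)
        vm.tags.update({"owner": developer, "created": datetime.utcnow().isoformat()})
        return vm

    def reclaim_idle_vms(self):
        # De-provision VMs that have sat idle past the policy window,
        # freeing capacity instead of leaving it to wait for next season.
        for vm in self.hv.list_vms():
            if datetime.utcnow() - vm.last_powered_on > IDLE_LIMIT:
                self.hv.deprovision(vm)
```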

2. Believing that de-provisioning VMs is easy to do.

Just because you can de-provision a VM and free it up for something else in theory doesn’t mean you’ll be able to do so in practice.

“How can you put a bullet in a virtual machine without knowing who owns it?” asked Paul Martin, Systems Engineering Lead, EMEA, for Embotics, a private cloud management provider. “Most IT administrators don’t know who owns it and will end up erring on the side of caution, which causes other problems.”

Without the proper tools in place to identify “zombies and VMs that have had no log-in, or have not been powered on for a certain number of days” and to identify the owners of those orphaned VMs, IT pros won’t risk getting rid of them. It’s too big of a political risk. Who knows what toes you might be stepping on?

“Having an owner assigned from day one makes this whole process easier. It’s important to ensure that all new VMs have ownership assigned and that your private cloud management solution is able to apply these on deployment and also retrospectively,” Martin said.
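
As a rough illustration of the logic Martin describes, the sketch below flags VMs with no recent login or power-on and separates those with a known owner from true orphans. The field names and the 60-day window are assumptions for the example, not Embotics’ implementation.

```python
from datetime import datetime, timedelta

ZOMBIE_WINDOW = timedelta(days=60)  # the "certain number of days" is assumed here

def find_zombies(vms, now=None):
    """Split inactive VMs into those with a known owner and true orphans."""
    now = now or datetime.utcnow()
    reclaimable, orphaned = [], []
    for vm in vms:
        last_activity = max(vm["last_login"], vm["last_powered_on"])
        if now - last_activity < ZOMBIE_WINDOW:
            continue  # still active, leave it alone
        if vm.get("owner"):
            reclaimable.append(vm)  # owner known: ask before pulling the trigger
        else:
            orphaned.append(vm)     # no owner: the politically risky case
    return reclaimable, orphaned
```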

3. Forgetting to update chargeback tools along with your infrastructure.

Aston University, based in Birmingham, UK, began virtualizing its infrastructure back in 2004, when it consolidated a set of ten finance applications. That small initial environment eventually grew into a full-fledged private cloud initiative.

Aston moved its first customers onto the private cloud in 2009, billing them through a manual, SharePoint-based chargeback system. Service chargebacks allowed IT to fund the gradual expansion of the cloud infrastructure. A “cloud service first” corporate directive followed in 2010, with all new services going to the cloud environment by default unless there was a strong reason why they shouldn’t.

All of the university’s schools of study and support departments now utilize the service. As their private cloud environment grew, however, the IT server team started to experience problems with capacity and with their manual chargeback system.

To address these problems, they investigated solutions from VMware and Veeam before settling on the V-Commander private cloud management suite from Embotics. V-Commander’s chargeback portal helped them transform chargeback from a manual, error-prone process into an automated, simplified one.

Capacity management and resource optimization features also flagged existing VMs that were part of VM sprawl or were over- or under-resourced, and an assisted placement feature now automatically guides new services to hosts with available capacity.
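
A toy version of assisted placement might look like the following: route each new service to the host that can absorb it while staying under a utilization ceiling. The field names and the 70 percent ceiling are purely illustrative assumptions, not V-Commander’s actual algorithm.

```python
def place_vm(hosts, cpu_needed, ram_needed_gb, ceiling=0.70):
    """Return the host best able to absorb the new VM, or None if none fit."""
    candidates = [
        h for h in hosts
        if (h["cpu_used"] + cpu_needed) / h["cpu_total"] <= ceiling
        and (h["ram_used_gb"] + ram_needed_gb) / h["ram_total_gb"] <= ceiling
    ]
    # Prefer the host that is currently least utilized.
    return min(candidates, key=lambda h: h["cpu_used"] / h["cpu_total"], default=None)
```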

Aston uses built-in decommissioning workflows to automate service renewals. Now, as each VM reaches its renewal date, an e-mail that includes the cost of the service (automatically calculated by V-Commander) is sent to the customer, who has until the actual renewal date to approve the billing. If the billing is approved, the expiration date is reset. If not, the service is automatically shut down, and that resource is freed up for someone else.
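
In pseudo-form, that renewal workflow reduces to a scheduled job like the sketch below. The rate table, notice window and the send_mail/shut_down helpers are placeholders for the example, not Aston’s or V-Commander’s actual configuration.

```python
from datetime import date, timedelta

RATES = {"cpu": 10.0, "ram_gb": 5.0, "disk_gb": 0.5}  # assumed monthly rates per unit
NOTICE_PERIOD = timedelta(days=14)                    # assumed notice window

def monthly_cost(vm):
    return (vm["cpus"] * RATES["cpu"]
            + vm["ram_gb"] * RATES["ram_gb"]
            + vm["disk_gb"] * RATES["disk_gb"])

def process_renewals(vms, send_mail, shut_down, today=None):
    today = today or date.today()
    for vm in vms:
        if vm["renewal_date"] - NOTICE_PERIOD <= today < vm["renewal_date"]:
            # Notify the owner of the cost; approval can arrive up to the renewal date.
            send_mail(vm["owner"], f"Renewal due: {monthly_cost(vm):.2f} per month")
        elif today >= vm["renewal_date"]:
            if vm.get("approved"):
                vm["renewal_date"] += timedelta(days=365)  # reset the expiration date
            else:
                shut_down(vm)  # the resource is freed up for someone else
```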

4. Believing that every cloud that claims to be private actually is.

Just because “private” is the term you’re using doesn’t mean you can slack off when it comes to setting up access controls and safeguarding the privacy of data.

“Surprisingly, some ‘private’ clouds really aren’t so private. For example, the ‘private’ cloud might mingle data from multiple customers in a single instance, though the cloud itself is not publicly accessible,” said Mike Carpenter, Vice President of Service Assurance at TOA Technologies, a provider of mobile workforce management solutions. “In this case, it is actually a semi-private cloud, because customer data is not actually stored with full privacy. Full privacy means storing the data on separate instances, and without mingling the data of one customer with another.”

Additionally, the best cloud services not only ensure that every user has a truly private cloud, but also make sure that customers control their own data.

“At TOA Technologies, the private cloud solution provided to customers is uniquely designed to provide an absolute ‘lock and key relationship’ between the customer’s data and private cloud application, which is the only place in the whole solution that it is ever un-encrypted for use,” Carpenter added.

In a truly private cloud, customers are in control of creating their user accounts. The customer controls the data, and the customer controls access to the only application that can open it.
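
Carpenter’s “lock and key” description maps loosely onto per-customer keys and per-customer stores. The sketch below illustrates that idea using the generic Python cryptography package; it is not TOA’s implementation, and a production system would keep keys in a proper KMS or HSM rather than in memory.

```python
from cryptography.fernet import Fernet

class TenantVault:
    """Illustrative per-tenant isolation: one key and one store per customer."""

    def __init__(self):
        self._keys = {}    # customer_id -> key (in practice, a real KMS/HSM)
        self._stores = {}  # customer_id -> its own data store (no mingling)

    def register(self, customer_id):
        self._keys[customer_id] = Fernet.generate_key()
        self._stores[customer_id] = {}

    def put(self, customer_id, record_id, plaintext: bytes):
        token = Fernet(self._keys[customer_id]).encrypt(plaintext)
        self._stores[customer_id][record_id] = token  # stored encrypted at rest

    def get(self, customer_id, record_id) -> bytes:
        token = self._stores[customer_id][record_id]
        # Decrypted only with the owning tenant's key.
        return Fernet(self._keys[customer_id]).decrypt(token)
```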

5. Using a private cloud model to guide the rollout of public cloud services.

Concur Technologies, a provider of travel and expense reporting solutions, had virtualized nearly 80 percent of its internal IT infrastructure and had moved many internal applications into private clouds. However, its customer-facing travel and expense reporting SaaS solution ran up against a major problem, one that is easy to overlook if your cloud focus is directed inward: heavy-duty enterprise apps don’t always perform well over the public Internet.

“Concur processes more than $50 billion in travel and expense reports each year – roughly 10 percent of the worldwide total,” said Drew Garner, Director of Architecture Services at Concur. “As a SaaS product, our pricing is directly tied to how much it costs to process each expense report. We have to be able to serve a transaction tomorrow with fewer resources than today. If we don’t do that, we’ll get beaten by the competition because they’ll figure out how to do it first.”

Moreover, in a transaction-based environment, end users have little patience for slow performance. If transaction times lag, customers move on to someone else.

Seeking greater scalability and speed, Concur decided to replace its homegrown caching system with memcache (in-memory caching). To identify the best candidates for migration to memcache, the R&D Operations team at Concur needed to analyze SQL query performance across thousands of databases. The team also needed to be able to monitor memcache performance and correlate that performance to activity at other tiers of the application infrastructure.
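
One simple way to shortlist caching candidates, assuming you already have per-query statistics (call counts and mean durations) from your database monitoring, is to rank read queries by the total time they consume. The heuristic below is illustrative only, not Concur’s or ExtraHop’s analysis.

```python
def rank_cache_candidates(query_stats, top_n=20):
    """Rank read queries by the total time they consume (calls x mean duration)."""
    reads = [q for q in query_stats if q["sql"].lstrip().upper().startswith("SELECT")]
    reads.sort(key=lambda q: q["calls"] * q["mean_ms"], reverse=True)
    return reads[:top_n]
```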

To tackle these problems, Concur brought in ExtraHop, a provider of application performance monitoring and management solutions.

The ExtraHop system provides real-time transaction analysis at wire speed – up to a sustained 10 Gbps – covering the network, web, database and storage tiers of the application.

“Concur stores 52 million items in 1.4 terabytes of memcache with sub-millisecond access and response times, but there is no way to query the system to find a particular key without dramatically impacting performance,” Garner said. “ExtraHop provides this visibility by passively analyzing transactions as they pass over the network.”

In one case, the R&D operations team used the ExtraHop system to find specific memcache keys that were not stored because they exceeded the default 1 MB limit. “With this specific information, we could apply compression in the application to fix the problem,” Garner said. “Usually, people monitor memcache with server-side and client-side metrics, but there is a lot of activity in the middle that is crucial. With ExtraHop, we can monitor our memcache implementation from end to end.”
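
The compression fix Garner mentions can be sketched as a thin wrapper around any memcached client: values approaching the default 1 MB item limit are compressed in the application before being stored. The thresholds and marker below are assumptions for the example, not Concur’s code.

```python
import zlib

ITEM_LIMIT = 1024 * 1024      # memcached's default maximum item size
COMPRESS_ABOVE = 512 * 1024   # assumed threshold for compressing early
FLAG = b"Z:"                  # simple marker identifying compressed payloads

def cache_set(client, key, value: bytes):
    if len(value) > COMPRESS_ABOVE:
        value = FLAG + zlib.compress(value)
    if len(value) > ITEM_LIMIT:
        raise ValueError(f"{key} still exceeds the {ITEM_LIMIT}-byte item limit")
    client.set(key, value)

def cache_get(client, key) -> bytes:
    value = client.get(key)
    if value is not None and value.startswith(FLAG):
        value = zlib.decompress(value[len(FLAG):])
    return value
```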

This monitoring helps ensure that transactions are processed quickly, which in turn helps to build customer loyalty.
