Five Private Cloud Pitfalls to Avoid

These tips can help you learn from the experiences of private cloud early adopters.
Posted November 3, 2012
By Jeff Vance



4. Believing that every cloud that claims to be private actually is.

Just because you’ve labeled your cloud “private” doesn’t mean you can slack off on access controls or on safeguarding the privacy of your data.

“Surprisingly, some ‘private’ clouds really aren't so private. For example, the ‘private’ cloud might mingle data from multiple customers in a single instance, though the cloud itself is not publicly accessible,” said Mike Carpenter, Vice President of Service Assurance at TOA Technologies, a provider of mobile workforce management solutions. “In this case, it is actually a semi-private cloud, because customer data is not actually stored with full privacy. Full privacy means storing the data on separate instances, and without mingling the data of one customer with another.”

Additionally, the best cloud services not only ensure that every customer gets a truly private cloud, but also make sure that the customer controls his or her own data.

“At TOA Technologies, the private cloud solution provided to customers is uniquely designed to provide an absolute ‘lock and key relationship’ between the customer’s data and private cloud application, which is the only place in the whole solution that it is ever un-encrypted for use,” Carpenter added.

In a truly private cloud, customers are in control of creating their user accounts. The customer controls the data, and the customer controls access to the only application that can open it.
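Stripped to its essentials, the pattern Carpenter describes is per-tenant encryption: each customer’s data is locked with a key that only that customer’s own application instance can use. The following sketch is a hypothetical illustration in Python using the cryptography library’s Fernet primitive; the in-memory key store and tenant lookup are assumptions made for the example, not TOA’s actual design.

```python
# Hypothetical sketch of a per-tenant "lock and key" arrangement.
# Assumes the `cryptography` package (pip install cryptography);
# this is NOT TOA Technologies' actual implementation.
from cryptography.fernet import Fernet

# In a real system these keys would live in a key-management service
# or hardware security module, one key per customer/tenant.
tenant_keys = {
    "customer-a": Fernet.generate_key(),
    "customer-b": Fernet.generate_key(),
}

def encrypt_for_tenant(tenant_id: str, plaintext: bytes) -> bytes:
    """Encrypt data so only the tenant's own application can read it."""
    return Fernet(tenant_keys[tenant_id]).encrypt(plaintext)

def decrypt_for_tenant(tenant_id: str, ciphertext: bytes) -> bytes:
    """Decrypt inside the tenant's application, the only place the data is in the clear."""
    return Fernet(tenant_keys[tenant_id]).decrypt(ciphertext)

# Data written for customer-a cannot be opened with customer-b's key.
record = encrypt_for_tenant("customer-a", b"expense report 123")
print(decrypt_for_tenant("customer-a", record))  # b'expense report 123'
```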

5. Using a private cloud model to guide the rollout of public cloud services.

Concur Technologies, a provider of travel and expense reporting solutions, had virtualized nearly 80 percent of its internal IT infrastructure and had moved many internal applications into private clouds. However, its customer-facing travel and expense reporting SaaS solution ran up against a major problem, one that is easy to overlook if your cloud focus is directed inward: heavy-duty enterprise apps don’t always perform well over the public Internet.

“Concur processes more than $50 billion in travel and expense reports each year – roughly 10 percent of the worldwide total,” said Drew Garner, Director of Architecture Services at Concur. “As a SaaS product, our pricing is directly tied to how much it costs to process each expense report. We have to be able to serve a transaction tomorrow with fewer resources than today. If we don’t do that, we’ll get beaten by the competition because they’ll figure out how to do it first.”

Moreover, in a transaction-based environment, end users have little patience for slow performance. If transaction times lag, customers move on to someone else.

Seeking greater scalability and speed, Concur decided to replace its homegrown caching system with memcache (in-memory caching). To identify the best candidates for migration to memcache, the R&D Operations team at Concur needed to analyze SQL query performance across thousands of databases. The team also needed to be able to monitor memcache performance and correlate that performance with activity at other tiers of the application infrastructure.
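For readers who haven’t worked with memcache, the basic pattern it enables is a cache-aside lookup: check the in-memory cache first, fall back to the database on a miss, then populate the cache for next time. Below is a minimal sketch using the pymemcache client; the connection details, query helper and five-minute TTL are illustrative assumptions, not Concur’s configuration.

```python
# Minimal cache-aside sketch with memcached; assumes a local memcached
# instance and the pymemcache client (pip install pymemcache).
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def get_expense_report(report_id: str, db) -> bytes:
    key = f"report:{report_id}"
    cached = cache.get(key)           # 1. try the in-memory cache first
    if cached is not None:
        return cached
    row = db.fetch_report(report_id)  # 2. cache miss: query the database (hypothetical helper returning bytes)
    cache.set(key, row, expire=300)   # 3. populate the cache with a 5-minute TTL
    return row
```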

To tackle these problems, Concur brought in ExtraHop, a provider of application performance monitoring and management solutions.

The ExtraHop system provides real-time transaction analysis at wire speed – up to a sustained 10 Gbps – covering the network, web, database and storage tiers of the application.

“Concur stores 52 million items in 1.4 terabytes of memcache with sub-millisecond access and response times, but there is no way to query the system to find a particular key without dramatically impacting performance,” Garner said. “ExtraHop provides this visibility by passively analyzing transactions as they pass over the network.”
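ExtraHop’s technology is proprietary, but the general idea of wire-level visibility is easy to picture: memcache’s text protocol puts the operation and key in plain view on every request, so a passive observer can classify traffic without ever querying the server. The toy parser below is purely illustrative, working on captured protocol lines rather than a live network tap.

```python
# Toy illustration of what wire-level memcache analysis can see.
# Real products such as ExtraHop tap the network itself; here we just
# parse memcached text-protocol request lines that a capture might yield.
def classify_request(line: str):
    """Return (operation, key) from a memcached text-protocol request line."""
    parts = line.strip().split()
    if not parts:
        return None
    op = parts[0].lower()
    if op in ("get", "gets") and len(parts) > 1:
        return (op, parts[1])            # e.g. "get report:123"
    if op in ("set", "add", "replace") and len(parts) >= 5:
        return (op, parts[1])            # e.g. "set report:123 0 300 2048"
    return (op, None)

print(classify_request("get report:123"))           # ('get', 'report:123')
print(classify_request("set report:123 0 300 48"))  # ('set', 'report:123')
```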

In one case, the R&D operations team used the ExtraHop system to find specific memcache keys that were not stored because they exceeded the default 1 MB limit. “With this specific information, we could apply compression in the application to fix the problem,” Garner said. “Usually, people monitor memcache with server-side and client-side metrics, but there is a lot of activity in the middle that is crucial. With ExtraHop, we can monitor our memcache implementation from end to end.”
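The fix Garner describes, compressing oversized values in the application before they are written, looks roughly like the sketch below. The 1 MB figure is memcache’s default item size limit; the wrapper functions and prefix scheme are illustrative assumptions, not Concur’s code.

```python
# Sketch of application-side compression for values near memcached's
# default 1 MB item limit; the client passed in is any memcache client
# with get/set methods (e.g. pymemcache).
import zlib

ITEM_LIMIT = 1024 * 1024  # memcached's default maximum item size (1 MB)

def set_compressed(cache, key: str, value: bytes, expire: int = 300) -> bool:
    """Compress large values so they fit under the item-size limit."""
    if len(value) >= ITEM_LIMIT:
        value = b"Z" + zlib.compress(value)   # prefix marks compressed payloads
    else:
        value = b"P" + value                  # plain payload
    return cache.set(key, value, expire=expire)

def get_compressed(cache, key: str):
    """Fetch a value and transparently decompress it if needed."""
    value = cache.get(key)
    if value is None:
        return None
    return zlib.decompress(value[1:]) if value[:1] == b"Z" else value[1:]
```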

This monitoring helps ensure that transactions are processed quickly, which in turn helps to build customer loyalty.

