Top 10 Reasons Cloud Computing Deployments Fail

Lack of cloud computing vendor management and poor understanding of the risks are among the challenges that doom cloud deployments.

Practically every cloud computing provider – from Google to Rackspace, Amazon to Salesforce.com – has suffered through an outage at some point. When outages happen, skeptics question the viability of cloud computing.

Talk to anyone invested in the cloud, though, and it doesn’t take long to understand that outages are just one of the costs of doing business in the cloud, and . . . well, so what?

Outages happen with pretty much every service we consume. Apple is enjoying record profits, even as the iPhone 4 drops calls at an alarming rate. Where are the stories questioning the viability of smartphones or the iPhone or Apple?

Outages happen in on-premises data centers everywhere. Where are the stories questioning the viability of in-house IT? (Actually, those stories are out there, but they all ask if cloud computing is making traditional IT obsolete.) When was the last time your power went out? Did you question the viability of utility-provided electricity?

There’s only so much you can do about an outage – backup generators (or, in the case of the cloud, backed-up data) help, but they don’t solve the problem. Outages are the service provider’s problem, not yours.

With other common failures, however, the customer takes a much more active role in determining success or failure. Here are some of the most common mistakes organizations make as they embrace the cloud.

1. Failing to define “success.”

Too many organizations regard cloud computing as a modern-day cure-all. Having problems with the bottom line? Turn to the cloud. Having trouble keeping remote workers productive? Trust the cloud. Are more of your employees working from home? Hey, maybe the cloud can help.

“Setting unrealistic expectations is the number one reason organizations have trouble with cloud computing,” said Robert Stroud, international VP of ISACA (Information Systems Audit and Control Association), a non-profit IT governance organization, and VP of service management and governance at CA.

“Too many organizations believe that they can put in a request to a cloud provider, and, magically, everything will be working perfectly overnight.”

If you were setting up a new application in house, would you be that naïve? If you don’t set concrete, realistic goals, don’t be surprised when the cloud doesn’t meet your expectations.

2. Failing to update computing concepts.

Early this year, startup Heroku was blindsided by an Amazon EC2 outage. Heroku provides a cloud development platform for Ruby on Rails that is hosted by Amazon. When weather caused an outage, Heroku saw its entire infrastructure disappear, along with the 40,000+ applications running on its platform.

The company had done everything it was supposed to in terms of failover and redundancy. What it hadn’t realized, though, was that everything resided in a single Amazon “availability zone.”

Amazon worked with Heroku to get its platform back online quickly, but the incident shows how out-of-date computing concepts can undermine cloud efforts. Failover, backups and redundancy were easier to visualize in the on-premises computing world. If you backed up off-site, you were in good shape.

If everything is off-site, though, how do you know what level of failover capability you actually have? The whole concept of data being in a specific place is challenged by cloud computing.
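One practical answer is to spread workloads across more than one availability zone from the start. Below is a minimal sketch of that idea (my illustration, not anything from Heroku’s or Amazon’s actual setup) using the modern boto3 SDK; the AMI ID, instance type and region are placeholder assumptions you would replace with your own values.

```python
# Illustrative sketch: spread EC2 instances across availability zones so a
# single-zone outage can't take down the whole deployment. The AMI ID,
# instance type and region below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask AWS which zones are currently available in this region.
zones = [
    z["ZoneName"]
    for z in ec2.describe_availability_zones()["AvailabilityZones"]
    if z["State"] == "available"
]

# Launch one instance per zone rather than letting them all land in one.
for zone in zones:
    ec2.run_instances(
        ImageId="ami-12345678",   # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )
```

The point isn’t the specific API; it’s that in the cloud you have to ask explicitly where your redundancy lives, because nothing about an off-site deployment guarantees it spans failure domains.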

“One of the things we’ve learned is that stability in the cloud is complicated,” said Byron Sebastian, CEO of Heroku. “One of the myths about cloud computing is that cloud infrastructure is a complete solution. It’s not. You need add-ons in the cloud as with any other IT system.”

As a result, Heroku has expanded its own platform to offer its customers such services as advanced failover, load balancing and redundancy, all tailored for cloud-hosted applications.

3. Failing to hold service providers accountable.

Heroku was lucky. Amazon immediately reached out to them and helped them solve the problem. Others haven’t been so lucky. Visit the user forums of any major cloud computing platform and you’ll see plenty of venting.

“X provider lost all of my data and won’t do anything about it!” is how these complaints often go (they’re usually in all caps, with many more exclamation points). Some of the rants are obviously from people who screwed up and are looking for someone else to blame. Some are the rants of unbalanced lunatics. Others have the ring of legitimacy.

I’ve talked to plenty of people off the record who complained about service providers, but few will discuss the struggles they’ve had with customer service. (This isn’t unusual for any story, so don’t start imagining a broad cloud conspiracy.) Anecdotally, though, the scales are weighted in the service providers’ favor.

Michele Hudnall, solution marketing manager for BSM at Novell, emailed me to emphasize the importance of well-defined SLAs. According to Hudnall, the things to watch out for are missing SLAs, vague SLAs and poor overall service management.

Organizations can easily lose 1-2% of revenue when mission-critical services go down, even for a short time. When that happens, it’s important to hold the service provider accountable. This may mean renegotiating your contract to include SLA penalties or seeking remediation.
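To make those stakes concrete, here is a quick back-of-the-envelope calculation (my illustration; the revenue figure is hypothetical, and only the 1-2% loss range comes from above) showing how little downtime common SLA tiers actually permit and what a single incident can cost:

```python
# Back-of-the-envelope SLA math (illustrative figures only).
HOURS_PER_MONTH = 30 * 24  # ~720 hours in a 30-day month

# How much downtime does each availability tier actually allow per month?
for availability in (0.99, 0.999, 0.9999):
    allowed_minutes = HOURS_PER_MONTH * (1 - availability) * 60
    print(f"{availability:.2%} uptime allows ~{allowed_minutes:.0f} min of downtime/month")

# Hypothetical revenue impact at the low end of the 1-2% range cited above:
annual_revenue = 50_000_000  # assumed $50M/year business
print(f"A 1% revenue hit costs ${annual_revenue * 0.01:,.0f}")
```

Numbers like these are why vague SLAs matter: a contract that never pins down what “available” means, or caps penalties at a token credit, leaves the entire loss on your side of the table.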

Gartner recently drafted a list of customer rights that cloud vendors should honor. These included the right to SLAs that address liabilities, remediation and business outcomes; the right to notification of, and choice about, changes that will affect consumers’ business processes; and the right to understand the technical limitations of the system up front.






