There’s been at least as much healthy skepticism about cloud computing as there has been optimism and real results. And there ought to be, especially as cloud computing moves out of buzzword territory and becomes an increasingly powerful tool for extending IT resources.
To that end, here’s a rundown of ten key things both creators and users of cloud computing should continue to bear in mind.
1) Security
The good news is that the very nature of the cloud may be compelling more real thought about security – on every level – than before. The bad news is that a poorly written application can be just as insecure in the cloud, maybe even more so.
Cloud architectures don’t automatically confer security or compliance on the end-user data or apps running on them, so apps written for the cloud always have to be secure on their own terms. Some of the responsibility for this does fall to cloud vendors, but the lion’s share of it is still in the lap of the application designer.
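To make that concrete, here is a minimal, hypothetical sketch (Python, with sqlite3 standing in for whatever backing store you actually use): the injection hole in the first function is just as exploitable on a cloud host as on a box under your desk, and no cloud vendor will close it for you.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable on premises, and exactly as vulnerable in the cloud:
    # the attacker-controlled string is spliced straight into the SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = '%s'" % username
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver escapes the value. The application
    # carries its own security with it, wherever it happens to run.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```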
2) Complacency
A cloud computing-based solution shouldn’t become just another passive utility like the phone system, where the owner simply puts a tollbooth on it and charges more and more while providing less and less. In short, don’t give competitors a chance to do an end run around you because you’ve locked yourself into what seems like the best way to use the cloud and left yourself no good exit strategy. Cloud computing is constantly evolving. Getting your solution in place simply means your process of monitoring and improving can now begin.
3) Client incomprehension
We’re probably past the days when people thought clouds were just big server clusters, but that doesn’t mean we’re free of ignorance about the cloud moving forward. There are all too many misunderstandings about how public and private clouds (or conventional datacenters and cloud infrastructures) do and don’t work together, about how easy it is to move from one kind of infrastructure to another, about how virtualization and cloud computing do and don’t overlap, and so on.
A good way to combat this is to present customers with real-world examples of what’s possible and why, so they can base their understanding on actual work that’s been done and not just hypotheticals where they’re left to fill in the blanks themselves.
4) Preventing bottom-up adoption
Cloud infrastructures, like a lot of other IT innovations, don’t always happen as top-down decrees. They may happen from the bottom up, in a back room somewhere, or on an employee’s own time from his own PC.
Examples of this abound: consider a New York Times staffer’s experience with desktop cloud computing. Make a “sandbox” space within your organization for precisely this kind of experimentation, albeit with proper standards of conduct (e.g., as a safety measure, no live or proprietary data). You never know how it’ll pay off.
5) Ad-hoc standards as the only real standards
The biggest example of this: Amazon EC2. As convenient as it is to develop for the cloud by targeting EC2, one of the most common deployment environments, that convenience is also something to be cautious of. Ad-hoc standards are a two-edged sword.
On the plus side, they bootstrap adoption: look how quickly a whole culture of cloud computing has sprung up around EC2. On the minus side, they leave that much less room for innovators to create something open, to let things break away from the ad-hoc standards and be adopted on their own. (Will the Kindle still be around in ten years?) Always be mindful of how the standards you’re using now can be expanded or abandoned.
6) Over-utilization of capacity
Few things are more annoying to customers than promising something you can’t deliver. The bad news is that in many industries, that’s how things work: overbooking on airlines, for instance.
It might also become like that for cloud providers, who may be forced to sell more capacity than they can actually provide as a way to stay competitive with … well, everyone else doing the same thing. Reuven Cohen of Enomaly has speculated that Amazon EC2 might be doing this right now. With any luck they’re not doing it in lieu of better testing and saner quota allotments.
Testing should always be standard practice. Robust, creative, out-of-the-box testing doubly so. Consider the way MySpace used 800 EC2 instances to load-test itself and see whether it could meet anticipated demand for a new streaming music service. That example involved using the cloud to test MySpace’s native infrastructure, but there’s no reason you couldn’t use one cloud to generate test demand for another and determine what your real needs are. And not just once, but again and again.
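As a rough illustration, here’s a minimal load-generator sketch you could fan out across cheap cloud instances to produce that test demand. The target URL, worker count, and request volume are hypothetical placeholders; the point is only that measuring latency and error rates under synthetic load is cheap to script and worth repeating.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://staging.example.com/health"  # hypothetical endpoint under test
WORKERS = 50                # threads per instance; scale out by adding instances
REQUESTS_PER_WORKER = 100

def hammer(_worker_id):
    timings, errors = [], 0
    for _ in range(REQUESTS_PER_WORKER):
        start = time.time()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
            timings.append(time.time() - start)
        except Exception:
            errors += 1
    return timings, errors

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(hammer, range(WORKERS)))
    latencies = sorted(t for timings, _ in results for t in timings)
    failures = sum(errors for _, errors in results)
    print("requests: %d  errors: %d" % (len(latencies) + failures, failures))
    if latencies:
        print("median: %.3fs  p95: %.3fs" % (
            latencies[len(latencies) // 2],
            latencies[int(len(latencies) * 0.95)],
        ))
```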
7) Under-utilization of capacity
Just as over-utilization is both bad planning and bad business, so is under-utilization. In fact, having a good deal of idle capacity you’re paying to support and not generating revenue from may well be worse than the opposite scenario.
This sort of thing’s easier to deal with if you’re the one buying the service, but what if you’re the one selling it? That’s another reason why metrics and robust load testing are your best friends when creating cloud services. Also consider the possibility you’re not selling enough kinds of services: is there room in your business plan for more granular, better-tiered service that might draw in a wider array of customers?
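As a back-of-the-envelope sketch (all figures hypothetical), the number worth automating a check for is simply billed consumption over provisioned capacity:

```python
# Hypothetical daily figures pulled from your metering/monitoring system.
provisioned_instance_hours = 24 * 200   # 200 instances kept warm all day
billable_instance_hours = 24 * 70       # what customers actually consumed

utilization = billable_instance_hours / provisioned_instance_hours
print("utilization: %.0f%%" % (utilization * 100))   # 35%

# If that stays low week after week, either shed capacity or add smaller,
# cheaper, more granular tiers of service to soak up the idle hours.
```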
8) Network limitations
One word: IPv6. If you’re deploying systems, using infrastructure, or writing applications that aren’t IPv6-aware now, you’re planting a time bomb under your chair.
IPv4’s days are more numbered than ever, and tricks like NAT or freeing up previously unallocated address blocks aren’t going to buy enough time to get us through the decade. Cloud computing, with its world of hosts that can appear by the thousands at once, all but guarantees we’ll need IPv6’s address pool and technical flexibility.
Think forward on every level, and encourage everyone building on top of your infrastructures to do the same thing.
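One concrete habit along those lines: write connection code that never assumes IPv4. The sketch below (Python’s standard socket module, hypothetical host and port) lets the resolver pick the address family, so the same code works over IPv4 today and IPv6 tomorrow.

```python
import socket

def connect(host, port):
    """Connect over whichever address family the resolver returns,
    rather than hard-coding an IPv4 dotted quad anywhere."""
    last_error = None
    for family, socktype, proto, _canonname, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            last_error = err
    raise last_error or OSError("no usable address for %s" % host)

# Usage: connect("example.com", 80) works unchanged on v4-only,
# dual-stack, or v6-only networks.
```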
9) Latency
Latency has always been an issue on the Internet; just ask your local World of Warcraft raiding guild. It’s just as much of an issue in the cloud.
Performance within the cloud doesn’t mean much if it takes forever for the results of that performance to show up on the client. The latency that a cloud can introduce doesn’t have to be deadly, and can be beaten back with both an intelligently planned infrastructure and smartly-written applications that understand where and how they’re running.
Also, cloud-based apps – and the capacity of cloud computing itself – are only going to be ramped up, not down, in the future. That means an arms race against increases in latency is in the offing as well. Just as the desktop PC’s biggest bottlenecks are more often storage and memory than CPU, the true sources of cloud latency need to be identified and targeted, not just assumed.
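Finding those sources starts with measuring from the client’s side of the wire. The sketch below is a bare-bones probe (plain HTTP, hypothetical host) that splits a single request into name lookup, connect, and time-to-first-byte so you can see which phase is actually eating the time; run it from the regions your users live in, not just from inside the cloud.

```python
import socket
import time

def probe(host, port=80, path="/"):
    """Time the phases of one plain-HTTP request: DNS, connect, first byte."""
    t0 = time.time()
    family, socktype, proto, _name, sockaddr = socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)[0]
    t_dns = time.time()

    sock = socket.socket(family, socktype, proto)
    sock.connect(sockaddr)
    t_connect = time.time()

    sock.sendall(("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)).encode())
    sock.recv(1)                       # block until the first response byte arrives
    t_first_byte = time.time()
    sock.close()

    return {
        "dns": t_dns - t0,
        "connect": t_connect - t_dns,
        "first_byte": t_first_byte - t_connect,
    }

# e.g. probe("example.com") -> {'dns': ..., 'connect': ..., 'first_byte': ...}
```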
10) The Next Big Thing
The cloud isn’t an endpoint in tech evolution, any more than the PC or the commodity server were final destinations. Something’s going to come after the cloud, and may well eclipse it or render it redundant. The point isn’t to speculate about what might come next, but rather to remain vigilant to change in the abstract. As the sages say, the only certainty is uncertainty, and the only constant thing is the next big thing.