There’s no getting away from it: Cloud computing has many potential benefits, but it has a number of drawbacks as well.
On the positive side, having applications provided from the cloud offers enterprises the possibility of low-cost compute resources that are almost infinitely scalable. There’s the ability to pay “by the hour” for resources only when they are needed, and sudden surges in demand can be accommodated very easily. Cloud computing also frees up capital that would otherwise be tied up in hardware and data center bricks and mortar, and it frees up IT staffers who would otherwise be tending to servers so they can work on more productive IT endeavors.
But it’s not all sunshine and roses: The drawbacks revolve around issues about data security and how sensible it is to store it with a third party (assuming regulatory requirements permit it); portability and the possibility of being locked in to one cloud provider; reliability; data logging; speed and the inevitable latency when dealing with servers in the cloud half a continent away; and geo-political worries — where in the world is the cloud data center running your apps, and do you want it there? This last is less of an issue for U.S.-based enterprises, but it is a very real concern for businesses in many other countries.
But for some organizations there’s one real show stopper when it comes to getting the benefits of cloud computing: heterogeneity. Many corporate data centers house several generations of servers from a variety of vendors, running different operating systems on different processors — Windows, AIX, Solaris, Linux, Intel, PowerPC, SPARC and so on. In contrast, most cloud services offer a limited choice of operating systems running on a narrow range of hardware.
This leaves heterogeneous enterprises in a bit of a quandary. It may be possible to offload some applications into the cloud, but the remainder still must be managed and run in-house. If that is the case, there may be some efficiency benefits, but a smaller data center could actually mean that many economies of scale are lost. What remains is a hodge-podge of different systems that take a great deal of time and many different skill sets to manage, while the easier-to-manage systems are gone. Efficiency actually goes down.
One solution is to operate a cloud environment in-house: a so-called “internal cloud,” said Steve Oberlin, chief scientist at Cassatt, a San Jose, Calif.-based IT infrastructure management software vendor. “Internal clouds help you to pool your computing resources into a cloud and manage it, applying server resources dynamically on the fly in response to demand,” he said. “What you end up with is higher utilization and efficiency.”
Many organizations have already embarked on virtualization programs to boost server utilization rates and reduce power and space requirements in the data center, but Oberlin said an internal cloud goes beyond this. It enables applications that are not suitable for virtualization (such as those that require the resources of an entire server at peak times) to run more efficiently, and it includes virtualized servers anyway: virtualization, in other words, is part of an internal cloud solution, he said.
But here’s an important question: Which is better, implementing an internal cloud or using a public cloud? The answer, according to James Staten, a principal analyst at Forrester, is that both have their advantages and disadvantages. “Internal clouds are good because you can follow all of your workflow and security guidelines, and ensure that you are running the right code. The trade-off is that you can’t reach the economies of scale that public cloud providers achieve,” he said.
“On the other hand, if you use a public cloud provider you end up having to do lots of work like license management and adapting to the processes of the public cloud. You have to work with what is on offer. But you do get the benefits of economies of scale.”
So what does it mean to run an internal cloud? At the most basic level, Cassatt’s Active Response software simply manages power by monitoring usage and applying policies to shut down servers at off-peak times when they are not required, such as on weekends. “You can get pretty dramatic savings on your power bill from this,” said Oberlin. “We see significant energy savings, and an ROI of just nine months.”
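To make the idea concrete, here is a minimal sketch of what such a policy-driven power manager could look like. The names, thresholds, and schedule are hypothetical and this is not Cassatt’s Active Response interface; it simply illustrates the “shut down idle servers during off-peak windows” rule Oberlin describes.

```python
# Illustrative sketch only -- not Cassatt's Active Response API.
# Idea: apply a policy that powers down servers that sit idle
# during off-peak windows (nights and weekends).

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Server:
    name: str
    cpu_utilization: float  # 0.0 - 1.0, averaged over the last hour
    powered_on: bool = True

# Hypothetical policy: weekends and late evenings are off-peak;
# servers under 5% utilization in those windows can be shut down.
OFF_PEAK_DAYS = {5, 6}          # Saturday, Sunday
OFF_PEAK_HOURS = range(22, 24)  # 22:00-24:00 on any day
IDLE_THRESHOLD = 0.05

def is_off_peak(now: datetime) -> bool:
    return now.weekday() in OFF_PEAK_DAYS or now.hour in OFF_PEAK_HOURS

def apply_power_policy(servers: list[Server], now: datetime) -> None:
    for server in servers:
        if server.powered_on and is_off_peak(now) and server.cpu_utilization < IDLE_THRESHOLD:
            print(f"Powering down idle server {server.name}")
            server.powered_on = False  # in practice: out-of-band power control

if __name__ == "__main__":
    fleet = [Server("web-01", 0.02), Server("batch-07", 0.63)]
    apply_power_policy(fleet, datetime(2009, 3, 21, 23, 0))  # a Saturday night
```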
But this in itself is not cloud computing in the usual sense of the term. To go further, Cassatt uses a database of server images, a set of rules that define the service levels that applications must achieve, and management software that controls the whole setup. Put simply, the management software monitors each application, and when necessary it boots appropriate servers with the correct image over the network to add resources to the applications. During less busy periods, unneeded resources are shut down to sit idle or are reallocated to other applications that need them. There is still some inefficiency in heterogeneous environments because some hardware can run only some, but not all, server images; however, resources can still be pooled among compatible applications. “In terms of cost savings, a typical data center with low double-digit efficiency can see 40 percent to 50 percent reduction in the number of servers required to provide the same sort of service level, and still have better headroom and agility,” said Oberlin.
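A rough sketch of that control loop, using hypothetical data structures rather than Cassatt’s actual implementation, might look like this: server images, per-application service-level rules, and a manager that adds compatible capacity when a service level is breached and reclaims it when there is headroom.

```python
# Conceptual sketch of an internal-cloud control loop (hypothetical names).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Application:
    name: str
    image: str                 # which server image this app boots from
    response_time_ms: float    # current measured service level
    sla_ms: float              # target defined by the rules
    servers: list[str] = field(default_factory=list)

@dataclass
class Server:
    name: str
    compatible_images: set[str]        # heterogeneity: not every box runs every image
    assigned_to: Optional[str] = None

def rebalance(apps: list[Application], pool: list[Server]) -> None:
    for app in apps:
        if app.response_time_ms > app.sla_ms:
            # Service level breached: find a free, compatible server and network-boot it.
            for server in pool:
                if server.assigned_to is None and app.image in server.compatible_images:
                    server.assigned_to = app.name
                    app.servers.append(server.name)
                    print(f"Booting {server.name} with image '{app.image}' for {app.name}")
                    break
        elif app.response_time_ms < 0.5 * app.sla_ms and len(app.servers) > 1:
            # Plenty of headroom: release a server back to the shared pool.
            released = app.servers.pop()
            for server in pool:
                if server.name == released:
                    server.assigned_to = None
            print(f"Reclaiming {released} from {app.name}")
```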
In fact, enterprises aren’t necessarily restricted to a single data center when it comes to running an internal cloud. Cassatt is soon to release an “Enterprise edition” of its Active Response software, which pools resources across different data centers. And it doesn’t end there. The software will also allow an enterprise to extend its resource pool with resources from an external cloud provider. “You could have a fixed amount of resources available to you locally or at another data center, and deal with unplanned peaks in demand by using resources from an external cloud provider,” Oberlin said.
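The bursting idea can be illustrated with a small, hypothetical allocation routine: satisfy demand from local data-center pools first, and only send the overflow from an unplanned peak to an external provider. Again, this is a sketch of the concept, not the product’s behavior.

```python
# Hypothetical illustration of "cloud bursting": use local capacity first,
# and rent external cloud capacity only for the leftover peak.

LOCAL_POOLS = {"dc-east": 40, "dc-west": 25}   # free servers per data center

def allocate(demand: int) -> dict[str, int]:
    plan: dict[str, int] = {}
    remaining = demand
    for dc, free in LOCAL_POOLS.items():
        take = min(free, remaining)
        if take:
            plan[dc] = take
            remaining -= take
    if remaining:                      # unplanned peak: burst to a public cloud
        plan["external-cloud"] = remaining
    return plan

print(allocate(80))   # {'dc-east': 40, 'dc-west': 25, 'external-cloud': 15}
```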
This practice of using an external public cloud provider for extra capacity will become increasingly common, Forrester’s Staten said. “At the moment, we are seeing companies using isolated internal clouds, but we certainly think they will end up adopting a ‘hybrid cloud’ or ‘cloud bursting’ approach.” IBM demonstrated a system earlier this month that enables companies to move application processing from an internal cloud to a public cloud facility, while keeping all data stored within the private cloud. This type of hybrid cloud approach could prove popular with companies that can’t otherwise use a public cloud for security or regulatory reasons.
It’s still early days for cloud computing, but if efficiency gains on the order of 40 percent to 50 percent are possible, then enterprises unwilling or unable to use a public cloud provider will have to give internal clouds a very long, hard look.
This article was first published on ServerWatch.