For many enterprises, the safest on-ramp to cloud computing is to build a private cloud. Private clouds don’t have as many security, compliance and data-ownership challenges as public ones, but they are not risk free.
Legacy environments, cost overruns and application performance can all undermine your private cloud initiatives. That doesn’t mean you should rethink the decision to build a private cloud, but it does mean you need to carefully think through the challenges—and be realistic about potential benefits.
If you understand the challenges you’ll face and plan ahead to overcome them, building a private cloud will put you on the road to a streamlined, efficient and flexible IT infrastructure.
To help you succeed when it comes time to build your own private cloud, here are five tips from private cloud experts:
1. Avoid Forklift Upgrades.
Many enterprises are slow to build private clouds because vendors try to sell them on rebuilding their IT infrastructures from scratch, a costly proposition.
A large global financial institution (which preferred to remain unnamed) wanted to reduce the complexity and fragmentation of its IT infrastructure by implementing a private cloud. Its service delivery process was slow, with turnaround times of anywhere from 14 to 60 days, yet the institution had too much invested in its existing environment to simply abandon it. It turned to Adaptive Computing, a provider of cloud management software, for help.
“Legacy environments are the harsh reality of enterprise IT. Even the newest technology becomes legacy within a few short years,” said Rob Clyde, CEO of Adaptive Computing. “Most large vendors are happy to sell you a private cloud, provided you use their latest products from top to bottom. That just doesn’t work in real-world, heterogeneous environments. No one wants to do a complete swap-out of their existing systems.”
A better approach, and one many vendors will try to tell you won’t work, is to simply augment your existing investments in software and systems with cloud projects.
“One example of this is physical server management. Most private cloud vendors would have you believe that you have to virtualize everything to deploy a private cloud. In reality, physical servers benefit greatly from being integrated into a cloud management system,” Clyde said.
Adaptive Computing helped the global financial institution layer cloud management over its mixed legacy and virtualized infrastructure, building a cloud-augmented environment that now includes thousands of servers and more than 10,000 virtual machines (VMs). Once the initial implementation was complete, the institution immediately began to see improved efficiency and cost savings.
According to Adaptive Computing, the institution projects more than $1 billion in combined capital and operational savings, while service delivery times have dropped from several weeks or months to just a few hours.
Due to the success of the initial cloud rollout, the institution now has a goal of hosting 80 percent of IT services in the cloud.
2. Determine How You Will Measure Success.
The easiest way to measure success is ROI, but it’s not the only way. The financial institution mentioned above was every bit as concerned with reducing service delivery times and protecting existing investments as with achieving ROI on its new cloud tools.
Jay Litkey, CEO of private cloud management company Embotics, recommends that you consider these factors:
- How much IT sprawl and untracked inventory can you eliminate? For one client, Embotics found that out of more than 650 VMs, 5 were missing from inventory and 48 were powered off, waste that cost the company more than $140K every month. (A rough sketch of this kind of waste calculation appears after this list.)
- How many system administrators do you need? Typically, cloud-based environments require fewer admins.
- How many administrative tasks can you automate?
- How much better is the end-user experience?
- How well can you automate policy enforcement to streamline regulatory compliance?
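To make the sprawl numbers concrete, here is a minimal sketch of how you might estimate that kind of waste yourself. The per-VM cost and idle-cost fraction below are hypothetical placeholders, not figures from the Embotics example, so the output is an order-of-magnitude check rather than a reproduction of the $140K figure.

```python
# Back-of-the-envelope estimate of monthly spend wasted on untracked and
# powered-off VMs. The per-VM cost and idle-cost fraction are illustrative
# assumptions, not figures from the Embotics case study.

MONTHLY_COST_PER_VM = 2_000   # assumed fully loaded monthly cost per VM (USD)
IDLE_COST_FRACTION = 0.6      # assumed share of that cost a powered-off VM still incurs

def wasted_monthly_spend(untracked_vms: int, powered_off_vms: int) -> float:
    """Estimate monthly spend on VMs that deliver no business value."""
    untracked_waste = untracked_vms * MONTHLY_COST_PER_VM
    idle_waste = powered_off_vms * MONTHLY_COST_PER_VM * IDLE_COST_FRACTION
    return untracked_waste + idle_waste

if __name__ == "__main__":
    # VM counts from the example above; the dollar result depends entirely
    # on the assumed rates.
    print(f"Estimated waste: ${wasted_monthly_spend(5, 48):,.0f} per month")
```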
3. Plan for the Future Growth and Changing Makeup of Your Organization.
The University of California, Berkeley, provides IT capabilities and services to its own internal clients, as well as to other campuses in the UC system.
Higher education has been going through dramatic changes of late, with an increased emphasis on online learning tools. To meet future needs, UC-Berkeley built a private cloud, hosting servers in a centralized architecture for approximately 30 tenants within its own organization as well as for other schools, such as UC-Merced and UCLA.
But the university IT department found it challenging to extend versatility and flexibility to its tenants while controlling access, protecting security and providing availability.
In other words, if your organization is large enough, you may well face the same challenges with your private cloud that you would have with a public or hybrid cloud.
The first technical hurdle for UC-Berkeley was implementing two-factor authentication so that its datacenter tenants could securely access VMware vCenter. Two-factor authentication was mandated by the UC system, and the university had already invested in RSA SecurID infrastructure that it wanted to leverage. Establishing two-factor access to the virtual infrastructure not only satisfied internal policy but also instilled confidence across the various tenants and encouraged greater participation.
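The underlying pattern is simple: a privileged virtualization-management action should not proceed until a second factor has been verified. The sketch below illustrates that pattern generically, using TOTP codes via the pyotp library as a stand-in for a hardware-token flow; it is not how RSA SecurID or the HyTrust appliance actually integrate with vCenter.

```python
# Generic illustration of gating a privileged virtualization-management action
# behind a second factor. TOTP via pyotp stands in for a hardware-token flow;
# this is NOT the RSA SecurID/HyTrust integration, just the shape of the control.
import pyotp

def second_factor_ok(totp_secret: str, submitted_code: str) -> bool:
    """Verify the one-time code the administrator supplied."""
    return pyotp.TOTP(totp_secret).verify(submitted_code)

def privileged_vcenter_action(user: str, action: str, totp_secret: str, submitted_code: str) -> None:
    """Refuse to run a sensitive action unless the second factor checks out."""
    if not second_factor_ok(totp_secret, submitted_code):
        raise PermissionError(f"{user}: second factor required before '{action}'")
    # ... hand off to the real vSphere/vCenter API call here ...
    print(f"{user} performed '{action}'")
```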
Curtis Salinas, the systems administrator for Information Services and Technology, noted that the vSphere access challenge echoed what they had already gone through with Windows. “It happened for our Windows infrastructure several years ago, and now it’s happening at the hardware virtualization layer. We’re too big for our britches and lack a solid methodology for monitoring, securing, and maintaining our vSphere systems as we continue to expand,” Salinas said.
To gain these capabilities, UC-Berkeley deployed HyTrust’s virtualization security appliance. The university initially sought out HyTrust for its ability to deliver two-factor authentication for the virtual infrastructure via RSA SecurID; it then took advantage of additional capabilities such as host configuration templates, auditing, and root-password vaulting.
With virtualization security and access taken care of, UC-Berkeley can securely scale its infrastructure up as more students, and even more partner campuses, come online.
4. Strive to Meet—Or Beat—the Performance of Your Previous Architecture.
Traditionally, performance-monitoring tools have been relegated to silos: database profilers for DBAs, agent-based application instrumentation for developers, and packet sniffers and NetFlow analyzers for the networking teams. When an app performance problem arises, it’s like the old parable of the blind men trying to identify an elephant from the one part each is touching: visibility is fragmented and incomplete, so the problem as a whole is usually misunderstood.
This siloed approach leaves IT organizations in near-constant fire-fighting mode; they’re reacting to unforeseen problems instead of anticipating and solving them proactively.
It’s also an approach that does not translate to cloud-based environments.
Web-based electronic medical records (EMR) provider Practice Fusion wanted to break free of siloed troubleshooting to gain a holistic view of app performance. “We had many point solutions that provided information about discrete components, but nothing that gave us a holistic, correlated view,” said John Hluboky, VP of Technical Operations at Practice Fusion. “We were looking for a platform that could give us comprehensive visibility and foster greater collaboration across teams.”
Practice Fusion eventually adopted the network-based application performance monitoring (APM) solution from ExtraHop.
“We recently used ExtraHop to successfully migrate a portion of our web application from a physical to a virtual infrastructure,” said Hluboky. “This part of the application was customized to run on a particular HP server platform, and previous attempts to virtualize the workload failed, creating race conditions and similar problems.”
The Practice Fusion IT team used the ExtraHop system to baseline several key performance indicators, including application response time, and then they spun up a parallel virtual infrastructure to compare performance. “We could prove that performance was the same or slightly better on the virtual infrastructure. Because the HP servers were reaching end-of-life, we would have spent up to $75,000 purchasing new hardware and revalidating the software for the new platform,” Hluboky said.
With performance metrics in hand, Practice Fusion was able to avoid that expenditure, and they were able to prove to the rest of the organization that cloud-based performance would live up to their expectations.
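Practice Fusion’s approach generalizes: baseline key indicators on the old platform, measure the same indicators on the new one, and only then cut over. Below is a minimal sketch of that comparison, assuming response-time samples (in milliseconds) have already been exported from whatever monitoring tool you use; it is not tied to ExtraHop’s product or API.

```python
# Baseline-and-compare sketch for application response times collected from
# two environments (e.g., physical vs. virtual).
from statistics import mean, quantiles

def summarize(samples: list[float]) -> dict[str, float]:
    """Mean, median, and 95th-percentile response time for one environment."""
    cuts = quantiles(samples, n=100)  # 99 percentile cut points
    return {"mean": mean(samples), "p50": cuts[49], "p95": cuts[94]}

def compare(physical: list[float], virtual: list[float]) -> None:
    """Print the KPIs side by side so the migration call rests on data."""
    baseline, candidate = summarize(physical), summarize(virtual)
    for key in baseline:
        delta = candidate[key] - baseline[key]
        print(f"{key:>4}: physical {baseline[key]:8.1f} ms   virtual {candidate[key]:8.1f} ms   ({delta:+.1f} ms)")
```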
5. Figure Out Who Will Pay for What.
One common mistake companies make when deploying private clouds is overlooking payments and chargebacks. Without a usage accounting system, IT could be on the hook for costs that should come out of other departments’ budgets.
It’s important to have cloud accounting tools in place to monitor who is using which computing resources and to bill them appropriately for that usage.
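As a concrete illustration, here is a minimal chargeback sketch, assuming each VM record carries a department tag and metered usage. The rate-card values and department names are hypothetical, not drawn from the article, and a real deployment would pull this data from its cloud management platform.

```python
# Minimal usage-based chargeback: roll metered VM usage up into a per-department bill.
from collections import defaultdict

# Assumed rate card (USD); adjust to your own fully loaded costs.
RATE_CARD = {"cpu_hour": 0.05, "gb_ram_hour": 0.01, "gb_storage_month": 0.10}

def monthly_chargeback(vm_usage: list[dict]) -> dict[str, float]:
    """Sum each VM's metered cost into its owning department's bill."""
    bills: dict[str, float] = defaultdict(float)
    for vm in vm_usage:
        cost = (vm["cpu_hours"] * RATE_CARD["cpu_hour"]
                + vm["gb_ram_hours"] * RATE_CARD["gb_ram_hour"]
                + vm["gb_storage"] * RATE_CARD["gb_storage_month"])
        bills[vm["department"]] += cost
    return dict(bills)

if __name__ == "__main__":
    usage = [
        {"department": "Marketing", "cpu_hours": 1440, "gb_ram_hours": 5760, "gb_storage": 200},
        {"department": "Engineering", "cpu_hours": 7200, "gb_ram_hours": 28800, "gb_storage": 1500},
    ]
    for dept, bill in monthly_chargeback(usage).items():
        print(f"{dept}: ${bill:,.2f}")
```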
“Clouds make it much easier than ever before for users to consume IT resources. Without accounting, users will invariably over-consume and waste resources, since there is no incentive to do otherwise. Just like there is no such thing as a free lunch, there are no free IT services,” said Rob Clyde, CEO of Adaptive Computing.
In a cloud environment, IT faces a classic tragedy-of-the-commons scenario, with shared IT resources serving as the commons. If users aren’t held accountable for the services they consume, your expected private cloud ROI could evaporate quickly.
Jeff Vance is a freelance writer based in Santa Monica, CA, who focuses on emerging technology trends. Connect with him on Twitter @JWVance or by email at jeff@sandstormmedia.net.