For many enterprises, the safest on-ramp to cloud computing is to build a private cloud. Private clouds don't have as many security, compliance and data-ownership challenges as public ones, but they are not risk-free.
Legacy environments, cost overruns and application performance can all undermine your private cloud initiatives. That doesn’t mean you should rethink the decision to build a private cloud, but it does mean you need to carefully think through the challenges—and be realistic about potential benefits.
If you understand the challenges you'll face and plan ahead to overcome them, building a private cloud will put you on the road to a streamlined, efficient and flexible IT infrastructure.
To help you succeed when it comes time to build your own private cloud, here are five tips from private cloud experts:
Many enterprises are slow to build private clouds because vendors try to sell them on the concept of rebuilding their IT infrastructures from scratch, a costly proposition.
A large global financial institution (which preferred to remain unnamed) wanted to reduce the complexity and fragmentation of its IT infrastructure by implementing a private cloud. Its service delivery process was slow, with turnaround times ranging from 14 to 60 days. It also had too much invested in its existing environment to simply abandon it. The financial institution turned to Adaptive Computing, a provider of cloud management software, for help.
"Legacy environments are the harsh reality of enterprise IT. Even the newest technology becomes legacy within a few short years," said Rob Clyde, CEO of Adaptive Computing. "Most large vendors are happy to sell you a private cloud, provided you use their latest products from top to bottom. That just doesn’t work in real-world, heterogeneous environments. No one wants to do a complete swap-out of their existing systems."
A better approach, and one many vendors will try to tell you won't work, is to simply augment your existing investments in software and systems with cloud projects.
"One example of this is physical server management. Most private cloud vendors would have you believe that you have to virtualize everything to deploy a private cloud. In reality, physical servers benefit greatly from being integrated into a cloud management system," Clyde said.
Working with the institution's mixed legacy and virtualized infrastructure, Adaptive Computing helped it build a cloud-augmented environment that now includes thousands of servers and more than 10,000 virtual machines (VMs). Once the initial implementation was complete, the institution immediately began to see improved efficiency and cost savings.
According to Adaptive Computing, the institution projects more than $1 billion in savings between capital and operational expenditures, while reducing the service delivery process to just a few hours instead of several weeks or months.
Due to the success of the initial cloud rollout, the institution now has a goal of hosting 80 percent of IT services in the cloud.
The easiest way to measure success is ROI, but that's not the only way. The financial institution mentioned above was every bit as concerned about reducing service delivery times and protecting existing investments as it was about achieving ROI for the new cloud tools.
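For the ROI piece alone, a back-of-the-envelope payback model is often enough to frame the conversation. The figures below are hypothetical, not the institution's actual numbers:

```python
def simple_roi(annual_savings: float, upfront_cost: float, years: int) -> float:
    """Net return over the period as a fraction of the upfront investment."""
    return (annual_savings * years - upfront_cost) / upfront_cost

# Hypothetical figures: $2M in cloud tooling and migration costs,
# $1.5M saved per year in capital and operational expenditures.
print(f"{simple_roi(1_500_000, 2_000_000, 3):.0%}")  # -> 125% over three years
```

A model like this captures only the dollars; as the financial institution's experience shows, service delivery time and protection of existing investments deserve equal weight.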
Jay Litkey, CEO of private cloud management company Embotics, recommends that you consider these factors:
The University of California, Berkeley, provides IT capabilities and services to its own internal clients, as well as to other campuses in the UC system.
Higher education has been going through dramatic changes of late, with an increased emphasis on online learning tools. To meet future needs, UC Berkeley built a private cloud, hosting servers in a centralized architecture for approximately 30 different tenants throughout its own system, as well as for other UC schools, such as UC Merced and UCLA.
But the university IT department found it challenging to extend versatility and flexibility to its tenants while controlling access, protecting security and providing availability.
In other words, if your organization is large enough, you may well face the same challenges with your private cloud that you would have with a public or hybrid cloud.
The first technical hurdle for UC Berkeley was to implement two-factor authentication so that its datacenter tenants could securely access VMware vCenter. Two-factor authentication was required by the UC system, and the university had already invested in RSA SecurID infrastructure, so it wanted to leverage that investment. Establishing two-factor access for virtual infrastructures was not only compliant with internal policy but also instilled confidence across the various tenants and encouraged greater participation.
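RSA SecurID itself is proprietary, but the general shape of the one-time-code verification behind this kind of two-factor setup can be sketched with the standard HOTP algorithm (RFC 4226), which hardware and software tokens commonly implement:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: use the low nibble of the last byte as an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 Appendix D test vector: ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
print(hotp(secret, 0))  # -> "755224"
```

In a real deployment the server compares the submitted code against the expected value for the current counter (or time window, in the TOTP variant), and the second factor is layered on top of the usual password check, which is what an appliance in front of vCenter enforces.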
Curtis Salinas, the systems administrator for Information Services and Technology, noted that the vSphere access challenge echoed what they had already gone through with Windows. "It happened for our Windows infrastructure several years ago, and now it’s happening at the hardware virtualization layer. We’re too big for our britches and lack a solid methodology for monitoring, securing, and maintaining our vSphere systems as we continue to expand," Salinas said.
To gain these capabilities, UC Berkeley deployed HyTrust's virtualization security appliance. Initially, the university selected HyTrust for its ability to deliver two-factor authentication for the virtual infrastructure via RSA SecurID; it then went on to use features such as host configuration templates, auditing, and root-password vaulting.
With virtualization security and access taken care of, UC-Berkeley can securely scale its infrastructure up as more students, and even more partner campuses, come online.