
Deploying Cloud Infrastructure at the Speed of the Cloud


Two of the chief values of the cloud are speed and agility. Cloud orchestration and automation tools promise to let IT organizations move significantly faster and become more competitive as service organizations. While orchestration tools typically do a great job of deploying applications flexibly once virtualization is in place, the underlying infrastructure (and, worse still, the non-virtualized and legacy infrastructure) is not very agile or automated. Few organizations are addressing this head on. Instead, physical infrastructure issues get a lot of hand waving, and they remain the Achilles' heel of IT's attempts to get agile. It doesn't have to be this way.

Why Infrastructure Agility Matters

Enterprise IT is under pressure from many sides—mandates, budget, user expectations and competitive gaps.

First of all, it's a truism that enterprise IT's mandate is shifting aggressively toward supporting innovation agility. However, budgets are not increasing to help IT meet this mandate. According to survey results from Gartner's 2014 CIO Agenda Report, global CIOs report a cumulative IT budget increase of only 0.2 percent in 2014.

The reason may be another point of pressure: users' consumerized expectations of IT are driving them elsewhere. The same Gartner report has CIOs reporting that 25 percent of IT spending is happening outside of the IT budget, and that's only what they can see. In other words, enterprise IT organizations are losing "market share" to public cloud offerings.

This shift may be explained by the competitive gap between the near instantaneous availability of public cloud resources and the relatively slow delivery available from traditional IT processes. According to Enterprise Management Associates’ (EMA) 2014 Software Defined Data Center study, 47 percent of enterprises self-report taking anywhere from a week to over a month to deploy infrastructure for application developers and testers.

[Figure: Infrastructure Delivery Times. Source: Enterprise Management Associates, Software Defined Data Center 2014 Report]

Compared to the “minutes” it takes to get access to public cloud resources, this is a significant competitive gap. Of course, not everything will shift to public clouds anytime soon. There will be plenty of infrastructure business for IT to handle well into the future, but it’s in the best interest of IT groups to raise their game.

Physical Infrastructure Is Still a Major Obstacle to Cloud Agility

Two of the chief tenets of the cloud and accompanying cloud automation are that all infrastructure becomes virtualized and has an automation-friendly, preferably REST-style API. This is certainly a helpful and simplifying vision. It makes it possible to turn “infrastructure into code” because everything is abstracted, software-defined and programmable.
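
To make the "infrastructure into code" idea concrete, here is a minimal sketch of provisioning through a REST-style API. The endpoint, payload fields and credential are purely hypothetical and not drawn from any specific product:

```python
import requests

API = "https://cloud.example.com/api/v1"        # hypothetical provisioning endpoint
HEADERS = {"Authorization": "Bearer <token>"}   # placeholder credential

# The desired infrastructure is described as data, not as manual steps.
vm_spec = {
    "name": "web-01",
    "cpu": 2,
    "memory_gb": 8,
    "image": "ubuntu-14.04",
    "network": "frontend",
}

# One REST call stands in for what would otherwise be a manual provisioning workflow.
resp = requests.post(f"{API}/virtual-machines", json=vm_spec,
                     headers=HEADERS, timeout=30)
resp.raise_for_status()
print("Provisioned:", resp.json().get("id"))
```

Because the request is just data, the same description can be version-controlled, reviewed and replayed, which is what makes the infrastructure "code" rather than a runbook.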

The chief aspirational examples for this vision are webscale companies like Netflix that operate their businesses on public cloud infrastructure from the likes of Amazon Web Services (AWS). Unfortunately, for most enterprise organizations, which don't have the luxuries of youth and the seemingly unlimited IT budgets that webscale businesses do, IT infrastructure reality can be quite a bit messier. According to a 2013 NTT Communications study, 58 percent of CIOs cited the complexity of their existing systems as one of the biggest obstacles to cloud adoption. Any IT organization that has been in existence for more than 10 years has legacy infrastructure that isn't virtualized and is probably not going to disappear anytime soon.

The physical realities of infrastructure have an impact on cloud-like agility for IT in a couple of different ways:

  • Day-zero deployment. Most enterprise IT infrastructure investments are still going toward private clouds today, and there is a sea change underway in how IT teams purchase infrastructure equipment. The old way was to buy best-of-breed solutions in each category (compute, networking, storage and virtualization) and then integrate the pieces in-house. Increasingly, there is a move toward converged infrastructure products like VCE Vblock, reference architectures like EMC VSPEX and even hyper-converged solutions like Nutanix or the recently announced VMware EVO:RAIL. The advantage of converged infrastructure, reference architectures and hyper-converged solutions is that they come integrated, tested and certified as interoperable, able to reach certain performance benchmarks or support a certain number of virtual machines or desktops. This relieves the enterprise IT team of having to act as the systems integrator. However, even with integrated products, the day-zero provisioning work to stand up a data center pod based on these products and architectures is anything but trivial. Typically, it involves multiple domain experts working together to manually configure the pod for a particular deployment use case, and it often takes anywhere from one to three weeks. This would be fine if it weren't such a wasteful use of talent and if such long provisioning and deployment timeframes weren't so painfully out of step with public cloud providers, where you can spin up VMs in minutes.
  • Supply chain. Remembering the competitive gap versus public cloud providers in time to deploy, it's relevant to note that the deployment of private cloud infrastructure has a real supply chain dynamic. Most datacenter infrastructure is bought through distribution and systems integration channels today. This matters because, even if we assume that distributors maintain ample stocks of inventory (in a just-in-time logistics world) so that there is never a lag due to shipment from the equipment manufacturers, the distributor or systems integrator typically still has to perform some level of provisioning. Remember, too, that the IT products distribution business operates on relatively thin margins. So while manufacturers with much thicker margins employ lots of highly specialized people, the distribution channel employs fewer. If provisioning takes experts sitting in a room for three weeks, there are fewer of them to go around, and the supply chain becomes a constraint on how quickly enterprise IT groups can even receive the infrastructure. If enterprise IT is threatened competitively by the responsiveness of public cloud and shadow IT, then its supply chain compounds the threat.
  • Infrastructure as a Service. When building a private or hybrid cloud IaaS offering, many IT groups are challenged by the fact that they must deal with physical infrastructure assets such as non-virtualized/dedicated servers, networking switches and legacy, non-virtualized storage arrays. The presence of these assets is absolutely non-trivial. For example, the EMA SDDC study found that 83 percent of IT organizations are running applications on dedicated servers. In addition, while Software Defined Networking (SDN) is at the peak of its hype cycle, the fact is that it barely exists in deployment, especially in enterprise IT environments. In fact, an August 2014 poll found that two-thirds of respondents pegged SDN as being at least three to five years away. As a result, IaaS that can’t address both virtualized and non-virtualized infrastructure can become yet another silo that doesn’t help move the whole business forward.

Article contributed by Alex Henthorn-Iwane, vice president of marketing, QualiSystems

Where Do Orchestration and Automation Fit In?

There are wonderful orchestration and automation tools available today; however, the vast majority fall into two major categories, neither of which sufficiently addresses the physical realities of infrastructure agility:

  • Virtualization-Only Orchestration. Most orchestration and automation tools effectively assume that everything is virtualized. Whether they have a whole-stack orientation or a network virtualization orientation, these solutions simply don't deal with physical infrastructure in any meaningful way.
  • Vertically-Integrated Orchestration. Most datacenter compute infrastructure vendors offer an orchestration tool that addresses both physical and virtual infrastructure. However, all that infrastructure has to be supplied by that vendor, and the tool rarely extends to the other physical or legacy infrastructure that is already in place.

Of course, this is not to say that these approaches lack validity or usefulness; they clearly fit certain use cases and environments, but they have clear limits.

Typically, the solution offered to overcome these challenges is an API that you can use to extend the tool's business logic to cover whatever it doesn't do out of the box, including physical infrastructure. There are real problems with this, but suffice it to say that most enterprise IT teams aren't able, even if they wanted to, to build and sustain a home-grown automation architecture (which usually amounts to a massive pile of hard-to-maintain scripts) to handle these requirements and integrate with those APIs.

Agile Infrastructure Orchestration Technology and Best Practices

The good news is that orchestration technology and matching best practices exist today to solve the challenge of heterogeneous infrastructure. The approach includes the following key principles:

  1. Building-block object architecture. To accommodate orchestration of both physical inventory and virtualized resources as a common pool, all resources and their provisioning interfaces are represented or coded as building-block objects. This means that infrastructure inventory resources can be assigned common attributes that identify which provisioning objects should be used. For example, are they virtual machines, are they IOS or IOS-XR switches, or are they non-virtualized Dell or HP servers running Linux? Provisioning objects also share common attributes that abstract the input parameters from the underlying API syntax (a rough code sketch follows this list). A major advantage of this approach is that it establishes the basis for automation re-use, which dramatically reduces the cost and time needed to maintain automation. This stands in contrast to a system based on fragile scripts, which tend to be hard to maintain because common functions and API calls are duplicated across dozens to hundreds of separate scripts, and because scripts are harder to understand, especially since they tend not to be well documented.
  2. Out of the Box and Do It Yourself (DIY) provisioning interface creation. It's relatively easy for automation vendors to create libraries for common cloud APIs. It's an ocean-boiling exercise to try to support all the endless generations of interfaces within the universe of infrastructure deployed over the years by enterprise IT groups. Complete dependence on automation vendors to create and maintain these interfaces robs the automation process of much of its agility, since code updates may take months and business negotiations may be needed before work on those interfaces even starts. In many cases, only a small subset of an API is needed to provision the infrastructure anyway. So a DIY option for creating your own provisioning objects is essential to maintaining automation agility.
  3. Visual orchestration and automation authoring tools. As stated, most enterprise IT groups don't have a ton of programmers. The last thing these groups need is to be trapped in a methodology that is not built for their personnel profile. Enterprise IT personnel are used to working with UI tools. GUI orchestration tools are fairly common, but they should have full flexibility to construct resource environments, including arbitrary network topologies that connect all the diverse resources that may be needed. The orchestration should allow drag and drop of both physical and virtual resources onto a design canvas and manage connectivity at both the physical and virtual layers. Once automation objects are established, you don't want the business logic for environment creation and provisioning to be created only by programmers. A visual automation authoring tool allows knowledgeable non-programmers to build workflows by drag and drop. The combination of object re-use with a visual authoring tool increases the number of personnel who can create and maintain automation, and that makes for a truly sustainable automation practice.
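
As a rough illustration of the building-block object idea in points 1 and 2, here is a minimal sketch in Python. The class names, attributes and resources are hypothetical and not drawn from any particular orchestration product; the point is that a common provisioning interface hides the underlying API or CLI syntax, and a DIY object for a legacy device plugs into the same pool as vendor-supplied ones:

```python
from abc import ABC, abstractmethod

class ProvisioningObject(ABC):
    """Building-block object: common attributes, resource-specific logic hidden inside."""

    def __init__(self, name: str, address: str):
        self.name = name          # inventory identifier shared by all resource types
        self.address = address    # management address, physical or virtual

    @abstractmethod
    def provision(self, params: dict) -> None:
        """Apply a deployment-specific configuration to this resource."""

class VirtualMachine(ProvisioningObject):
    def provision(self, params: dict) -> None:
        # Would call a hypervisor or cloud API here.
        print(f"Creating VM {self.name} with {params}")

class IosSwitch(ProvisioningObject):
    def provision(self, params: dict) -> None:
        # Would push CLI or NETCONF configuration to the physical switch here.
        print(f"Configuring switch {self.name} with VLANs {params.get('vlans')}")

# A DIY object covering only the small slice of a legacy device's API that provisioning needs:
class LegacyStorageArray(ProvisioningObject):
    def provision(self, params: dict) -> None:
        print(f"Carving a {params.get('size_gb')} GB LUN on {self.name}")

# The orchestrator treats virtual and physical resources as one pool.
environment = [
    (VirtualMachine("web-01", "10.0.0.11"), {"cpu": 2, "memory_gb": 8}),
    (IosSwitch("tor-switch-3", "10.0.1.3"), {"vlans": [100, 200]}),
    (LegacyStorageArray("array-7", "10.0.2.7"), {"size_gb": 500}),
]
for resource, params in environment:
    resource.provision(params)
```

Because the workflow loops over a common interface, adding a new resource type means writing one new building-block object, not another standalone script that duplicates the surrounding logic.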

Enterprise IT teams have a real need to increase their agility in deploying infrastructure. In the rush to create and demonstrate that agility, don't forget to fully account for all the infrastructure that end users need, including physical infrastructure. Then choose orchestration technology and establish an automation practice that will manage that infrastructure and allow you to create and maintain automation for the long haul.
