Where Do Orchestration and Automation Fit In?
There are wonderful orchestration and automation tools available today; however, the vast majority fall into two major categories, neither of which sufficiently addresses the physical realities of infrastructure agility:
- Virtualization-Only Orchestration. Most orchestration and automation tools effectively assume that everything is virtualized. Whether oriented toward the whole stack or toward network virtualization, these solutions simply don't deal with physical infrastructure in any meaningful way.
- Vertically-Integrated Orchestration. Most datacenter compute infrastructure vendors offer an orchestration tool that addresses both physical and virtual infrastructure. However, all that infrastructure has to be supplied by that vendor, and it rarely will extend to all the other physical or legacy infrastructure that is in place.
Of course, both of these approaches have validity and are useful for certain use cases and environments, but they have clear limits.
Typically, the solution offered to overcome these challenges is an API that you can use to extend the tool's business logic to cover whatever it doesn't do out of the box, including physical infrastructure. There are some real problems with this, but suffice it to say that most enterprise IT teams aren't able, even if they wanted to, to build and sustain a home-grown automation architecture (which usually amounts to a massive pile of hard-to-maintain scripts) to handle these requirements and integrate with those APIs.
Agile Infrastructure Orchestration Technology and Best Practices
The good news is that orchestration technology and matching best practices exist today to solve the challenge of heterogeneous infrastructure. The approach rests on the following key principles:
- Building-block object architecture. To orchestrate physical inventory and virtualized resources as a common pool, all resources and their provisioning interfaces are represented or coded as building-block objects. Infrastructure inventory resources can then be assigned common attributes that identify which provisioning objects should be used: are they virtual machines, IOS or IOS-XR switches, or non-virtualized Dell or HP servers running Linux? Provisioning objects likewise share common attributes that abstract the input parameters from the underlying API syntax. A major advantage of this approach is that it establishes the basis for automation re-use, which dramatically reduces the cost and time needed to maintain automation. This contrasts with systems built on fragile scripts, which are hard to maintain because common functions and API calls are duplicated across dozens or hundreds of separate scripts, and because scripts are harder to understand, especially when they are poorly documented.
- Out of the Box and Do It Yourself (DIY) provisioning interface creation. It's relatively easy for automation vendors to create libraries for common cloud APIs. It's an ocean-boiling exercise to try to support all the endless generations of interfaces within the universe of infrastructure deployed over the years by enterprise IT groups. Complete dependence on automation vendors to create and maintain these interfaces robs the automation process of much of its agility, since code updates may take months and business negotiations may be needed before coding even starts. In many cases, only a small subset of an API is needed to provision the infrastructure anyway. So, a DIY option for creating your own provisioning objects is essential to maintaining automation agility.
- Visual orchestration and automation authoring tools. As stated, most enterprise IT groups don't have a ton of programmers. The last thing these groups need is to be trapped in a methodology that is not built for their personnel profile. Enterprise IT personnel are used to working with UI tools. GUI orchestration tools are fairly common, but they should have full flexibility to construct resource environments, including arbitrary network topologies that connect all the diverse resources that may be needed. The orchestration should allow drag and drop of both physical and virtual resources into a design canvas and manage connectivity at both the physical and virtual layers. Once automation objects are established, you don't want the business logic associated with environment creation and provisioning to be created only by programmers. A visual automation authoring tool allows knowledgeable non-programmers to do drag-and-drop workflow creation. The combination of object re-use with a visual authoring tool increases the number of personnel who can create and maintain automation, which makes for a truly sustainable automation practice.
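To make the first two principles concrete, here is a minimal sketch in Python of the building-block object pattern. All class and method names are hypothetical, not taken from any particular product: inventory resources carry a common attribute (`kind`) that selects a provisioning object, every provisioning object hides its API syntax behind one shared interface, and a DIY provisioner for uncovered gear plugs into the same registry.

```python
# Hypothetical sketch of the building-block object architecture:
# resources select provisioners by attribute; provisioners share one
# interface; DIY provisioners register alongside vendor-supplied ones.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class Resource:
    """An inventory item; `kind` selects the matching provisioner."""
    name: str
    kind: str                      # e.g. "vm", "ios-switch", "linux-server"
    params: dict = field(default_factory=dict)


class Provisioner(ABC):
    """Common interface that abstracts the underlying API syntax."""

    @abstractmethod
    def provision(self, resource: Resource) -> str: ...


class VmProvisioner(Provisioner):
    def provision(self, resource: Resource) -> str:
        # A real implementation would call a hypervisor or cloud API here.
        return f"created VM {resource.name}"


class IosSwitchProvisioner(Provisioner):
    def provision(self, resource: Resource) -> str:
        # A real implementation would push IOS configuration here.
        return f"configured switch {resource.name}"


# Registry mapping resource kinds to provisioning objects.
REGISTRY: dict[str, Provisioner] = {
    "vm": VmProvisioner(),
    "ios-switch": IosSwitchProvisioner(),
}


class DiyLinuxServerProvisioner(Provisioner):
    """Home-grown object wrapping only the small API subset we need."""

    def provision(self, resource: Resource) -> str:
        return f"imaged bare-metal server {resource.name}"


# DIY objects extend the pool without waiting on a vendor release.
REGISTRY["linux-server"] = DiyLinuxServerProvisioner()


def provision_all(resources: list[Resource]) -> list[str]:
    """One generic workflow step, re-used across every resource kind."""
    return [REGISTRY[r.kind].provision(r) for r in resources]
```

Because the workflow logic in `provision_all` is written once against the shared interface, adding a new resource type means adding one object to the registry rather than duplicating API calls across scripts, which is the re-use advantage described above.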
Enterprise IT teams have a real need to increase their agility in deploying infrastructure. In the rush to create and demonstrate that agility, don't forget to fully account for all the infrastructure that end users need, including physical infrastructure. Then choose the orchestration technology, and establish the automation practice, that will manage that infrastructure and allow you to create and maintain automation for the long haul.