Cloud infrastructure is the layer of software and hardware between your internal systems and the public cloud. Incorporating many different tools and solutions, this infrastructure is the essential system for a successful cloud computing deployment.
This layer of cloud infrastructure has grown as the public cloud has changed the structure of the data center and its hardware. Until recently, IT equipment and data center systems took a circle-the-wagons approach: everything sat behind a firewall and faced inward. The only users were inside the company and inside the firewall, as were the apps.
The cloud – and to some extent mobile – forces a break in that circle. Now businesses need to face outward, to AWS, Azure, Google Cloud, or other cloud providers. They need to open a secure data flow through their firewall that connects to the public cloud and keeps intruders out, while at the same time maintaining acceptable levels of performance.
The Internal Cloud Meets Cloud Infrastructure
As the cloud has grown, many enterprises have adopted an internal cloud model, often known as a private cloud. These private clouds don’t have the compute capacity of an Amazon or IBM, but they do have the flexibility to spin up virtual instances and keep them in-house.
The goal is to simplify the combination of the private cloud and public cloud, often known as a hybrid cloud. To help with this process, companies use technologies such as hyperconverged infrastructure (HCI), where a vendor provides everything needed to install a turnkey cloud environment. This allows businesses to turn their traditional on-premises data center into cloud-like infrastructure that can be managed from a single dashboard.
All services are delivered through the Infrastructure as a Service (IaaS) model. As such, everything is virtualized, so the cloud-based infrastructure can be set up easily, duplicated, replaced, and shut down.
Cloud Infrastructure Building Blocks
The components of cloud infrastructure are typically broken down into three main categories: compute, networking, and storage:
- Compute: Performs the basic computing for the cloud systems. This is almost always virtualized so the instance can be moved around.
- Networking: Usually commodity hardware running some kind of software-defined networking (SDN) software to manage cloud connections (see below for more information about networking).
- Storage: Usually a combination of hard disks and flash storage designed to move data back and forth between the public and private clouds.
Storage is where cloud infrastructure parts ways with traditional data center infrastructure. Cloud infrastructure usually uses locally attached storage instead of shared disk arrays on a storage area network. Note that cloud providers like AWS, Azure, and Google charge more for SSD storage than they do for hard disk storage.
Cloud storage also uses a distributed file system designed for different kinds of storage scenarios, such as object, big data, or block. The type of storage used depends on the tasks you need handled. Key point: cloud storage can scale up or down as needed.
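To make that concrete, object storage in the public cloud is reached through an API rather than mounted like a disk. Below is a minimal sketch using Python and the boto3 AWS SDK; the bucket name and object key are placeholders, and real use would need existing credentials and a bucket you own.

```python
import boto3

# Object storage is accessed over an API rather than mounted like a disk.
# The bucket name and key below are placeholders for illustration.
s3 = boto3.client("s3")

# Store an object; capacity grows with usage, with no volume to provision.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2024.csv",
    Body=b"col1,col2\n1,2\n",
)

# Retrieve it later from anywhere with network access and credentials.
response = s3.get_object(Bucket="example-bucket", Key="reports/2024.csv")
data = response["Body"].read()
print(data)
```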
Cloud infrastructure is the foundation on which the platform and applications sit. Connected devices such as laptops, phones, and servers move data in and out of this larger cloud system.
IaaS Benefits
IaaS is the delivery model built on cloud infrastructure. Cloud infrastructure is the bricks and mortar; IaaS is the store. IaaS makes it possible to rent those cloud infrastructure components – compute, storage, and networking – over the Internet from a public cloud provider.
The benefits of IaaS are numerous:
- Cuts upfront costs: IaaS eliminates the upfront capital expense of buying new server hardware, along with the weeks spent waiting for it to be delivered, installed, deployed, and provisioned. Instead, you can log into your AWS control panel and spin up a virtual instance in 15 minutes (see the sketch after this list).
- Scalable capacity: If you need more capacity, you can buy more just as quickly, and you can scale down if you find you don’t need as much as you allocated. And instead of the up-front capital expense of buying new equipment, IaaS follows a usage-based consumption model where you pay per use.
- Discounts: IaaS vendors also provide discounts for sustained usage, or if you make a large up-front purchase. The savings can be high, too, as much as 75%.
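As a rough sketch of what "spin up an instance in minutes" looks like in practice, the snippet below uses Python and the boto3 AWS SDK to launch a small virtual server and later terminate it when the capacity is no longer needed. The AMI ID is a placeholder, and the instance type and region are only examples.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small instance; the AMI ID here is a placeholder.
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = result["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# Later, when the capacity is no longer needed, shut it down;
# under pay-per-use billing, charges stop once it is terminated.
ec2.terminate_instances(InstanceIds=[instance_id])
```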
The next step up from IaaS is platform as a service (PaaS), which is built on the same IaaS platforms and hardware. But PaaS is expanded to offer more services, such as a complete development environment, including a Web server, tools, programming language, and database.
Why Use a Cloud Infrastructure?
In a traditional IT infrastructure, everything is tied to a server. Your storage is on a specific storage array. Apps run on dedicated physical servers. If anything goes down, your work comes to a halt.
In a cloud infrastructure, because everything is virtualized, nothing is tied to a particular physical server. This applies to services as well as apps. Do you think when you log onto Gmail you are logging into the same physical server every time? No, it’s a virtualized server at any one of dozens of Google data centers.
The same applies to your AWS instances and your internal services, should you adopt a cloud infrastructure model for your internal infrastructure. By virtualizing storage, compute, and networking components, you can build on whatever resources are available and not heavily utilized. For example, you can launch an application on a virtual server running on hardware with low utilization. Or you can deploy a network connection on a switch with low traffic.
With cloud infrastructure, DevOps teams can build their apps so they can be deployed programmatically. They can tell an app to look for a low-utilization server or to deploy as close to the data store as possible. You can’t do that in a traditional IT environment.
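One simple sketch of "deploy as close to the data store as possible": look up the region where the data already lives and launch the compute there. This uses real boto3 calls, but the bucket name is a placeholder and the placement logic is deliberately minimal.

```python
import boto3

# Hypothetical deployment step: find the region where the data bucket
# lives, then point the compute client at that same region so the app
# runs next to its data. "example-data-bucket" is a placeholder name.
bucket = "example-data-bucket"
s3 = boto3.client("s3")

# get_bucket_location returns None for us-east-1, hence the fallback.
region = s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1"

ec2 = boto3.client("ec2", region_name=region)
# ec2.run_instances(...) would then launch the app alongside its data.
print(f"Deploying alongside data in {region}")
```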
Big Networking Changes
Network technology has created a major change in the relationship between cloud infrastructure and traditional IT. The longtime standard in WAN communication, Multiprotocol Label Switching (MPLS), was designed for private links between your own sites, not for reaching the public cloud. It doesn’t handle high-bandwidth apps very well and is easily overloaded. Plus, MPLS itself does not encrypt data, which raises obvious problems when traffic crosses the public Internet.
SD-WAN is made for the public Internet and lets you use a VPN to encrypt traffic. It uses intelligent routing to manage traffic to avoid bottlenecks, and most of the SD-WAN vendors have built their own private networks to supplement the public Internet, so you don’t have to compete with Netflix traffic.
Because it is built for the public Internet, one of the biggest advantages to SD-WAN is security. SD-WAN offers end-to-end encryption across the entire network, including the Internet, and all devices and endpoints are completely authenticated, thanks to software-defined security.
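The "intelligent routing" an SD-WAN controller performs can be thought of as continuously measuring each available path and steering traffic onto the best one. The sketch below is a simplified illustration of that idea, not any real SD-WAN product's API; the link names and measurements are made up.

```python
# Simplified illustration of SD-WAN style path selection: measure each
# candidate link and steer traffic to the lowest-latency path that still
# meets a packet-loss threshold. All figures here are hypothetical.
links = [
    {"name": "broadband-1", "latency_ms": 42, "loss_pct": 0.1},
    {"name": "broadband-2", "latency_ms": 67, "loss_pct": 0.0},
    {"name": "lte-backup",  "latency_ms": 95, "loss_pct": 1.5},
]

def best_path(links, max_loss_pct=1.0):
    usable = [l for l in links if l["loss_pct"] <= max_loss_pct]
    return min(usable, key=lambda l: l["latency_ms"]) if usable else None

print(best_path(links)["name"])  # -> broadband-1
```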
Cloud Infrastructure Challenges
Cloud infrastructure in the public cloud is not a flawless solution. There can be issues, and they can be serious. Note that these issues are specific to the public cloud and should not affect any private cloud infrastructure you deploy internally.
Noisy neighbors
The first problem is the issue of the noisy neighbor. When you are running a virtual instance, your VM is running on an AWS/Azure/IBM/Google server in a data center. That physical server is likely a two-socket rack mount with two Intel Xeons and a lot of memory. If you allocate four cores on a 28-core Xeon, the other 24 are going to be rented out to someone else, and you have no way of knowing their identity.
The result could be a neighboring app that impacts your performance, be it in compute, memory, or the network. A common practice among cloud users is to spin up a bunch of virtual machines, run benchmarks to see which perform best, and shut down the ones they don’t need.
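A rough sketch of that practice with Python and boto3: launch several identical instances, benchmark each one, keep the fastest, and terminate the rest. The AMI ID is a placeholder and the benchmark function is a stand-in for whatever workload actually matters to you.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch several identical instances; the AMI ID is a placeholder.
fleet = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.xlarge",
    MinCount=4,
    MaxCount=4,
)
instance_ids = [i["InstanceId"] for i in fleet["Instances"]]

def benchmark(instance_id):
    # Placeholder: run your real workload against the instance and
    # return a score (e.g., requests per second). Higher is better.
    return 0.0

scores = {iid: benchmark(iid) for iid in instance_ids}
best = max(scores, key=scores.get)

# Keep the best performer; terminate the ones hurt by noisy neighbors.
ec2.terminate_instances(InstanceIds=[iid for iid in instance_ids if iid != best])
```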
The solution to this is what’s called bare metal cloud. In a bare metal environment, the CPU is not virtualized. That 28-core Xeon is all yours. No noisy neighbors. No OS, either. Bare metal solutions mean you bring everything, from the OS stack on up.
The bare metal solution is designed for specific environments where performance is critical, or if you want access to custom chips. For example, in a virtualized environment, you cannot access the networking chip. In bare metal you can, so you can do custom networking, like packet inspection.
Latency
The other issue is latency. Public cloud performance is not consistent, except perhaps at night when usage plummets. If you have an application that is sensitive to issues of latency, you might have a costly problem.
One solution is to change the location of your app. You might be connecting to a data center on the other side of the country. You can request a data center that is physically closer to you, to reduce the lag. Of course, that might cost you more, so you have to weigh the benefits.
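One crude way to check whether a closer region would actually help is to measure round-trip time to a few candidate regions from where your users sit. The sketch below times an HTTPS request to each region's public EC2 endpoint; it is a quick probe under the assumption that a plain request is a reasonable proxy for latency, not a formal benchmark.

```python
import time
import urllib.request

# Crude latency probe: time a request to each region's public EC2
# endpoint and see which responds fastest from your location.
regions = ["us-east-1", "us-west-2", "eu-west-1"]

for region in regions:
    url = f"https://ec2.{region}.amazonaws.com/"
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=5).read()
    except Exception:
        pass  # an error response still tells us the round-trip time
    print(f"{region}: {(time.monotonic() - start) * 1000:.0f} ms")
```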
You can also connect directly to the cloud provider – with AWS Direct Connect, for example. Yet that is an even pricier solution, since you are now using the provider’s own network.