You have to try pretty hard not to hear about containers these days. They are the new “it” technology, garnering incredible buzz on Twitter and in online publications. Since they are often discussed in conjunction with virtualization, there is potential for confusion or comparison, when in fact the two are complementary.
The two are not competitors that force an either/or decision. You know what a virtual machine is; containers, by contrast, are simply a better packaging standard for Linux apps, one that works across all of the different Linux distros.
When you build a Linux app, you need a different package for each of the many flavors of Linux: Red Hat, SUSE, Ubuntu, Debian, and so forth. Each of these packaging systems is arcane and limited in what it can do, and few people have the skill set for every platform. Most likely a Red Hat expert will only know how to package an app for Red Hat.
One of the issues that makes package managers complex is dependencies. For example, say your app needs OpenSSL. Red Hat’s package manager might ship one version of OpenSSL while Ubuntu ships a different one. That means OS checks, dependency juggling, and other headaches.
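Containers sidestep this by baking the dependency into the image itself. As a rough sketch (the base image, package, and app names here are illustrative, not from the article), a Dockerfile pins a specific distro and library version, so the same artifact runs unchanged on any host:

```dockerfile
# Illustrative only: image, package, and app names are hypothetical.
FROM ubuntu:14.04

# The SSL library is installed inside the image, so the host's
# distro and its library versions no longer matter.
RUN apt-get update && apt-get install -y --no-install-recommends \
        openssl \
    && rm -rf /var/lib/apt/lists/*

COPY ./myapp /usr/local/bin/myapp
CMD ["myapp"]
```

Built once with `docker build -t myapp .`, the resulting image runs identically on a Red Hat, SUSE, or Debian host, with no per-distro packaging work.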
“Docker and containers are currently white hot and wildly popular,” said Jay Lyman, research manager, cloud platform at 451 Research. “It serves as a standard where there is a lack of standards for how to deploy in the cloud. It also offers a new level of simplicity, has a new user interface. The big driver is you get some isolation from apps, services, and workloads, without as much bleeding and blending between them.”
Right Place, Right Time
Containers are not new; Unix has had them for years. But Docker, the startup with $55 million in venture money behind it, has enjoyed great success with its container technology thanks to perfect timing. The market was ready for the concept right when Docker began to ship. It now enjoys partnerships with Amazon, Cisco, Google, VMware, and even Microsoft, among other heavyweights in the industry.
In 2014, Docker teamed with Canonical, Google, Red Hat, and Parallels to create libcontainer, a critical standardized open-source library that lets containers work with Linux namespaces and control groups without needing administrator access, and that offers a consistent interface across all major Linux versions.
This allowed many containers to run within a single virtual machine. Previously, admins usually walled apps off from each other by putting one app per virtual machine, which meant spinning up a VM for every app. Now multiple apps can run in one VM, so a single machine no longer needs hundreds of VMs.
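The isolation that makes this possible comes from the kernel namespaces that libcontainer wraps. You can see them on any modern Linux box, no container runtime required; a container is essentially a process group given its own private set of these:

```shell
# Every process runs inside a set of kernel namespaces; a container is
# just a process group with its own private set of them.
ls /proc/self/ns            # pid, net, mnt, uts, ipc, ... one entry per namespace
readlink /proc/self/ns/pid  # prints an identifier like pid:[4026531836]
```

Two processes in the same container share these namespace identifiers; processes in different containers see different ones, which is what keeps their views of PIDs, mounts, and the network separate.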
“The problem with VMs is they have a lot more overhead,” said Kelsey Hightower, senior engineer and chief advocate at CoreOS, which makes its own container product called Rocket. “There is a huge slowdown depending on the workload. That VM has a high cost because you have a middleman to get to your hardware. You’re probably wasting 15% to 20% of resources for VMs when all they wanted was app containment.”
You know an idea is convincing when Microsoft joins the party early. The company is partnering with Docker to support it on Azure, and it is also talking about integrating Docker containers into Windows Server. That speaks to the level of interest around containers. Lyman believes Microsoft will do its own work internally on a container spec of its own, but, given everything it is doing with Docker, that spec will be able to interface with Docker containers without vendor lock-in.
The Big Weakness
The other shoe to drop when it comes to containers is security. They don’t have much of it, which is a huge problem in a public cloud environment.
“Containers have critical limitations in areas like OS support, visibility, risk mitigation, administration, and orchestration. This is especially true for the newer brands of containerization which do not (yet) have a significant management and security ecosystem, in contrast to more mature solutions like Solaris containers,” said Andi Mann, vice president and a member of the Office of the CTO at CA.
The problem is that containers share the same hooks into the kernel, said Hightower. “That’s a problem because if there are any vulnerabilities in that kernel when doing multitenancy, and if I know of an exploit the others don’t know about, I have a way to get into your containers. Containers have not yet demonstrated that they can deliver the same secure boundaries that a VM still has.”
Hightower said that just a few months ago someone demonstrated to Docker how to escape a container and gain full access to the host server, so no one is ready to say you can get rid of virtual machines just yet, especially in a public cloud environment like Amazon’s or Google’s.
“You don’t want to have untrusted actors on the same server. A single organization can still get benefit, but a cloud provider might not want to spin their whole business on containers with no VMs to isolate their business,” he said.
The second area of weakness for containers is that they are not yet proven at scale. “Containers face many challenges to scale. It’s one thing to do a Web app, but it’s another thing to do a multitenant, complex enterprise app with a lot of data of interest,” said Lyman.
But that can be turned into a positive as well. “Containers are a really good match for microservices, where you chop up the app into chunks for different teams, so everyone works on their specialty area,” said Hightower. “Containers are good for that use case.”
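To sketch that microservices fit (the service and image names below are hypothetical, not from the article): each team’s component ships as its own container, and the containers are composed into one app. A Docker Compose file of the era might look like this:

```yaml
# docker-compose.yml -- illustrative service and image names.
# Each team owns one service; each service ships as its own container.
version: "2"
services:
  web:
    image: example/web-frontend
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: example/order-api
    environment:
      - DB_HOST=db
  db:
    image: postgres:9.4
```

Because each piece is an independent container, a team can rebuild and redeploy its own service without touching the others, which is exactly the division of labor Hightower describes.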
Because they have different strengths, weaknesses, and functions, virtual machines and containers should not be viewed as competitors but as complementary.
“Containers and VMs are destined to be close companions in the cloud of clouds. Just as one cloud is not enough, so too one virtualization technology is not enough. Each technology provides a different response to different use cases, and in many cases they work together to solve those challenges,” said Mann.
“Containers are especially good for early development, for example, because the speed of manual provisioning/deprovisioning greatly outweighs the improved manageability of a virtual machine in an environment where everything is new and rapidly changing,” he added.
The two are very much complementary to each other, adds Hightower. “Now you need fewer VMs and probably can go back to a bare metal server with no virtualization. If you are good at VMs, you can use containers for everything.”
The big challenge, then, is to address the security problem. Hightower notes there are already some hardening options out there, such as SELinux, shipped with Red Hat and offering government-grade security, and Ubuntu’s AppArmor, which binds access control attributes to applications rather than users. But more is needed to secure the kernel and keep unwanted intruders out of other tenants’ workloads in a multitenant environment.
The next step is large-scale orchestration, said Lyman, but he thinks that will happen in time. He bases that on discussions with vendors, investors, and end users, which is unusual because end users are typically laggards when it comes to new technologies. “I saw it with DevOps and PaaS. With containers, end users are asking questions alongside developers and investors,” he said.
Containers have drawn interest not only from startups like Docker, CoreOS, and Shippable but also from big names: Google with Kubernetes, IBM supporting Docker on its Bluemix PaaS, Amazon supporting containers on EC2 (Elastic Compute Cloud), Red Hat with its OpenShift PaaS, and HP and Microsoft with efforts of their own.
“It’s similar to how OpenStack brought together startup specialists and megavendors. You’re only going to hear more about this from bigger vendors and see more different container technologies, but right now everyone is putting most bets on Docker and Docker containers,” said Lyman.