By Irshad Raihan
In Stanley Kubrick’s classic science fiction film 2001: A Space Odyssey, a group of prehistoric apes looks in wonder and confusion at a strange object that has been placed on Earth by an alien intelligence. Large, black, and silent, the object appears to bestow some form of advanced intelligence upon the apes, beyond what they would otherwise be capable of attaining on their own.
Thousands of years later, astronauts discover a similar object on the moon and dub it “the monolith.” Just as it did for the apes in the film’s prologue, the monolith ultimately helps humankind take a momentous evolutionary leap beyond everything it has ever known.
I know what you’re thinking: what in the world does this have to do with storage? Well, aside from the fact that many storage administrators and software developers are undoubtedly sci-fi fans, over the past couple of years they’ve come in contact with their own monoliths — Linux containers.
Much like the famous jump cut in 2001 of the bone turning into a spaceship, these self-contained development environments have launched storage out of the stone age of appliances and practices and into a whole new world. They’ve opened up new possibilities for storage administrators seeking greater flexibility, agility, and cost-effectiveness. In doing so, they’re expanding our perception of how to better manage application storage.
Linux containers are upending the way we approach storage
Deployments of Linux containers continue to grow. Enterprises are drawn to the agility, faster innovation, and digital transformation that containers promise. At the rate they’re going, it’s not a stretch to suggest that containers may become IT’s default deployment platform in the near future.
As container usage has grown, IT shops have moved toward a microservices-centric development philosophy. Microservices let developers make incremental changes to small pieces of code rather than to large blocks, which improves both agility and application management. Containers are well suited to developing and deploying microservices. When containers were first introduced, they were meant to be stateless, transient, and ephemeral, because the microservices they were designed to house were themselves stateless, transient, and ephemeral.
However, as most software architects are quickly finding out, most enterprise applications need to persist their state even when running in transient containers. If a container “dies” or gets rebooted, any state held inside it is lost unless that state has been persisted outside the container and its host.
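To make the problem concrete, here is a minimal sketch using the Python Kubernetes client. The pod, volume, and image names are purely illustrative, not anything prescribed by a particular product: the container writes its state to an emptyDir volume, which lives and dies with the pod, so a restart on another host loses everything.

```python
# Minimal sketch: a pod whose only storage is an ephemeral emptyDir volume.
# Names and images here are illustrative assumptions, not real products.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="stateless-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/demo-app:latest",  # hypothetical image
                volume_mounts=[client.V1VolumeMount(name="scratch", mount_path="/var/lib/app")],
            )
        ],
        # emptyDir is tied to the pod's lifetime: delete or reschedule the pod
        # and everything written under /var/lib/app disappears with it.
        volumes=[client.V1Volume(name="scratch", empty_dir=client.V1EmptyDirVolumeSource())],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```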
There are a number of ways to attack that problem. Bolting on independent storage clusters is antithetical to the container model, because such clusters don’t offer the agility and scale that application containers and microservices demand. The last thing a developer striving for greater efficiency and innovation wants is a solution that exists outside of the container environment.
Unfortunately, when containers were first conceived, storage wasn’t a built-in function. For containers to truly go mainstream, I believe there needs to be a distributed, durable, persistent storage layer with all the bells and whistles expected of an enterprise-grade storage solution, including security functionality. But just how do we go about addressing that conundrum?
Containerizing the storage platform to accelerate development, deployment, innovation — and control
The real solution, in my view, is to containerize the storage platform itself by serving storage as its own microservice from inside its own container. Storage containers can then run alongside compute containers on the same set of hosts and be provisioned dynamically by a mature, robust orchestration engine such as Kubernetes.
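As a sketch of what that dynamic provisioning can look like in practice, again using the Python Kubernetes client: an administrator registers a StorageClass that points at the containerized storage service, and developers simply request capacity through a PersistentVolumeClaim. The provisioner string, class name, and parameters below are placeholders I am assuming for illustration, not any specific vendor’s interface.

```python
# Sketch: a StorageClass pointing at a containerized, software-defined storage
# service, plus a PersistentVolumeClaim that triggers dynamic provisioning.
# The provisioner string, names, and parameters are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
storage_api = client.StorageV1Api()
core = client.CoreV1Api()

sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="container-native"),
    provisioner="sds.example.com/provisioner",  # hypothetical SDS provisioner
    parameters={"replication": "3"},            # illustrative parameter only
)
storage_api.create_storage_class(body=sc)

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="container-native",
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
# Kubernetes asks the storage containers to carve out a 5Gi volume on demand.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```

The developer never talks to a storage appliance: the claim is just another Kubernetes object, and the orchestrator satisfies it from the storage containers running on the same hosts.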
The concept is to serve storage and applications from the same hosts, creating a single control plane where applications and storage are managed and orchestrated together. They are not merely adjacent; they are literally converged. Suddenly there is no need for independent storage appliances, the demand for persistent storage within the container platform is met, and everything can be managed through a single management system. That makes life significantly easier for developers and yields a much more efficient solution, one that can help accelerate development, deployment, and innovation, all with persistent storage.
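To illustrate the single-control-plane idea, here is a hedged continuation of the sketch above: the application pod simply mounts the claim, so both the workload and its storage are declared to, and scheduled by, the same Kubernetes API. Names and images remain illustrative assumptions.

```python
# Sketch (continuing the example above): the application pod mounts the
# dynamically provisioned claim, so storage and compute are requested,
# scheduled, and managed through the same Kubernetes control plane.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="stateful-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/demo-app:latest",  # hypothetical image
                volume_mounts=[client.V1VolumeMount(name="data", mount_path="/var/lib/app")],
            )
        ],
        volumes=[
            client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="app-data"  # the claim created in the previous sketch
                ),
            )
        ],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```

If this pod dies or is rescheduled, the data on the claimed volume survives and is reattached wherever the pod lands next.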
Software-defined storage is the key to containerization
This container-native storage approach can be achieved with software-defined storage (SDS). Unlike traditional storage appliances, which do not offer container-native storage because they were never designed to be used with containers in the first place, SDS gives developers the flexibility to run storage and application containers as a hyper-converged construct, making it easier to meet the storage requirements of individual server nodes.
When exploring SDS options, it’s important to consider the critical combination of functionality and integrated support. Look for solutions that provide native integration and unified orchestration for applications and storage. Having a single point of support for your entire container stack, from the operating system up through storage and the application development layer, should be a key purchasing consideration, especially in these early days of container adoption.
These solutions typically come from the same world containers themselves came from: the open source world. It’s the place that provides the flexibility to combine resources in ways that give developers granular control over containerized storage.
Another good indicator of which storage vendors are leading the charge in containerizing the storage platform is their contributions to the Docker and Kubernetes open source projects. Find out which storage vendors have skin in the game: who has written volume plugins for Kubernetes, and who has added sophisticated features such as dynamic provisioning, storage classes, and volume security.
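Of those features, volume security in Kubernetes largely comes down to controlling which user and group IDs a pod uses when it touches a shared volume. A minimal, hedged sketch of one common mechanism, a pod-level fsGroup, follows; the names, image, and IDs are illustrative assumptions.

```python
# Sketch: volume security via a pod-level fsGroup. Kubernetes applies the
# supplemental group 2000 to the mounted volume so the non-root container
# user can read and write it. Names, image, and IDs are illustrative.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="secure-volume-demo"),
    spec=client.V1PodSpec(
        security_context=client.V1PodSecurityContext(fs_group=2000, run_as_user=1000),
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/demo-app:latest",  # hypothetical image
                volume_mounts=[client.V1VolumeMount(name="data", mount_path="/var/lib/app")],
            )
        ],
        volumes=[
            client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="app-data"  # claim from the earlier sketch
                ),
            )
        ],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```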
Just as the monolith in 2001 ultimately led humankind to its next phase, containers are taking storage down paths developers had not even considered a few short years ago. Now, as containers continue to make waves in mainstream enterprises, developers will need not only to control that storage, but also to make it persistent without relying on outdated storage methodologies. SDS solutions that deliver true container-native storage will be able to take developers where they need to go.
About the author: Irshad Raihan is Senior Principal, Product Marketing at Red Hat