Thursday, June 13, 2024

Top Software-Defined Data Center (SDDC) Trends


Hardware has always defined the data center. You looked into a big room and there stood aisle after aisle of servers, storage, and networking equipment. Along the walls were massive cooling systems and power management hardware, such as switches, batteries, and more. 

For resilience and disaster recovery (DR), the solution was simple: build a mirror image of that data center at another location by buying a duplicate set of equipment and installing it in the new facility. Of course, there was plenty of software too. But the hardware defined the existence of the data center. 

But that may be changing as the software-defined movement gathers momentum. The basic idea is to decouple the software from the underlying hardware. Instead of a vendor building a storage area network (SAN) array with proprietary software that only runs on that system or another vendor building a switch with secret-sauce software inside, the idea is to have the software able to run on any hardware. There are so many software-defined elements that people are now talking about entire software-defined data centers (SDDCs). 

Here are the top trends in the software-defined data center market: 

Software-defined flash 

We’ve had software-defined storage (SDS), software-defined compute, and software-defined networking (SDN). And now we have software-defined flash. 

To reach efficiency at scale, hyperscale cloud and data center storage needs more from flash devices than the legacy hard disk drive (HDD) protocols they currently speak can deliver. The Linux Foundation’s Software-Enabled Flash Community Project has therefore developed a software-defined flash API. Developers can use it to tailor flash storage to specific data center, application, and workload requirements. 
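What “customizing flash to the workload” might look like from the host side can be sketched generically. This is purely illustrative: the real Software-Enabled Flash API is a C library, and none of the names below come from it; the workload kinds and tuning numbers are invented for the example.

```python
# Hypothetical illustration of workload-aware flash configuration.
# None of these names or values come from the actual Software-Enabled
# Flash API; they only show the idea of per-workload flash tuning.

from dataclasses import dataclass

@dataclass
class FlashDomain:
    name: str
    overprovision_pct: int   # spare capacity reserved for garbage collection
    isolation: bool          # dedicate flash dies so workloads don't contend

def domain_for_workload(kind: str) -> FlashDomain:
    """Pick flash tuning per workload instead of one HDD-style profile."""
    if kind == "write-heavy-log":
        # heavy writes: more overprovisioning, isolated dies
        return FlashDomain("logs", overprovision_pct=28, isolation=True)
    if kind == "read-mostly-db":
        # mostly reads: little spare capacity needed
        return FlashDomain("db", overprovision_pct=7, isolation=False)
    return FlashDomain("default", overprovision_pct=10, isolation=False)

print(domain_for_workload("write-heavy-log"))
# FlashDomain(name='logs', overprovision_pct=28, isolation=True)
```

The point is only the shape of the interface: the host, not the drive vendor, decides how the flash behaves for each workload.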

Kioxia, for example, has introduced software-enabled flash technology and sample hardware based on PCIe and NVMe. Decoupling flash storage from legacy HDD protocols allows flash to realize its full potential as a storage medium. 

“Software-enabled flash technology fundamentally redefines the relationship between the host and solid-state storage,” said Eric Ries, SVP, memory storage strategy division, Kioxia America. 


Orchestration 

Magic begins to happen once you decouple physical servers from the software they host, storage arrays from the many types of software they can deploy, and networking software from the underlying switches, routers, and other networking gear. 

But complexity emerges, too. What is needed is a way to orchestrate the many elements, so the data center “symphony” plays in the same key, keeps time, and follows what the conductor requires. 

“With the increased complexity and scale of data centers, the industry must move beyond automating the configuration of infrastructure and workloads to a new paradigm built around orchestration,” said Rick Taylor, CTO of Ori. 

“We must think about the desired state of services and leverage smart software to plan and deploy instances and their connectivity.” 
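The paradigm Taylor describes, declaring a desired state and letting software converge the infrastructure toward it, can be sketched as a reconciliation loop, the pattern at the heart of orchestrators such as Kubernetes. The service names and the action tuples here are our own illustration, not any vendor’s API:

```python
# Minimal sketch of desired-state reconciliation. All names are
# hypothetical; only the pattern (diff desired vs. actual, emit
# corrective actions) reflects how orchestrators work.

desired_state = {"web": 3, "cache": 2}   # service -> replicas we want
actual_state = {"web": 1}                # what is currently running

def reconcile(desired, actual):
    """Return the actions needed to converge actual toward desired."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif want < have:
            actions.append(("stop", service, have - want))
    # anything running that is no longer desired gets stopped
    for service, have in actual.items():
        if service not in desired:
            actions.append(("stop", service, have))
    return actions

print(reconcile(desired_state, actual_state))
# [('start', 'web', 2), ('start', 'cache', 2)]
```

A real orchestrator runs this loop continuously, so the operator only ever edits the desired state and never touches individual machines.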

Already there? 

Many think the SDDC is still gradually emerging. 

But Ugur Tigli, CTO at MinIO, believes we are already there, thanks to containerization and, especially, Kubernetes.

“The modern data center is already software-defined and the colossal success of Kubernetes only ensures that it will remain that way,” Tigli said. 

“With software-defined infrastructure, you gain the ability to dynamically provision, operate, and maintain applications and services. Once infrastructure is virtualized and software-defined, automation becomes a force multiplier and the only way to achieve elasticity and scalability.”   
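Tigli’s point about automation as the route to elasticity can be made concrete with one line of scaling arithmetic. The formula below has the same shape as the rule the Kubernetes Horizontal Pod Autoscaler documents (desired = ceil(current × currentMetric / targetMetric)); the function name and the 60% setpoint are our own illustration:

```python
import math

def target_replicas(current: int, cpu_utilization: float,
                    setpoint: float = 0.6) -> int:
    """Replica count that brings average CPU utilization back to the setpoint.

    Same shape as the Kubernetes HPA rule:
        desired = ceil(current * currentMetric / targetMetric)
    The 0.6 (60% CPU) setpoint is an illustrative choice.
    """
    return max(1, math.ceil(current * cpu_utilization / setpoint))

# Four replicas running flat out against a 50% setpoint: double them.
print(target_replicas(4, 1.0, setpoint=0.5))  # 8
```

Because the infrastructure is software-defined, acting on that number is an API call rather than a purchase order, which is exactly why automation becomes the force multiplier Tigli describes.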

Appliance reliance 

Appliances have sprung up over the past two decades to take care of a multitude of data center functions. 

They are used for deduplication, compression, backup, and a host of other functions. There are even massive appliances from the likes of Oracle that package all the compute, networking, and storage hardware in a box along with Oracle software and databases, all tuned and optimized to be the environment for that application or database. 

But there is a problem. These appliances tend to go against the software-defined paradigm. They generally have proprietary software inside. Yet, data centers, and IT in general, are riddled with them, as they have worked so well. 

“There is a major challenge that existing infrastructure vendors face – you can’t containerize an appliance,” said MinIO’s Tigli. 

“Every appliance maker is frantically trying to separate their software from their hardware, because the cloud-native data center is an extinction event for them.”  

You will still need CPU, networks, and drives, Tigli said, but everything else is software and that software needs to run on anything. 

Look at the cloud today: the diversity of CPU options includes Intel, AMD, Nvidia, TPUs, and Graviton, to name a few. Even private clouds present considerable diversity, with commodity hardware from Supermicro, Dell Technologies, HPE, Seagate, and Western Digital offering different price and performance configurations. 

“The result is that we live in a data center world that is software-defined and increasingly open,” Tigli said. 

“Only through open-source software can the developer achieve the freedom required to understand the software in the context of heterogeneous hardware.”  
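Software that must “run on anything” has to discover the hardware it lands on at runtime rather than being built for one box. A minimal sketch using only the Python standard library (the thread-count thresholds are illustrative, not anyone’s recommendation):

```python
# Sketch: software-defined code adapting to heterogeneous hardware.
# Uses only the standard library; the tuning thresholds are invented
# for illustration.
import os
import platform

arch = platform.machine()   # e.g. 'x86_64' on Intel/AMD, 'aarch64' on Graviton
cores = os.cpu_count() or 1

# One binary, per-architecture tuning, instead of per-vendor builds.
if arch in ("aarch64", "arm64"):
    io_threads = min(cores, 8)
else:
    io_threads = min(cores, 16)

print(f"arch={arch} cores={cores} io_threads={io_threads}")
```

This is the inverse of the appliance model: instead of software welded to known hardware, the software interrogates whatever commodity hardware it finds and configures itself accordingly.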
