At the Open Compute Project Summit this week, startup Vapor announced a new open-source specification designed to help manage data center environments.
The new technology, Open DCRE (Data Center Runtime Environment), is an open-source platform for data center workload automation. Open DCRE can be used to monitor environmental characteristics and power usage in a data center. On top of Open DCRE sits Vapor CORE (Core Operating Runtime Environment), which can be used to drive Big Data analytics about data center operations.
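To make the monitoring idea concrete, here is a minimal sketch of the kind of check a platform like Open DCRE enables. The data shapes, field names, and thresholds are invented for illustration and are not Open DCRE's actual API:

```python
# Hypothetical sketch: given per-rack sensor readings of the sort a data
# center runtime environment exposes, flag racks whose temperature or
# power draw exceeds configured thresholds.

def flag_hot_racks(readings, max_temp_c=35.0, max_power_w=5000.0):
    """Return the sorted rack IDs whose readings break either threshold."""
    flagged = []
    for rack_id, data in readings.items():
        if data["temperature_c"] > max_temp_c or data["power_w"] > max_power_w:
            flagged.append(rack_id)
    return sorted(flagged)

readings = {
    "rack-01": {"temperature_c": 28.5, "power_w": 4200.0},
    "rack-02": {"temperature_c": 41.0, "power_w": 3900.0},  # running too hot
    "rack-03": {"temperature_c": 30.2, "power_w": 6100.0},  # drawing too much power
}
print(flag_hot_racks(readings))  # ['rack-02', 'rack-03']
```

In a real deployment the readings would come from the platform's sensor endpoints rather than a hard-coded dictionary, and the flagged racks would feed an alerting or automation layer.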
Cole Crawford, CEO of Vapor, told Datamation that a key goal of Open DCRE and CORE is to truly enable Software Defined Data Centers. In his view, much of the discussion to date about the Software Defined Data Center has focused only on the software and hasn’t taken hardware into consideration.
“Vapor is now here to help drive the intelligence layer into the data center,” Crawford said.
Open DCRE is being licensed under the OCPHL-R license, which Crawford said is intended to be the hardware equivalent of the GPL. The GNU General Public License (GPL) is the open-source license under which Linux is made available.
“I’m a big fan of the GPL license for software, but we felt that we could create a simple community based reciprocal license specifically for hardware that explicitly called out the freedoms you receive with regard to patent non-asserts,” Crawford said.
A data center management platform that looks at the physical attributes of the data center is not a new idea. JouleX, a company acquired by Cisco for $107 million in 2013 and since rebranded as the Cisco Energy Management Suite, had the same idea.
“First, I think that for every closed source solution there should be an open source solution,” Crawford said. “There may be neat features that Joulex had introduced to the market but through a community based ecosystem I’m positive there will be additional innovations on top of Open DCRE.”
In Crawford’s view, the CORE platform also takes a different approach to saving money in the data center. Crawford explained that CORE will take all the infrastructure available to an administrator across all of their clouds (public / private / managed) and offer interfaces that introduce critical-environment capabilities, so workloads can manage themselves across virtual classes of service. He added that through this interface, workload orchestration and workload automation engines will be able to re-provision and move workloads to achieve an optimal run state.
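The re-provisioning idea Crawford describes can be sketched as a simple placement pass. Everything here (cloud names, fields, the greedy cost heuristic) is an invented illustration, not Vapor's CORE implementation:

```python
# Hypothetical sketch: assign each workload to the cheapest cloud
# (public / private / managed) that still has capacity and supports the
# workload's service class -- the kind of decision an orchestration
# engine could re-run as conditions change.

def place_workloads(workloads, clouds):
    """Return a mapping of workload name -> chosen cloud name (or None)."""
    placement = {}
    for wl in sorted(workloads, key=lambda w: -w["cpus"]):  # largest first
        candidates = [
            c for c in clouds
            if c["free_cpus"] >= wl["cpus"] and wl["service_class"] in c["classes"]
        ]
        if not candidates:
            placement[wl["name"]] = None  # nothing suitable right now
            continue
        best = min(candidates, key=lambda c: c["cost_per_cpu"])
        best["free_cpus"] -= wl["cpus"]  # reserve the capacity
        placement[wl["name"]] = best["name"]
    return placement

clouds = [
    {"name": "private-dc", "free_cpus": 16, "cost_per_cpu": 1.0,
     "classes": {"gold", "silver"}},
    {"name": "public-a", "free_cpus": 64, "cost_per_cpu": 2.5,
     "classes": {"silver", "bronze"}},
]
workloads = [
    {"name": "db", "cpus": 12, "service_class": "gold"},
    {"name": "web", "cpus": 8, "service_class": "silver"},
]
print(place_workloads(workloads, clouds))
# {'db': 'private-dc', 'web': 'public-a'}
```

Here the database lands in the cheaper private data center, and the web tier spills to the public cloud once private capacity runs out, which is the hybrid, distributed optimization the article is describing.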
“This isn’t just about saving energy,” Crawford said. “This is about making optimal use of all of your data centers in a highly distributed, hybrid world.”
Sean Michael Kerner is a senior editor at Datamation and InternetNews.com. Follow him on Twitter @TechJournalist
Photo courtesy of Shutterstock.