
The Data Center of the Future


Data centers already account for over half of corporate IT budgets, and META Group projects a 70% increase in data center budgets over the next decade. But where that money is going has changed drastically. While servers and storage together accounted for 20% of 2002 expenditures, spending on both is expected to drop in relative and absolute terms by 2012. Software spending, meanwhile, is expected to more than double over the same period.

But all is not well with data centers. At 25%, utilization remains low, and costs cannot continue to rise at that rate without jeopardizing the rest of the IT budget. Still, not all the news is bad: there are several emerging trends that will permit data centers to boost performance while keeping costs under control.

The Rise of Linux

While Unix is far from dead, Linux is quickly becoming the operating system of choice in data centers.

“In 2004, Linux adoption will explode in every data center,” said Ted Schadler, an analyst for Forrester Research.

Part of this is due to the low cost of the operating system itself, but that is not the only factor. Operating systems actually don’t account for that large a percentage of overall costs. In fact, companies are willing to pay for Linux in order to get a supported, enterprise version. According to Gartner, the Linux market will exceed $9 billion in 2007.

But besides lower cost, Linux offers the data center a wide variety of application choices. It scales down to cell phones and runs on the IBM z90 mainframe. It runs on low-end web servers and eight-way, mid-sized boxes. It runs on laptops and workstations, and it is the operating system of choice for clusters, which now comprise over 40% of the world's top 500 supercomputers. No other operating system provides this same range of options in the enterprise. By standardizing on Linux, an organization can reduce the number of different skill sets it needs in the data center.

Smaller Servers

Once upon a time, "big iron" dominated the data center. While mainframes like IBM's zSeries and the HP Superdome are experiencing a bit of a renaissance, there is also a trend toward using the smallest servers possible. Currently, that means blade servers. Though they have only been on the market for a short while, IDC reports that sales of these micro-servers surpassed $100 million in 2003 and projects they will account for $3.7 billion in 2006.

Blade servers offer companies low-cost scalability, since it is easy to assign a batch of these servers to a particular application rather than having to buy a more expensive server that then sits underutilized. Since blade servers stuff a dozen or more servers into a single box, they drastically cut the infrastructure costs for racks, cabling and cooling. Then there is the ease of support: when one goes down, it is a simple matter to swap out the server card and let the system rebuild itself automatically.

One other factor contributing to blade servers' growth is Linux. Because it is a lightweight operating system that doesn't consume much disk space or processor overhead, it is well suited to these small servers.

Virtualization

Companies have been moving toward storage virtualization for years. Now they are looking to do the same with the rest of their IT resources. Virtualization brings all the computing resources together under a common interface, where they can be viewed and managed as a single system.

This solves two major problems for the data center. To begin with, it cuts down the time needed for configuring and assigning resources, since the virtualization software dynamically assigns the traffic load to the best available server; otherwise, the administrator has to set up the services handled by each machine. It also cuts costs by reducing over-provisioning.
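As a rough illustration of the placement decision described above, here is a minimal sketch in Python that routes each incoming request to the least-loaded server in a shared pool. The server names, load figures and per-request cost are invented for the example; real virtualization software uses far more sophisticated scheduling.

```python
# Minimal sketch of load-based placement across a virtualized server pool.
# Server names, load figures and request cost are illustrative assumptions.

servers = {
    "blade-01": 0.35,   # current utilization, 0.0 to 1.0
    "blade-02": 0.80,
    "blade-03": 0.15,
}

def least_loaded(pool):
    """Return the name of the server with the lowest current load."""
    return min(pool, key=pool.get)

def route_request(pool, cost=0.05):
    """Send one unit of work to the least-loaded server and record the added load."""
    target = least_loaded(pool)
    pool[target] += cost
    return target

for _ in range(5):
    print("routed to", route_request(servers))
```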

A typical scenario today is for each application to be assigned to its own server, with a second server acting as a backup or development server. Both servers need to be oversized so they can comfortably handle the greatest anticipated traffic load. With virtualization, however, this per-application over-provisioning can stop, since all the available servers are viewed as a single system.
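A back-of-the-envelope calculation makes the savings concrete. The figures below are purely illustrative assumptions, not data from the article: dedicated primary/backup pairs must each be sized for an application's peak, while a shared pool only has to cover the combined load plus a headroom margin.

```python
# Hypothetical capacity comparison: dedicated server pairs vs. a shared pool.
# All figures are illustrative assumptions, not data from the article.

app_peaks = [40, 25, 60, 30]   # peak load per application, arbitrary units

# Dedicated model: each application gets a primary and a backup,
# and both machines are sized for that application's peak.
dedicated_capacity = sum(peak * 2 for peak in app_peaks)

# Pooled model: size the pool for the combined peak plus 25% headroom,
# on the simplifying assumption that the peaks do not all coincide.
pooled_capacity = sum(app_peaks) * 1.25

print("dedicated capacity:", dedicated_capacity)
print("pooled capacity:   ", pooled_capacity)
print("reduction: {:.0%}".format(1 - pooled_capacity / dedicated_capacity))
```

Under these particular numbers, the pooled model needs about 38% less raw capacity, and the gap widens as more applications share the pool.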

Evolving Standards

Finally, the data center of the future will be based on common standards in order to ensure greater interoperability and ease the management burden. Currently, there are two competing specifications.

One of these is the Data Center Markup Language (DCML). DCML is an XML-based specification that provides a structured model and encoding to describe, construct, replicate, and recover data center environments and elements. It is a new effort, started by EDS and Opsware in mid-October 2003. Six weeks later, the DCML Organization had a website (www.dcml.org), about fifty members and plans to issue its 1.0 specification for public comment by the end of the year. Eventually, the organization will submit the spec to a standards body such as the Distributed Management Task Force (DMTF) for approval.
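Since the 1.0 specification has not yet been published, any concrete example can only be a hypothetical sketch. The Python fragment below generates the general kind of XML description DCML is aiming at; the element and attribute names are invented for illustration and are not taken from the specification.

```python
# Hypothetical sketch of an XML description of a data center element.
# Element and attribute names are invented; they are NOT from the DCML spec.
import xml.etree.ElementTree as ET

env = ET.Element("environment", name="web-tier")
server = ET.SubElement(env, "server", id="blade-03", os="linux")
ET.SubElement(server, "application", name="apache", version="2.0")
ET.SubElement(server, "dependency", ref="db-cluster-01")

print(ET.tostring(env, encoding="unicode"))
```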

Microsoft, meanwhile, is offering its own XML specification called the System Definition Model (SDM). Last May, the company demonstrated SDM in conjunction with HP. SDM helps to automatically configure Windows servers and applications.

The DCML Organization says its standard will accept SDM information in order to manage Windows servers as part of a heterogeneous environment. Microsoft, however, is not a member of the DCML Organization. Also missing are major hardware manufacturers including Dell, IBM, Hitachi and HP. Computer Associates, BEA, BMC and other major management software vendors, however, are part of the DCML Organization.

Those are some of the factors affecting the future development of data centers. But what will all this add up to from the viewpoint of a data center manager? To begin with, the job will be more about provisioning services than about knowing the ins and outs of all the center's components. Just as consumer hardware and applications are plug-and-play, most enterprise applications and hardware will self-configure.

Barring a disaster like SCO winning its lawsuits, many if not all of your machines will be running Linux. Autonomic systems will correct most errors without human intervention. Open standards will make interoperability problems a thing of the past, and the hardware costs associated with stocking the new-age data center will be marginal. And while the data center won't run itself, it will be easier to manage than ever before.
