Saturday, April 17, 2021

Intel Rethinks Server Racks

Intel thinks the time has come to redesign server racks. Instead of multiple self-contained systems, Intel envisions an architecture in which all processors are grouped together and all storage is grouped together, with 100-gigabit-per-second optical connections tying everything together.

All Things D’s Arik Hesseldahl reported, “In comments at the Intel Developer Forum in Beijing overnight, Diane Bryant, senior vice president and head of [Intel’s] Datacenter and Connected Systems Group, described a rethink of how data centers might be designed. Currently, individual servers, each with its own computing and storage, are being packed tightly together in a rack, and in turn packed into a room with other similar racks. Intel sees a world where all the computing and storage portions are separated. CPUs would be grouped together so they could be cooled together. They would in turn be linked to storage infrastructure by screaming-fast optical connections running as fast as 100 gigabits per second.”

ServerWatch quoted Intel’s Lisa Graff who said, “We’re putting together all elements of the rack together in a reference architecture, be that the compute node, the photonics piece with the fabric, storage including SSDs, and switches. It’s a whole kit of building blocks to be able to achieve this vision of maximizing the flexibility, efficiency and cost effectiveness in the rack.”

Rich Miller with Data Center Knowledge explained, “At IDF Beijing, Intel articulated this vision of how rack scale architecture will change how servers are built and refreshed. ‘Ultimately, the industry will move to subsystem disaggregation where processing, memory and I/O will be completely separated into modular subsystems, making it possible to easily upgrade these subsystems rather than doing a complete system upgrade.’ Separating the processor refresh cycle from other server components would create some interesting possibilities for Intel, which currently works closely with OEMs to coordinate the inclusion of new chips in new server releases. Facebook hardware guru Frank Frankovsky has said the ability to easily swap out processors could transform the way chips are procured at scale, perhaps shifting to a subscription model.”
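The disaggregation idea Intel describes can be made concrete with a small sketch. The code below is purely illustrative (the class and method names are my own, not an Intel API): it models a rack whose processing, memory, and I/O are separate modules, so one subsystem can be refreshed without replacing the rest of the system.

```python
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    """One independently replaceable module in a disaggregated rack."""
    name: str
    generation: int

@dataclass
class DisaggregatedRack:
    # Processing, memory, and I/O live in separate modules rather than
    # inside one monolithic server, per the rack scale architecture vision.
    subsystems: dict = field(default_factory=lambda: {
        "compute": Subsystem("compute", generation=1),
        "memory": Subsystem("memory", generation=1),
        "io": Subsystem("io", generation=1),
    })

    def upgrade(self, name: str) -> None:
        # Swap out just the named module; all other modules are untouched.
        self.subsystems[name].generation += 1

rack = DisaggregatedRack()
rack.upgrade("compute")  # new CPUs arrive; memory and I/O stay in place
print(rack.subsystems["compute"].generation)  # compute is now generation 2
print(rack.subsystems["memory"].generation)   # memory is still generation 1
```

This is the property Frankovsky's comment hinges on: if the processor module can be swapped in isolation, its refresh cycle (and procurement model) can be decoupled from everything else in the rack.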

Ars Technica’s Jon Brodkin added, “The networking technology used by typical data centers isn’t quite fast enough to power disaggregated racks just yet. That’s why Intel is developing silicon photonics technology that uses light to move data at up to 100Gbps. Silicon photonics has the added benefit of reducing the amount of cabling needed in a rack. ‘Silicon photonics made with inexpensive silicon rather than expensive and exotic optical materials provides a distinct cost advantage over older optical technologies in addition to providing greater speed, reliability, and scalability benefits,’ Intel said in January, when it announced that it has produced engineering samples of the technology. ‘Businesses with server farms or massive data centers could eliminate performance bottlenecks and ensure long-term upgradability while saving significant operational costs in space and energy.'”
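To put the 100 Gbps figure in perspective, here is a back-of-the-envelope calculation of my own (not from the article): idealized time to move one terabyte between disaggregated compute and storage at a few link speeds, ignoring protocol overhead.

```python
def transfer_seconds(bytes_to_move: float, link_gbps: float) -> float:
    """Idealized transfer time: payload bits divided by raw link rate.

    Ignores framing and protocol overhead, so real transfers run slower.
    """
    bits = bytes_to_move * 8
    return bits / (link_gbps * 1e9)

one_tb = 1e12  # one terabyte (decimal)
for gbps in (10, 40, 100):
    print(f"{gbps:>3} Gbps: {transfer_seconds(one_tb, gbps):.0f} s")
# A 100 Gbps link moves 1 TB in 80 s, versus 800 s over 10 Gigabit Ethernet.
```

That order-of-magnitude gap is why Intel argues ordinary data-center networking "isn't quite fast enough" for disaggregated racks, while silicon photonics is.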
