InfiniCon Switches Boost Performance

The company announces high-end InfiniBand switches to keep up with converging computing needs.

A day after the new supercomputer rankings were released, a leading InfiniBand gear maker unveiled a new switch platform for boosting the performance of large computing clusters and grids.

InfiniCon Systems on Tuesday unveiled the InfinIO 9000 Series switches for large-scale computing, as well as the InfinIO 5000 Series, which provides bisectional clustering bandwidth for up to 12 InfiniBand-attached servers while simultaneously connecting them to SAN and WAN/LAN networks.

InfiniBand is an increasingly popular interconnect technology that pipes data between processors and I/O devices at up to 30 gigabits per second. The technology has been positioned as a replacement for the PCI bus in high-end servers and PCs.

Companies like InfiniCon and rivals Topspin and Voltaire have been lining up partners and customers at a prodigious rate, fearing they will be left in the dust by the competition. That competition has prompted vendors to create InfiniBand switches that can handle large-scale computing.

Charles Foley, executive vice president of InfiniCon, said the industry is moving toward large-scale switching systems, a dramatic shift from as recently as two years ago, when 16-, 32- and 64-node clusters were common.

"The average cluster size is far bigger; it used to be that if you had a cluster of 256, it was an architectural engineering feat that was cause for a big celebration," Foley told "Now a cluster of 2,000 nodes is common because of the success of commodity 64-bit processors and high-speed interconnects."

Foley said that another key driver for large-scale clusters is the convergence of storage and the clusters, which used to be considered independent of one another. As the need for data storage increases to the tens of terabytes, Foley said InfiniBand will become increasingly necessary to help clustering and grid computing.

Accordingly, InfiniCon's new 9000 Series switches combine InfiniBand with Fibre Channel and Ethernet to scale out processing for large clustering applications. Users can choose the 144-port 9100 model or the 288-port 9200 model, with the 9200 delivering 5.76 terabits per second of bidirectional throughput.
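That throughput figure lines up with simple arithmetic if each port is assumed to be a 4x InfiniBand link carrying 10 Gbps in each direction (the per-port rate is an assumption; the announcement does not state it):

$$288 \text{ ports} \times 10\ \text{Gbps} \times 2\ \text{directions} = 5{,}760\ \text{Gbps} = 5.76\ \text{Tbps}$$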

Clusters now have to support hundreds of thousands of users, whereas a couple of years ago a massive cluster served three or four professors posing hypothetical questions to it, Foley explained. To connect the servers to Fibre Channel SANs, NAS appliances and networks, the 9000 Series systems can be configured with virtual I/O blades, which also contain InfiniBand ports.

"InfiniBand is a wonderfully convergent vehicle for scale-out computing, because it handles massive bandwidth, low latency and multiple protocols simultaneously at wire speed," Foley said.

Yankee Group Senior Analyst Jamie Gruener applauded the products in a statement.

"Systems such as the InfinIO 9000 will be cornerstones in the broad industry move to fabric/grid/cluster computing," he said. "The performance and scalability make commodity 'scale out' processing viable, and the integration of Fibre Channel and Ethernet allow seamless integration into the existing environment."

The InfinIO 9000 Series will be available in Q3 2004.

While large-scale clustering is part of InfiniCon's repertoire, the King of Prussia, Pa.-based company also serves the lower end of the computing pool with the InfinIO 5000. The company said it devised the 5000 Series to satisfy the growing need for server clusters that support smaller high-performance computing applications and database applications, such as Oracle Database 10g.

In its base configuration, the switch is geared for blade server deployments and lets customers build fully managed 12-node, 10Gbps clusters. The 5000 Series starts at $9,995 and will be available this summer.
