InfiniBand
InfiniBand, also called System I/O, is a point-to-point bidirectional serial link that, in storage area networks, connects computers to fabric switches. It has also been used as an interconnect inside computer chassis. The modern InfiniBand specification, however, also defines software functionality for routing and for end-to-end protocols.
It supports signaling rates of 10, 20, and 40 Gbps, and, as with PCI Express, links can be run in parallel for additional bandwidth. For physical transmission, it supports board-level connections, both active and passive copper cabling (up to 30 meters, depending on speed), and fiber-optic cabling (up to 10 km).[1]
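The quoted figures are signaling rates; the usable data rate is lower because these InfiniBand generations use 8b/10b line encoding (10 line bits carry 8 data bits). The sketch below illustrates the arithmetic for the common 4x link width; the speed names (SDR/DDR/QDR) and per-lane rates are standard for the generations described here, though the function itself is only illustrative.

```python
# Illustrative arithmetic for InfiniBand link rates.
# Assumes 8b/10b encoding, as used by the SDR/DDR/QDR generations.
LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}  # signaling rate per lane
ENCODING_EFFICIENCY = 0.8  # 8b/10b: 8 data bits per 10 line bits

def link_rates(speed: str, width: int):
    """Return (signaling rate, effective data rate) in Gbps for a link."""
    signaling = LANE_GBPS[speed] * width
    return signaling, signaling * ENCODING_EFFICIENCY

for speed in ("SDR", "DDR", "QDR"):
    sig, data = link_rates(speed, 4)  # 4x links are the common configuration
    print(f"4x {speed}: {sig:g} Gbps signaling, {data:g} Gbps data")
```

A 4x QDR link, for example, signals at 40 Gbps but delivers 32 Gbps of data, which is one reason signaling-rate comparisons against Ethernet need care.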
The InfiniBand Trade Association (ITA) sees it as complementary to Ethernet and Fibre Channel technologies, which they see as appropriate feeds into an InfiniBand core switching fabric. InfiniBand has lower latency than Ethernet of the same signaling speed. Case-by-case analysis, however, is required to tell whether InfiniBand at a 10 Gbps signaling rate, for example, is more cost-effective than a 40 Gbps Ethernet fabric.
ITA recently released the Remote Direct Memory Access over Converged Ethernet (RoCE, pronounced "Rocky") high-performance computing (HPC) architecture, which layers InfiniBand on top of the physical and data link layers of IEEE 802.3, but replaces the TCP/IP end-to-end and routing protocols with their InfiniBand equivalents. With the advent of 40 and 100 Gbps Ethernet, the idea of running different middle-layer protocols over a common low-level architecture has attractions. Vendors implementing these protocols in dedicated processor software expect to see latencies of 7-10 microseconds, while pure hardware vendor Mellanox predicts it can achieve 1.3 microseconds.[2]
History
InfiniBand came from the merger of two technologies. Compaq, IBM, and Hewlett-Packard developed the first, Future I/O; Tandem's ServerNet was the ancestor of Compaq's contribution. The other half of the merger came from the Next Generation I/O team of Intel, Microsoft, and Sun.
InfiniBand was initially deployed as an HPC interconnect, but it was always envisioned as a "system area network", interconnecting computers, network devices, and storage arrays in data centers.
Vendor support
Oracle uses InfiniBand as the key technology in the Sun Oracle Database Machine. Cisco Systems dropped support for InfiniBand switches in 2009.[3]
Software
The OpenFabrics Alliance has specified software stacks for InfiniBand. Some vendors say they can achieve better performance, in the highly specialized supercomputing environment, by using proprietary extensions.[4]
Topologies
For HPC clusters, fat tree is the most common topology, but some clusters use torus or mesh topologies instead, especially when interconnecting thousands of processors.
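The appeal of the fat tree is that it can be built entirely from identical fixed-radix switches while remaining non-blocking. A common formulation (the k-ary fat tree) uses k-port switches arranged in k pods of edge and aggregation switches beneath a core layer; the sketch below computes its size under that assumption. Real InfiniBand fabrics vary in radix and oversubscription, so this is illustrative only.

```python
# Sizing a non-blocking k-ary fat tree built from k-port switches.
# This follows one common formulation; real fabrics often oversubscribe.
def fat_tree_size(k: int):
    """Return (host count, total switch count) for a k-ary fat tree."""
    assert k % 2 == 0, "k must be even"
    edge = aggregation = k * (k // 2)   # k pods, k/2 switches in each layer per pod
    core = (k // 2) ** 2                # core layer connecting the pods
    hosts = k ** 3 // 4                 # k/2 hosts on each edge switch
    return hosts, edge + aggregation + core

print(fat_tree_size(36))  # 36-port switches -> (11664, 1620)
```

For example, 36-port switches (a common InfiniBand radix) yield a fabric of 11,664 hosts using 1,620 switches, which shows why large clusters sometimes trade the fat tree's uniform bandwidth for the lower switch count of a torus or mesh.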
References
- Specification FAQ, InfiniBand Trade Association
- Rick Merritt (19 April 2010), "New converged network blends Ethernet, Infiniband", EE Times
- "Cisco's move out of InfiniBand early, but may ultimately prove correct", Inside HPC, 17 August 2009
- "3 Questions: David Smith on InfiniBand", Supercomputing Online