
InfiniBand (IB) is a computer networking communications standard used in high-performance
computing that features very high throughput and very low latency. It is used for data
interconnect both among and within computers. InfiniBand is also used as either a direct or
switched interconnect between servers and storage systems, as well as an interconnect between
storage systems. It is designed to be scalable and uses a switched fabric network topology.
From 2014 until about 2016, it was the most commonly used interconnect in the TOP500 list of
supercomputers.[1]

Mellanox (acquired by Nvidia) manufactures InfiniBand host bus adapters and network switches,
which are used by large computer system and database vendors in their product lines.[2] As a
computer cluster interconnect, IB competes with Ethernet, Fibre Channel, and Intel Omni-Path.
The technology is promoted by the InfiniBand Trade Association.

History

InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next
Generation I/O (NGIO). NGIO was led by Intel, with a specification released in 1998,[3] and joined by
Sun Microsystems and Dell. Future I/O was backed by Compaq, IBM, and Hewlett-Packard.[4] This led to
the formation of the InfiniBand Trade Association (IBTA), which included both sets of hardware vendors
as well as software vendors such as Microsoft. At the time it was thought some of the more powerful
computers were approaching the interconnect bottleneck of the PCI bus, in spite of upgrades like
PCI-X.[5] Version 1.0 of the InfiniBand Architecture Specification was released in 2000. Initially, the
IBTA's vision for IB was simultaneously a replacement for PCI in I/O, for Ethernet in the machine room,
as a cluster interconnect, and for Fibre Channel. The IBTA also envisaged decomposing server hardware
onto an IB fabric.

Mellanox had been founded in 1999 to develop NGIO technology, but by 2001 it shipped an InfiniBand
product line called InfiniBridge at speeds of 10 Gbit/s.[6] Following the burst of the dot-com bubble
there was hesitation in the industry to invest in such a far-reaching technology jump.[7] By 2002, Intel
announced that instead of shipping IB integrated circuits ("chips"), it would focus on developing PCI
Express, and Microsoft discontinued IB development in favor of extending Ethernet. Sun and Hitachi
continued to support IB.[8]

In 2003, the System X supercomputer built at Virginia Tech used InfiniBand in what was estimated to be
the third largest computer in the world at the time.[9] The OpenIB Alliance (later renamed OpenFabrics
Alliance) was founded in 2004 to develop an open set of software for the Linux kernel. By February
2005, this support had been accepted into the 2.6.11 Linux kernel.[10][11] In November 2005, storage
devices using InfiniBand were finally released by vendors such as Engenio.[12]

Of the top 500 supercomputers in 2009, Gigabit Ethernet was the internal interconnect technology in
259 installations, compared with 181 using InfiniBand.[13] In 2010, market leaders Mellanox and
Voltaire merged, leaving just one other IB vendor, QLogic, primarily a Fibre Channel vendor.[14] At the
2011 International Supercomputing Conference, links running at about 56 gigabits per second (known as
FDR, see below) were announced and demonstrated by connecting booths in the trade show.[15] In
2012, Intel acquired QLogic's InfiniBand technology, leaving only one independent supplier.[16]

By 2014, InfiniBand was the most popular internal connection technology for supercomputers, although
within two years, 10 Gigabit Ethernet started displacing it.[1] In 2016, it was reported that Oracle
Corporation (an investor in Mellanox) might engineer its own InfiniBand hardware.[2] In 2019 Nvidia
acquired Mellanox, the last independent supplier of InfiniBand products.[17]
