PRODUCT BRIEF
World’s first 200Gb/s HDR InfiniBand and Ethernet network adapter card, offering industry-leading performance, smart offloads and In-Network Computing, leading to the highest return on investment for High-Performance Computing, Cloud, Web 2.0, Storage and Machine Learning applications.

HIGHLIGHTS

FEATURES
– Up to 200Gb/s connectivity per port
– Max bandwidth of 200Gb/s
– Up to 215 million messages/sec
– Sub 0.6usec latency
– Block-level XTS-AES mode hardware encryption
– FIPS capable
– Advanced storage capabilities including block-level encryption and checksum offloads
– Supports both 50G SerDes (PAM4) and 25G SerDes (NRZ) based ports
– Best-in-class packet pacing with sub-nanosecond accuracy
– PCIe Gen 3.0 and Gen 4.0 support
– RoHS compliant
– ODCC compatible

BENEFITS
– Industry-leading throughput, low CPU utilization and high message rate
– Highest performance and most intelligent fabric for compute and storage infrastructures
– Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
– Mellanox Host Chaining technology for economical rack design
– Smart interconnect for x86, Power, Arm, GPU and FPGA-based compute and storage platforms
– Flexible programmable pipeline for new network flows
– Efficient service chaining enablement
– Increased I/O consolidation efficiencies, reducing data center costs & complexity

ConnectX-6 Virtual Protocol Interconnect® (VPI) cards are a groundbreaking addition to the Mellanox ConnectX series of industry-leading adapter cards. Providing two ports of 200Gb/s for InfiniBand and Ethernet connectivity, sub-600ns latency and 215 million messages per second, ConnectX-6 VPI cards enable the highest-performance and most flexible solution aimed at meeting the continually growing demands of data center applications. In addition to all the existing innovative features of past versions, ConnectX-6 cards offer a number of enhancements that further improve performance and scalability.

ConnectX-6 VPI cards support HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds, as well as 200, 100, 50, 40, 25, and 10 Gb/s Ethernet speeds.

HPC Environments

Over the past decade, Mellanox has consistently driven HPC performance to new record heights. With the introduction of the ConnectX-6 adapter card, Mellanox continues to pave the way with new features and unprecedented performance for the HPC market.

ConnectX-6 VPI delivers the highest throughput and message rate in the industry. As the first adapter to deliver 200Gb/s HDR InfiniBand, 100Gb/s HDR100 InfiniBand and 200Gb/s Ethernet speeds, ConnectX-6 VPI is the perfect product to lead HPC data centers toward Exascale levels of performance and scalability.

ConnectX-6 supports the evolving co-design paradigm, which transforms the network into a distributed processor. With its In-Network Computing and In-Network Memory capabilities, ConnectX-6 offloads computation even further to the network, saving CPU cycles and increasing network efficiency.

ConnectX-6 VPI utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies, delivering low latency and high performance. ConnectX-6 further enhances RDMA network capabilities by delivering end-to-end packet-level flow control.

Machine Learning and Big Data Environments

Data analytics has become an essential function within many enterprise data centers, clouds and hyperscale platforms. Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200Gb/s throughput, ConnectX-6 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require.

ConnectX-6 utilizes RDMA technology to deliver low latency and high performance. ConnectX-6 further enhances RDMA network capabilities by delivering end-to-end packet-level flow control.
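As an illustration only (not part of this brief), the sketch below shows how an application might enumerate RDMA-capable devices such as ConnectX-6 and query their attributes through the standard libibverbs API; the same verbs interface is used whether a port is configured for InfiniBand or for RoCE over Ethernet. The file name and build command are assumptions.

/* rdma_list.c - minimal sketch: list RDMA devices and a few attributes.
 * Assumed build: gcc rdma_list.c -o rdma_list -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        struct ibv_device_attr attr;
        if (ctx && !ibv_query_device(ctx, &attr))
            printf("%s: %d port(s), max QPs %d\n",
                   ibv_get_device_name(list[i]),
                   attr.phys_port_cnt, attr.max_qp);
        if (ctx)
            ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}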
Compatibility
PCI Express Interface
– PCIe Gen 4.0, 3.0, 2.0, 1.1 compatible
– 2.5, 5.0, 8, 16 GT/s link rate
– 32 lanes as 2x 16-lanes of PCIe
– Support for PCIe x1, x2, x4, x8, and x16 configurations
– PCIe Atomic
– TLP (Transaction Layer Packet) Processing Hints (TPH)
– PCIe switch Downstream Port Containment (DPC) enablement for PCIe hot-plug
– Advanced Error Reporting (AER)
– Access Control Service (ACS) for peer-to-peer secure communication
– Process Address Space ID (PASID) Address Translation Services (ATS)
– IBM CAPIv2 (Coherent Accelerator Processor Interface)
– Support for MSI/MSI-X mechanisms

Operating Systems/Distributions*
– RHEL, SLES, Ubuntu and other major Linux distributions
– Windows
– FreeBSD
– VMware
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF-2)

Connectivity
– Interoperability with InfiniBand switches (up to HDR, as 4 lanes of 50Gb/s data rate)
– Interoperability with Ethernet switches (up to 200GbE, as 4 lanes of 50Gb/s data rate)
– Passive copper cable with ESD protection
– Powered connectors for optical and active cable support
Features*
InfiniBand
– HDR / HDR100 / EDR / FDR / QDR / DDR / SDR
– IBTA Specification 1.3 compliant
– RDMA, Send/Receive semantics
– Hardware-based congestion control
– Atomic operations
– 16 million I/O channels
– 256 to 4Kbyte MTU, 2Gbyte messages
– 8 virtual lanes + VL15

Ethernet
– 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
– IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
– IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet, supporting all FEC modes
– IEEE 802.3ba 40 Gigabit Ethernet
– IEEE 802.3ae 10 Gigabit Ethernet
– IEEE 802.3az Energy Efficient Ethernet
– IEEE 802.3ap based auto-negotiation and KR startup
– IEEE 802.3ad, 802.1AX Link Aggregation
– IEEE 802.1Q, 802.1P VLAN tags and priority
– IEEE 802.1Qau (QCN) – Congestion Notification
– IEEE 802.1Qaz (ETS)
– IEEE 802.1Qbb (PFC)
– IEEE 802.1Qbg
– IEEE 1588v2
– Jumbo frame support (9.6KB)

Enhanced Features
– Hardware-based reliable transport
– Collective operations offloads
– Vector collective operations offloads
– Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration
– 64/66 encoding
– Enhanced Atomic operations
– Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
– Extended Reliable Connected transport (XRC)
– Dynamically Connected Transport (DCT)
– On demand paging (ODP)
– MPI Tag Matching
– Rendezvous protocol offload
– Out-of-order RDMA supporting Adaptive Routing
– Burst buffer offload
– In-Network Memory registration-free RDMA memory access

CPU Offloads
– RDMA over Converged Ethernet (RoCE)
– TCP/UDP/IP stateless offload
– LSO, LRO, checksum offload
– RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
– Data Plane Development Kit (DPDK) for kernel bypass applications
– Open vSwitch (OVS) offload using ASAP2
  • Flexible match-action flow tables
  • Tunneling encapsulation / de-capsulation
– Intelligent interrupt coalescence
– Header rewrite supporting hardware offload of NAT router

Storage Offloads
– Block-level encryption: XTS-AES 256/512 bit key
– NVMe over Fabric offloads for target machine
– T10 DIF - signature handover operation at wire speed, for ingress and egress traffic
– Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Overlay Networks
– RoCE over overlay networks
– Stateless offloads for overlay network tunneling protocols
– Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

Hardware-Based I/O Virtualization - Mellanox ASAP²
– Single Root IOV
– Address translation and protection
– VMware NetQueue support
  • SR-IOV: Up to 1K Virtual Functions
  • SR-IOV: Up to 8 Physical Functions per host
– Virtualization hierarchies (e.g., NPAR)
  • Virtualizing Physical Functions on a physical port
  • SR-IOV on every Physical Function
– Configurable and user-programmable QoS
– Guaranteed QoS for VMs

HPC Software Libraries
– HPC-X, OpenMPI, MVAPICH, MPICH, OpenSHMEM, PGAS and varied commercial packages (a minimal MPI sketch follows this feature list)

Management and Control
– NC-SI, MCTP over SMBus and MCTP over PCIe - Baseboard Management Controller interface
– PLDM for Monitor and Control DSP0248
– PLDM for Firmware Update DSP0267
– SDN management interface for managing the eSwitch
– I2C interface for device control and configuration
– General Purpose I/O pins
– SPI interface to flash
– JTAG IEEE 1149.1 and IEEE 1149.6

Remote Boot
– Remote boot over InfiniBand
– Remote boot over Ethernet
– Remote boot over iSCSI
– Unified Extensible Firmware Interface (UEFI)
– Pre-execution Environment (PXE)
* This section describes hardware features and capabilities. Please refer to the driver and firmware release notes for feature availability.
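As an illustration only (not part of this brief), the following minimal MPI sketch shows the tagged point-to-point pattern that libraries such as HPC-X or Open MPI run over a ConnectX-6 fabric, the kind of exchange that hardware MPI Tag Matching can accelerate. The file name, buffer contents and launch commands are assumptions.

/* ping.c - minimal sketch: tagged send/receive between two MPI ranks.
 * Assumed build/run: mpicc ping.c -o ping ; mpirun -np 2 ./ping
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    char buf[64] = "hello over the fabric";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Tagged send; the matching at the receiver may be offloaded. */
        MPI_Send(buf, (int)sizeof(buf), MPI_CHAR, 1, 42, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, (int)sizeof(buf), MPI_CHAR, 0, 42, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received: %s\n", buf);
    }

    MPI_Finalize();
    return 0;
}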
Notes: Dimensions without brackets are 167.65mm x 68.90mm. All tall-bracket adapters are shipped with the tall bracket mounted and a short bracket as an accessory.
The last digit of the OCP3.0 OPN-suffix displays the OPN’s default bracket option: I = Internal Lock; E = Ejector Latch. For other bracket types, contact Mellanox.