PRODUCT BRIEF
ConnectX®-5 EN Card
The embedded PCIe switch enables customers to build standalone storage or Machine Learning appliances. As with earlier generations of ConnectX adapters, standard block and file access protocols leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, which allows different servers to interconnect without involving the Top-of-Rack (ToR) switch. Leveraging Host Chaining, ConnectX-5 lowers the data center's total cost of ownership (TCO) by reducing CAPEX (cables, NICs, and switch port expenses). OPEX is also reduced by cutting down on switch port management and overall power usage.

Mellanox Multi-Host® technology allows multiple hosts to be connected to a single adapter by separating the PCIe interface into multiple, independent interfaces.

The portfolio also offers Mellanox Socket-Direct® configurations, which enable servers without x16 PCIe slots to split the card's 16-lane PCIe bus into two 8-lane buses on dedicated cards connected by a harness.

Host Management
Host Management includes NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller (BMC) interface, as well as PLDM for Monitor and Control (DSP0248) and PLDM for Firmware Update (DSP0267).
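As noted above, block and file storage protocols on ConnectX-5 ride on RoCE/RDMA. The following is a minimal, hedged libibverbs sketch, not part of this brief, showing the basic RDMA building block those protocols use: opening an RDMA device and registering a buffer the NIC can access directly. The device index, buffer size, and access flags are illustrative assumptions, and most error handling is trimmed for brevity.

/* Hedged libibverbs sketch: open the first RDMA-capable device (for
 * example, a ConnectX-5 port exposed through RoCE) and register a
 * buffer for remote access. Illustration only. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devices[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register 4 KiB so the NIC can read and write it directly. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("device %s: registered %zu bytes, rkey=0x%x\n",
           ibv_get_device_name(devices[0]), len, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devices);
    return 0;
}

Compile against the libibverbs development package (link with -libverbs). A complete RoCE transfer would additionally create queue pairs and exchange the rkey with the remote peer.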
Compatibility
PCI Express Interface
– PCIe Gen 4.0
– PCIe Gen 3.0, 1.1 and 2.0 compatible
– 2.5, 5.0, 8.0, 16.0 GT/s link rate
– Auto-negotiates to x16, x8, x4, x2, or x1 lane(s)
– PCIe Atomic
– TLP (Transaction Layer Packet) Processing Hints (TPH)
– Embedded PCIe Switch: up to 8 bifurcations
– PCIe switch Downstream Port Containment (DPC) enablement for PCIe hot-plug
– Access Control Service (ACS) for peer-to-peer secure communication
– Advanced Error Reporting (AER)
– Process Address Space ID (PASID) Address Translation Services (ATS)
– IBM CAPI v2 support (Coherent Accelerator Processor Interface)
– Support for MSI/MSI-X mechanisms

Operating Systems/Distributions*
– RHEL/CentOS
– Windows
– FreeBSD
– VMware
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF-2)

Connectivity
– Interoperability with Ethernet switches (up to 100GbE)
– Passive copper cable with ESD protection
– Powered connectors for optical and active cable support
Features*
Ethernet
– Jumbo frame support (9.6KB)

Enhanced Features
– Hardware-based reliable transport
– Collective operations offloads
– Vector collective operations offloads
– Mellanox PeerDirect® RDMA (aka GPUDirect®) communication acceleration
– 64/66 encoding
– Extended Reliable Connected transport (XRC)
– Dynamically Connected Transport (DCT)
– Enhanced Atomic operations
– Advanced memory mapping support, allowing user mode registration and remapping of memory (UMR)
– On demand paging (ODP)
– MPI Tag Matching
– Rendezvous protocol offload
– Out-of-order RDMA supporting Adaptive Routing
– Burst buffer offload
– In-Network Memory registration-free RDMA memory access

CPU Offloads
– RDMA over Converged Ethernet (RoCE)
– TCP/UDP/IP stateless offload
– LSO, LRO, checksum offload
– RSS (also on encapsulated packet), TSS, HDS, VLAN and MPLS tag insertion/stripping, Receive flow steering
– Data Plane Development Kit (DPDK) for kernel bypass applications (see the sketch below)
– Open VSwitch (OVS) offload using ASAP²
  • Flexible match-action flow tables
  • Tunneling encapsulation/de-capsulation
– Intelligent interrupt coalescence
– Header rewrite supporting hardware offload of NAT router

Storage Offloads
– NVMe over Fabric offloads for target machine
– T10 DIF – Signature handover operation at wire speed, for ingress and egress traffic
– Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF

Overlay Networks
– RoCE over Overlay Networks
– Stateless offloads for overlay network tunneling protocols
– Hardware offload of encapsulation and decapsulation of VXLAN, NVGRE, and GENEVE overlay networks

Hardware-Based I/O Virtualization - Mellanox ASAP²
– Single Root IOV
– Address translation and protection
– VMware NetQueue support
  • SR-IOV: Up to 512 Virtual Functions (see the sketch below)
  • SR-IOV: Up to 8 Physical Functions per host
– Virtualization hierarchies (e.g., NPAR when enabled)
  • Virtualizing Physical Functions on a physical port
  • SR-IOV on every Physical Function
– Configurable and user-programmable QoS
– Guaranteed QoS for VMs

Management and Control
– NC-SI over MCTP over SMBus and NC-SI over MCTP over PCIe - Baseboard Management Controller interface
– PLDM for Monitor and Control DSP0248
– PLDM for Firmware Update DSP0267
– SDN management interface for managing the eSwitch
– I2C interface for device control and configuration
– General Purpose I/O pins
– SPI interface to Flash
– JTAG IEEE 1149.1 and IEEE 1149.6

Remote Boot
– Remote boot over Ethernet
– Remote boot over iSCSI
– Unified Extensible Firmware Interface (UEFI)
– Pre-execution Environment (PXE)
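The SR-IOV sketch referenced in the list above is given here. It is a minimal illustration, assuming a Linux host where the adapter's Physical Function is already bound to its driver; the PCI address 0000:03:00.0 and the count of 8 VFs are hypothetical examples, not values from this brief. Virtual Functions are requested through the kernel's standard sriov_numvfs sysfs attribute:

/* Hedged sketch: request SR-IOV Virtual Functions via the standard
 * Linux sysfs interface. PCI address and VF count are assumptions. */
#include <stdio.h>

int main(void)
{
    const char *attr = "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
    FILE *f = fopen(attr, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return 1;
    }
    /* Ask the driver to create 8 VFs; ConnectX-5 supports up to 512. */
    fprintf(f, "8");
    fclose(f);
    return 0;
}

Writing 0 to the same attribute removes the Virtual Functions again.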
* This section describes hardware features and capabilities. Please refer to the driver and firmware release notes for feature availability.
** When using Mellanox Socket Direct or Mellanox Multi-Host in virtualization or dual-port use cases, some restrictions may apply. For further details, contact Mellanox Customer Support.
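The DPDK sketch referenced under CPU Offloads is given here. It is a minimal, hedged example of the kernel-bypass entry point only: it initializes the DPDK Environment Abstraction Layer and counts the Ethernet ports the poll-mode drivers expose. It does not configure queues or use any ConnectX-5-specific offload, and the EAL options are supplied on the command line.

/* Hedged DPDK sketch: initialize the EAL and report how many Ethernet
 * ports the poll-mode drivers expose. Illustration only. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    /* EAL options (core list, device selection, ...) are passed on the
     * command line, e.g. "./app -l 0-1". */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }

    printf("%u DPDK-capable Ethernet ports available\n",
           (unsigned)rte_eth_dev_count_avail());

    rte_eal_cleanup();
    return 0;
}

A real application would then configure each port's RX/TX queues and poll them from dedicated cores.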
Standards*
– IEEE 802.3cd, 50, 100 and 200 Gigabit Ethernet
– IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
– IEEE 802.3by, Ethernet Consortium 25, 50 Gigabit Ethernet supporting all FEC modes
– IEEE 802.3ba 40 Gigabit Ethernet
– IEEE 802.3ae 10 Gigabit Ethernet
– IEEE 802.3az Energy Efficient Ethernet (supports only "Fast-Wake" mode)
– IEEE 802.3ap based auto-negotiation and KR startup
– IEEE 802.3ad, 802.1AX Link Aggregation
– IEEE 802.1Q, 802.1P VLAN tags and priority
– IEEE 802.1Qau (QCN) Congestion Notification
– IEEE 802.1Qaz (ETS)
– IEEE 802.1Qbb (PFC)
– IEEE 802.1Qbg
– IEEE 1588v2
– 25G/50G Ethernet Consortium "Low Latency FEC" for 50/100/200GE PAM4 links
– PCI Express Gen 3.0 and 4.0