
HP BladeSystem Networking Reference Architecture
HP Virtual Connect FlexFabric Module and VMware vSphere 4

Technical white paper

Table of contents

Executive Summary
Virtual Connect FlexFabric Module Hardware Overview
Designing an HP FlexFabric Architecture for VMware vSphere
  Designing a Highly Available Network Strategy with Virtual Connect FlexFabric modules and Managed VLANs
  Designing a Highly Available Network Strategy with FlexFabric modules and Pass-through VLANs
Designing a vSphere Network Architecture with the Virtual Connect FlexFabric module
  vNetwork Distributed Switch Design
  Hypervisor Load Balancing Algorithms
  HP Virtual Connect and DCC
Appendix A: Virtual Connect Bill of Materials
Appendix B: Terminology cross-reference
Appendix C: Glossary of Terms
For more information

Executive Summary

HP has revolutionized the way IT thinks about networking and server management. With the release of the HP ProLiant BladeSystem Generation 6 servers, along with Virtual Connect Flex-10 Ethernet modules, HP provided a great platform for VMware vSphere. Virtual Connect Flex-10 is the world's first technology to divide and fine-tune 10Gb Ethernet network bandwidth at the server edge. When combined with Virtual Connect, the BladeSystem architecture streamlines the typical change processes for provisioning in the datacenter.

HP has since evolved Virtual Connect Flex-10 to the next level: Virtual Connect FlexFabric modules. By combining the power of ProLiant BladeSystem Generation 7 servers and Virtual Connect, the Virtual Connect FlexFabric module allows customers to consolidate network connections and storage fabrics into a single module. This further reduces infrastructure cost and complexity by eliminating HBA adapters and Fibre Channel modules at the edge.

These servers include virtualization-friendly features such as larger memory capacity, dense population, room for additional mezzanine cards and 6-48 (with Intel Hyper-Threading technology enabled and AMD Opteron 6100-series) processing cores. The following ProLiant BL Servers ship standard with the NC551i FlexFabric Adapter:

• BL465 G7
• BL685 G7

The following ProLiant BL Servers ship with the NC553i FlexFabric Adapter:

• BL460 G7
• BL490 G7
• BL620/680 G7

Additionally, the NC551m provides support for the FlexFabric Adapter in ProLiant BladeSystem G6 servers.

NOTE: Please check the latest NC551m QuickSpecs1 for the official server support matrix.

The Virtual Connect FlexFabric module, along with the FlexFabric Adapter, introduces a new Physical Function called the FlexHBA. The FlexHBA, along with the FlexNIC, adds the unique ability to fine-tune each connection to adapt to your virtual server channels and workloads on-the-fly. The effect of using the Virtual Connect FlexFabric module is a reduction in the number of interconnect modules required to uplink outside of the enclosure, while still maintaining full redundancy across the service console, VMkernel, virtual machine (VM) networks and storage fabrics. This translates to a lower cost infrastructure with fewer management points and cables.

When designing a vSphere Network infrastructure with the Virtual Connect FlexFabric module, there are two frequent network architectures customers choose. This document describes how to design a highly available Virtual Connect strategy with:

1 http://h18004.www1.hp.com/products/blades/components/emulex/nc551m/index.html

• Virtual Connect Managed VLANs – In this design, we are maximizing the management features of Virtual Connect, while providing customers with the flexibility to provide “any networking to any host” within the Virtual Connect domain. Simply put, this design will not over-provision servers, while keeping the number of uplinks used to a minimum. This helps reduce infrastructure cost and complexity by trunking the necessary VLANs (IP Subnets) to the Virtual Connect domain, and minimizing potentially expensive 10Gb uplink ports.
• Virtual Connect Pass-through VLANs – This design addresses customer requirements to support a significant number of VLANs for Virtual Machine traffic. The previous design has a limited number of VLANs it can support. While providing similar server profile network connection assignments as the previous design, more uplink ports are required, and VLAN Tunneling must be enabled within the Virtual Connect domain.

Both designs provide a highly available network architecture, and also take into account enclosure-level redundancy and vSphere cluster design. By spreading the cluster scheme across both enclosures, each can provide local HA in case of network and enclosure failure.

Finally, this document will provide key design best practices for vSphere 4 network architecture with HP FlexFabric, including:

• vDS design for Hypervisor networking
• vSwitch and dvPortGroup load balance algorithms

Virtual Connect FlexFabric Module Hardware Overview
The Virtual Connect FlexFabric module is the first Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) solution introduced into the HP BladeSystem portfolio. It provides 24 line-rate ports and full-duplex 240Gbps bridging in a single DCB-hop fabric. As shown in Image 1, there are 8 faceplate ports. Ports X1-X4 are SFP+ transceiver slots only, which can accept a 10Gb or 8Gb SFP+ transceiver. Ports X5-X8 are SFP and SFP+ capable, and do not support 8Gb SFP+ transceivers.

NOTE: The CX-4 port provided by the Virtual Connect Flex-10 and legacy modules has been deprecated.

Image 1: HP Virtual Connect FlexFabric Module

Ports X7 and X8 are shared with internal Stacking Link ports. If the external port is populated with a transceiver, the internal Stacking Link is disabled.

Important: Even though the Virtual Connect FlexFabric module supports Stacking, stacking
only applies to Ethernet traffic. FC uplinks cannot be consolidated, as it is not possible to
stack the FC ports, nor provide a multi-hop DCB bridging fabric today.

Designing an HP FlexFabric Architecture for VMware vSphere
In this section, we will discuss two different and viable strategies for customers to choose from. Both
provide flexible connectivity for hypervisor environments. We will provide the pros and cons to each
approach, and provide you with the general steps to configure the environment.

Designing a Highly Available Network Strategy with Virtual Connect FlexFabric modules and Managed VLANs

In this design, two HP ProLiant c-Class 7000 Enclosures with Virtual Connect FlexFabric modules are stacked to form a single Virtual Connect management domain2. By stacking Virtual Connect FlexFabric modules, customers can realize the following benefits:

• Consolidated management control plane
• More efficient use of WWID, MAC and Serial Number Pools
• Greater uplink port flexibility and bandwidth
• Profile management across stacked enclosures

Shared Uplink Sets provide administrators the ability to distribute VLANs into discrete and defined Ethernet Networks (vNets). These vNets can then be mapped logically to a Server Profile Network Connection, allowing only the required VLANs to be associated with the specific server NIC port. This also allows customers the flexibility to have various network connections for different physical Operating System instances (i.e., a VMware ESX host and a physical Windows host).

As of the Virtual Connect Firmware 2.30 release, the following Shared Uplink Set rules apply per domain:

• 320 Unique VLANs per Virtual Connect Ethernet module
• 128 Unique VLANs per Shared Uplink Set
• 28 Unique Server Mapped VLANs per Server Profile Network Connection
• Every VLAN on every uplink counts towards the 320-VLAN limit. If a Shared Uplink Set is comprised of multiple uplinks, each VLAN on that Shared Uplink Set is counted multiple times.

2 Only available with Virtual Connect Manager Firmware 2.10 or greater. Please review the Virtual Connect Manager Release Notes for more information regarding domain stacking requirements: http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html
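
The following Python sketch is an illustrative check of the Shared Uplink Set limits listed above, using the counting rule that every VLAN on every uplink counts toward the 320-VLAN module limit. The data structures and function names are hypothetical and are not part of Virtual Connect.

```python
# Illustrative sketch of the per-domain Shared Uplink Set limits described above.
# A SUS is modeled as its uplink ports plus the VLAN IDs it carries; server
# connections are modeled as lists of mapped VLAN IDs. Hypothetical names only.

MAX_VLANS_PER_MODULE = 320
MAX_VLANS_PER_SUS = 128
MAX_MAPPED_VLANS_PER_CONNECTION = 28

def check_limits(shared_uplink_sets, server_connections):
    """shared_uplink_sets: {sus_name: {"uplinks": [...], "vlans": [...]}},
       server_connections: {profile_connection: [vlan, ...]}"""
    errors = []
    # Per-module count: each VLAN is counted once per uplink in the SUS.
    module_vlan_count = sum(len(s["uplinks"]) * len(s["vlans"])
                            for s in shared_uplink_sets.values())
    if module_vlan_count > MAX_VLANS_PER_MODULE:
        errors.append(f"module VLAN count {module_vlan_count} exceeds {MAX_VLANS_PER_MODULE}")
    for name, s in shared_uplink_sets.items():
        if len(s["vlans"]) > MAX_VLANS_PER_SUS:
            errors.append(f"{name}: {len(s['vlans'])} VLANs exceeds {MAX_VLANS_PER_SUS} per SUS")
    for conn, vlans in server_connections.items():
        if len(vlans) > MAX_MAPPED_VLANS_PER_CONNECTION:
            errors.append(f"{conn}: {len(vlans)} mapped VLANs exceeds {MAX_MAPPED_VLANS_PER_CONNECTION}")
    return errors

# Example: one SUS with two uplinks and 100 VLANs counts as 200 toward the module limit.
print(check_limits({"SUS-A": {"uplinks": ["X5", "X6"], "vlans": list(range(1, 101))}},
                   {"Profile1:NIC1": list(range(1, 21))}))
```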

4
By providing two stacked Enclosures, this design allows for not only Virtual Connect FlexFabric module failure, but also Enclosure failure. The uplink ports assigned to each Shared Uplink Set (SUS) were vertically offset for horizontal redundancy purposes, as shown in Figure 1-2.

IP-based storage (NFS and/or iSCSI) can be dedicated and segregated by a separate vNet and assigned uplink port. This design approach allows administrators to dedicate a network (physically switched, directly connected or logical within a Shared Uplink Set) to provide access to IP-based storage arrays.

Directly connecting an IP-based Storage array has certain limitations:

• Each storage array front-end port will require a unique vNet
• Each defined vNet will require separate server network connections
• You are limited to the number of IP-based arrays based on the number of unassigned uplink ports

Virtual Connect has the capability to create an internal, private network without uplink ports, by using the low latency mid-plane connections to facilitate communication. This vNet can be used for cluster heartbeat networks, or in this case VMotion and/or Fault Tolerance traffic. Traffic will not pass to the upstream switch infrastructure, which will eliminate the bandwidth otherwise consumed.

Figure 1-1: Physical VMware vSphere Cluster Design

Figure 1-2 shows the physical cabling. The X5 and X6 Ethernet ports of the FlexFabric module connect to a redundant pair of Top of Rack (ToR) switches, using LACP (802.3ad) for link redundancy. The ToR switches can be placed End of Row to save on infrastructure cost. Ports X7 are used for vertical External Stacking Links, while X8 are used for Internal Stacking Links.

As noted in the previous section, Virtual Connect FlexFabric Stacking Links will only carry Ethernet
traffic, and do not provide any Fibre Channel stacking options. Thus, ports X1 and X2 from each
module are populated with 8Gb SFP+ transceivers, providing 16Gb net FC bandwidth for storage
access. Ports X3 and X4 are available to provide additional bandwidth if FC storage traffic is
necessary.

If additional Ethernet bandwidth is necessary, ports Enc0:Bay2:X5, Enc0:Bay2:X6, Enc1:Bay1:X5, and Enc1:Bay1:X6 can be used for additional Ethernet Networks or Shared Uplink Sets.
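
To summarize the cabling for this Managed VLAN design, the following Python sketch captures the uplink port plan as plain data, based on the port roles described above. The structure, the SUS names, and the grouping of ports into Shared Uplink Sets are illustrative assumptions, not a Virtual Connect configuration format.

```python
# Illustrative data model of the Managed VLAN design uplink plan described above.
# Not a Virtual Connect configuration format; SUS names and groupings are hypothetical.
uplink_plan = {
    "ethernet": {
        "shared_uplink_sets": {
            # Vertically offset uplinks to the redundant ToR pair, using LACP (802.3ad).
            "SUS-A": ["Enc0:Bay1:X5", "Enc0:Bay1:X6"],
            "SUS-B": ["Enc1:Bay2:X5", "Enc1:Bay2:X6"],
        },
        # Spare ports if additional Ethernet bandwidth is needed.
        "available": ["Enc0:Bay2:X5", "Enc0:Bay2:X6", "Enc1:Bay1:X5", "Enc1:Bay1:X6"],
    },
    "fibre_channel": {
        # 8Gb SFP+ transceivers on X1/X2 of each module: 16Gb net FC bandwidth per module.
        "fabric_uplinks": ["X1", "X2"],
        "available": ["X3", "X4"],
    },
    "stacking": {"external": ["X7"], "internal": ["X8"]},
}

print(sum(len(v) for v in uplink_plan["ethernet"]["shared_uplink_sets"].values()),
      "Ethernet uplinks in use")
```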

Figure 1-2: Physical design

Figure 1-3: Logical design

Designing a Highly Available Network Strategy with FlexFabric modules and Pass-through VLANs

In this design, two HP ProLiant c-Class 7000 Enclosures with Virtual Connect FlexFabric modules are stacked to form a single Virtual Connect management domain3. By stacking Virtual Connect FlexFabric modules, customers can realize the following benefits:

• Consolidated management control plane
• More efficient use of WWID, MAC and Serial Number Pools
• Greater uplink port flexibility and bandwidth
• Profile management across stacked enclosures

This design does not take into account other physical server instances (i.e., Windows Server). If the design requires support for multiple types of physical OS instances, where the non-hypervisor hosts require access to a specific VLAN, additional uplink ports will be required. This will add additional cost and administrative overhead to the overall design.

This design also does not take into account where multiple hypervisor hosts will require different Virtual Machine networking. If there is a prerequisite to support this, additional uplink ports will be necessary to tunnel the specific VLAN(s).

By providing two stacked Enclosures, this design allows for not only Virtual Connect FlexFabric module failure, but also Enclosure failure. The uplink ports assigned to each vNet were offset to allow for horizontal redundancy purposes. To reduce transceiver and upstream port cost, a 1Gb SFP transceiver would be used to provide Service Console networking.

IP-based storage (NFS and/or iSCSI) can be dedicated and segregated by a separate vNet and assigned uplink port. This design approach allows administrators to dedicate a network (physically switched, directly connected or logical within a Shared Uplink Set) to provide access to IP-based storage arrays.

3 Only available with Virtual Connect Manager Firmware 2.10 or greater. Please review the Virtual Connect Manager Release Notes for more information regarding domain stacking requirements: http://h18004.www1.hp.com/products/blades/components/c-class-tech-installing.html

Directly connecting an IP-based Storage array has certain limitations:

• Each storage array front-end port will require a unique vNet
• Each defined vNet will require separate server network connections
• You are limited to the number of IP-based arrays based on the number of unassigned uplink ports

Virtual Connect has the capability to create an internal, private network without uplink ports, by using the low latency mid-plane connections to facilitate communication. This vNet can be used for cluster heartbeat networks, or in this case VMotion and/or Fault Tolerance traffic. Traffic will not pass to the upstream switch infrastructure, which will eliminate the bandwidth otherwise consumed.

Figure 1-11: Physical VMware vSphere Cluster Design

Figure 1-12 shows the physical cabling. The Ethernet ports X3 and X4 of the FlexFabric module connect to a redundant pair of Top of Rack (ToR) switches, using LACP (802.3ad) for link redundancy. The ToR switches can be placed End of Row to save on infrastructure cost. Ports X7 are used for vertical External Stacking Links, while X8 are used for Internal Stacking Links.

As noted in the previous section, Virtual Connect FlexFabric Stacking Links will only carry Ethernet traffic, and do not provide any Fibre Channel stacking options. Thus, ports X1 and X2 from each module are populated with 8Gb SFP+ transceivers, providing 16Gb net FC bandwidth for storage access. If additional FC bandwidth is necessary, the transceiver installed in X3 will need to be moved to X6, while an 8Gb SFP+ transceiver would be installed into X3 on all modules.

Important: This design uses X1-X3 for FC connectivity; no additional FC ports can be used.

If additional Ethernet bandwidth is necessary, ports Enc0:Bay2:X3, Enc0:Bay2:X4, Enc1:Bay1:X3, and Enc1:Bay1:X4 can be used for additional Ethernet Networks or Shared Uplink Sets.

Figure 1-12: Physical design

Figure 1-13: Logical design

Designing a vSphere Network Architecture with the Virtual Connect FlexFabric module
With the introduction of VMware vSphere 4, customers are now able to centrally manage the network configuration within the hypervisor. This new feature, called vNetwork Distributed Switch4 (vDS), allows an administrator to create a centralized distributed vSwitch. Port Groups are still utilized in this new model, but have a different association to host uplink ports. Host uplink ports are added to Uplink Groups (dvUplinkGroup), where a logical association between the dvUplinkGroup and a PortGroup (dvPortGroup) is formed. vDS can service any of the vmkernel functions: Service Console, VMotion, IP Storage, and Virtual Machine traffic.

In this chapter, we will outline the overall vDS design. The vDS design will remain the same, regardless of the Virtual Connect design chosen.

vNetwork Distributed Switch Design

Each pair of redundant pNICs will be assigned to dvUplinkGroups, which are then assigned to a specific vDS. This will simplify host network configuration, while providing all of the benefits of a vDS. Table 2-1 and Figure 2-1 show which hypervisor networking functions will be assigned to which vDS configuration.
Table 2-1 VMware vDS Configuration

Vmkernel Function | vDS Name | dvPortGroup Name
Service Console | dvs1_mgmt | dvPortGroup1_Mgmt
VMotion | dvs2_vmkernel | dvPortGroup2_vmkernel
VM Networking1 | dvs3_vmnet | dvPortGroup3_vmnet100
VM NetworkingN | dvs3_vmnet | dvPortGroupN_vmnetNNN

4 Requires vSphere 4.0 Enterprise Plus licensing

Figure 2-1: Hypervisor Networking Design
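
The layout in Table 2-1 can also be expressed as a simple data structure. The Python sketch below is illustrative only: the dvUplinkGroup contents and vmnic numbering are assumptions rather than values prescribed by this paper, and the structure is merely a convenient way to describe the design.

```python
# Illustrative model of the vDS design in Table 2-1. Each vDS gets a redundant
# pair of pNICs (FlexNICs presented to ESX) via a dvUplinkGroup; the vmnic
# numbering and pairings shown here are hypothetical examples.
vds_design = {
    "dvs1_mgmt": {
        "dvUplinkGroup": {"uplinks": ["vmnic0", "vmnic4"]},
        "dvPortGroups": {"dvPortGroup1_Mgmt": {"function": "Service Console"}},
    },
    "dvs2_vmkernel": {
        "dvUplinkGroup": {"uplinks": ["vmnic1", "vmnic5"]},
        "dvPortGroups": {"dvPortGroup2_vmkernel": {"function": "VMotion"}},
    },
    "dvs3_vmnet": {
        "dvUplinkGroup": {"uplinks": ["vmnic2", "vmnic6"]},
        "dvPortGroups": {
            "dvPortGroup3_vmnet100": {"function": "VM Networking", "vlan": 100},
            # ...one dvPortGroup per VM VLAN (dvPortGroupN_vmnetNNN)
        },
    },
}

for vds, cfg in vds_design.items():
    print(vds, "uplinks:", cfg["dvUplinkGroup"]["uplinks"],
          "portgroups:", list(cfg["dvPortGroups"]))
```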

VMware Fault Tolerance (FT) could introduce more complexity into the overall design. VMware states that a single 1Gb NIC should be dedicated for FT logging, which would have the potential to starve any pNIC shared with another vmkernel function (i.e., VMotion traffic). FT has not been taken into consideration within this document. Even though FT could be shared with another vmkernel function, if FT is a design requirement, then the overall impact of its inclusion should be examined.

vSphere 4.1 introduces a new feature called Network I/O Control5 (NetIOC), which provides QoS-like capabilities within the vDS. NetIOC can identify the following types of traffic:

5 http://www.vmware.com/files/pdf/techpaper/VMW_Netioc_BestPractices.pdf

• Virtual Machine
• FT Logging
• iSCSI
• NFS
• Service Console Management
• vMotion

NetIOC can be used to control identified traffic, when multiple types of traffic are sharing the same
pNIC. In our design example, FT Logging could share the same vDS as the vmkernel, and NetIOC
would be used to control the two types of traffic.
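
As a rough illustration of how share-based controls divide a congested pNIC, the following Python sketch splits link bandwidth among active traffic types in proportion to assigned shares. The share values are arbitrary examples and the model is deliberately simplified (NetIOC also supports limits and operates per host and per pNIC).

```python
# Simplified illustration of share-based bandwidth division under contention.
# Share values below are arbitrary examples, not recommendations.

def divide_bandwidth(link_gbps, shares, active):
    """Split link_gbps among the active traffic types proportionally to their shares."""
    total = sum(shares[t] for t in active)
    return {t: round(link_gbps * shares[t] / total, 2) for t in active}

shares = {"VM": 50, "FT Logging": 50, "vMotion": 25}
# FT Logging and vMotion sharing a 10Gb pNIC while VM traffic is idle:
print(divide_bandwidth(10, shares, ["FT Logging", "vMotion"]))
# {'FT Logging': 6.67, 'vMotion': 3.33}
```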

With the design example given, there are three options one could choose to incorporate FT Logging:
Table 2-2 VMware Fault Tolerance Options

FT Design Choice | Justification | Rating
Share with VMotion network | The design choice to keep VMotion traffic internal to the Enclosure allows the use of low latency links for inter-Enclosure communication. By giving enough bandwidth for VMotion and FT traffic, while defining a NetIOC policy, latency should not be an issue. | ***
Non-redundant VMotion and FT networks | Dedicate one pNIC for VMotion traffic, and the other for FT logging traffic. Neither network will provide pNIC redundancy. | **
Add additional FlexFabric Adapters and Modules | This option increases the overall CapEx of the solution, but will provide more bandwidth options. | *

Hypervisor Load Balancing Algorithms

VMware provides a number of different NIC teaming algorithms, which are outlined in Table 2-3. As the table shows, any of the available algorithms can be used except IP Hash. IP Hash requires switch-assisted load balancing (802.3ad), which Virtual Connect does not support on server downlink ports. HP and VMware recommend using Originating Virtual Port ID, as shown in Table 2-3.
Table 2-3 VMware Load Balancing Algorithms

Name | Algorithm | Works with VC
Originating Virtual Port ID | Choose an uplink based on the virtual port where the traffic entered the virtual switch. | Yes
Source MAC Address | MAC Address seen on the vnic port. | Yes
IP Hash | Hash of Source and Destination IPs. Requires switch-assisted load balancing, 802.3ad. Virtual Connect does not support 802.3ad on server downlink ports, as 802.3ad is a point-to-point bonding protocol. | No
Explicit Failover | Highest order uplink from the list of Active pNICs. | Yes
HP Virtual Connect and DCC

Virtual Connect firmware v2.30 introduced Device Control Channel (DCC) support to enable Smart Link, Dynamic Bandwidth Allocation, and Network Assignment to FlexNICs without powering off the server. There are three components required for DCC (a simple version check is sketched after this list):

• NIC Firmware (Bootcode 5.0.11 or newer)
• NIC Driver (Windows Server v5.0.32.0 or newer; Linux 5.0.19-1 or newer; VMware ESX 4.0 v1.52.12.v40.3; VMware ESX 4.1 v1.60)
• Virtual Connect Firmware (v2.30 or newer)
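
A quick way to sanity-check an environment against the three DCC requirements above is a small version comparison, sketched below in Python. The component names, inventory values, and the simple dotted-version comparison are illustrative assumptions; vendor-specific driver strings such as the ESX 4.0 driver above would need their own parsing.

```python
# Illustrative check of the DCC minimums listed above. Only plain dotted
# versions are compared here; vendor-specific strings (e.g. "1.52.12.v40.3")
# would need their own parsing in practice.

def version_tuple(v):
    return tuple(int(p) for p in v.split(".") if p.isdigit())

DCC_MINIMUMS = {
    "nic_bootcode": "5.0.11",
    "vc_firmware": "2.30",
    "esx41_nic_driver": "1.60",
}

def meets_dcc_minimums(inventory):
    """inventory maps a component name to its installed dotted version string."""
    return {name: version_tuple(inventory.get(name, "0")) >= version_tuple(minimum)
            for name, minimum in DCC_MINIMUMS.items()}

print(meets_dcc_minimums({"nic_bootcode": "5.2.7",
                          "vc_firmware": "3.15",
                          "esx41_nic_driver": "1.60"}))
# {'nic_bootcode': True, 'vc_firmware': True, 'esx41_nic_driver': True}
```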

Appendix A: Virtual Connect Bill of Materials
Table A-1 Virtual Connect FlexFabric module Mapped VLAN BoM

Part number Description Qty

571956-B21 HP Virtual Connect FlexFabric Module 4

AJ716A HP StorageWorks 8Gb B-series SW SFP+ 8

487649-B21 .5m 10Gb SFP+ DAC Stacking Cable 2

455883-B21 10Gb SR SFP+ transceiver 4

Or

487655-B21 3m SFP+ 10Gb Copper DAC 4

Table A-2 Virtual Connect FlexFabric module Tunneled VLAN BoM

Part number Description Qty

571956-B21 HP Virtual Connect FlexFabric Ethernet Module 4

487649-B21 .5m 10Gb SFP+ DAC Stacking Cable 2

AJ716A HP StorageWorks 8Gb B-series SW SFP+ 8

453154-B21 1Gb RJ45 SFP transceiver 2

455883-B21 10Gb SR SFP+ transceiver 4

Or

487655-B21 3m SFP+ 10Gb Copper DAC 4

Appendix B: Terminology cross-reference
Table B-1 Terminology cross-reference

Customer term | Industry term | IEEE term | Cisco term | Nortel term | HP Virtual Connect term
Port Bonding or Virtual Port or LACP | Port Aggregation or Port-trunking | 802.3ad LACP | EtherChannel or channeling (PAgP) | MultiLink Trunking (MLT) | 802.3ad LACP
VLAN Tagging | VLAN Trunking | 802.1Q | Trunking | 802.1Q | Shared Uplink Set

Appendix C: Glossary of Terms
Table C-1 Glossary

Term Definition
vNet/Virtual Connect Ethernet A standard Ethernet Network consists of a single broadcast
Network domain. However, when “VLAN Tunnelling” is enabled within
the Ethernet Network, VC will treat it as an 802.1Q Trunk port,
and all frames will be forwarded to the destined host untouched.
Shared Uplink Set (SUS) An uplink port or a group of uplink ports, where the upstream
switch port(s) is configured as an 802.1Q trunk. Each associated
Virtual Connect Network within the SUS is mapped to a specific
VLAN on the external connection, where VLAN tags are removed
or added as Ethernet frames enter or leave the Virtual Connect
domain.
Auto Port Speed** Let VC automatically determine best FlexNIC speed
Custom Port Speed** Manually set FlexNIC speed (up to Maximum value defined)
DCC** Device Control Channel: a method for VC to change a FlexNIC or FlexHBA Adapter port's settings on the fly (without power on/off)
EtherChannel* A Cisco proprietary technology that combines multiple NIC or
switch ports for greater bandwidth, load balancing, and
redundancy. The technology allows for bi-directional aggregated
network traffic flow.
FlexNIC** One of four virtual NIC partitions available per FlexFabric
Adapter port. Each capable of being tuned from 100Mb to
10Gb
FlexHBA*** The second Physical Function providing an HBA for either Fibre
Channel or iSCSI functions
IEEE 802.1Q An industry standard protocol that enables multiple virtual
networks to run on a single link/port in a secure fashion through
the use of VLAN tagging.
IEEE 802.3ad An industry standard protocol that allows multiple links/ports to
run in parallel, providing a virtual single link/port. The protocol
provides greater bandwidth, load balancing, and redundancy.
LACP Link Aggregation Control Protocol (see IEEE802.3ad)
LOM LAN-on-Motherboard. Embedded network adapter on the system
board
Maximum Link Connection Speed** Maximum FlexNIC speed value assigned to a vNet by the network administrator. Can NOT be manually overridden on the server profile.
Multiple Networks Link Speed Settings** Global Preferred and Maximum FlexNIC speed values that override defined vNet values when multiple vNets are assigned to the same FlexNIC.
MZ1 or MEZZ1; LOM Mezzanine Slot 1; LAN on Motherboard/system board NIC

Network Teaming Software Software that runs on a host, allowing multiple network
interface ports to be combined to act as a single virtual port. The
software provides greater bandwidth, load balancing, and
redundancy.
pNIC Physical NIC port. A FlexNIC is seen by VMware as a pNIC
Port Aggregation Combining ports to provide one or more of the following benefits:
greater bandwidth, load balancing, and redundancy.
Port Aggregation Protocol (PAgP)* A Cisco proprietary protocol that aids in the automatic creation of Fast EtherChannel links. PAgP packets are sent between Fast EtherChannel-capable ports to negotiate the forming of a channel.
Port Bonding A term typically used in the Unix/Linux world that is synonymous
to NIC teaming in the Windows world.
Preferred Link Connection Speed** Preferred FlexNIC speed value assigned to a vNet by the network administrator.
Trunking (Cisco) 802.1Q VLAN tagging
Trunking (Industry) Combining ports to provide one or more of the following benefits:
greater bandwidth, load balancing, and redundancy. See also
Port Aggregation.
VLAN A virtual network within a physical network.
VLAN Tagging Tagging/marking an Ethernet frame with an identity number
representing a virtual network.
VLAN Trunking Protocol (VTP)* A Cisco proprietary protocol used for configuring and
administering VLANs on Cisco network devices.
vNIC Virtual NIC port. A software-based NIC used by VMs

*The feature is not supported by Virtual Connect. **The feature was added for Virtual Connect Flex-10
***The feature was added for Virtual Connect FlexFabric modules

For more information
To read more about the Virtual Connect FlexFabric module, go to www.hp.com/go/virtualconnect.

© Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
c02505638, August 2010
