
VMware vSphere

Distributed Switch
Best Practices
TECHNICAL WHITE PAPER
Table of Contents
Introduction
Design Considerations
  Infrastructure Design Goals
  Infrastructure Component Configurations
  Virtual Infrastructure Traffic
Example Deployment Components
  Hosts
  Clusters
  VMware vCenter Server
  Network Infrastructure
  Virtual Infrastructure Traffic Types
Important Virtual and Physical Switch Parameters
  VDS Parameters
    Host Uplink Connections (vmnics) and dvuplink Parameters
    Traffic Types and dvportgroup Parameters
    dvportgroup Specific Configuration
    NIOC
    Bidirectional Traffic Shaping
  Physical Network Switch Parameters
    VLAN
    Spanning Tree Protocol
    Link Aggregation Setup
    Link-State Tracking
    Maximum Transmission Unit
Rack Server in Example Deployment
  Rack Server with Eight 1GbE Network Adaptors
    Design Option 1: Static Configuration
      dvuplink Configuration
      dvportgroup Configuration
      Physical Switch Configuration
    Design Option 2: Dynamic Configuration with NIOC and LBT
      dvportgroup Configuration
  Rack Server with Two 10GbE Network Adaptors
    Design Option 1: Static Configuration
      dvuplink Configuration
      dvportgroup Configuration
      Physical Switch Configuration
    Design Option 2: Dynamic Configuration with NIOC and LBT
      dvportgroup Configuration
Blade Server in Example Deployment
  Blade Server with Two 10GbE Network Adaptors
    Design Option 1: Static Configuration
    Design Option 2: Dynamic Configuration with NIOC and LBT
  Blade Server with Hardware-Assisted Logical Network Adaptors (HP Flex-10 or Cisco UCS-like Deployment)
Operational Best Practices
  VMware vSphere Command-Line Interface
  VMware vSphere API
  Virtual Network Monitoring and Troubleshooting
  vCenter Server on a Virtual Machine
Conclusion
Introduction
This paper provides best practice guidelines for deploying the VMware vSphere distributed switch (VDS) in a vSphere environment. The advanced capabilities of VDS provide network administrators with more control of and visibility into their virtual network infrastructure. This document covers the different considerations that vSphere and network administrators must take into account when designing the network with VDS. It also discusses some standard best practices for configuring VDS features.

The paper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure. It is important to note that customers are not limited to the design options described in this paper. The flexibility of the vSphere platform allows for multiple variations in the design options that can fulfill an individual customer's unique network infrastructure needs.

This document is intended for vSphere and network administrators interested in understanding and deploying VDS in a virtual datacenter environment. With the release of vSphere 5, there are new features as well as enhancements to the existing features in VDS. To learn more about these new features and enhancements, refer to the "What's New in Networking" paper: http://www.vmware.com/resources/techresources/10194.

Readers are also encouraged to review basic virtual and physical networking concepts before reading through this document. The following link provides technical resources for virtual networking concepts:
http://www.vmware.com/technical-resources/virtual-networking/resources.html

For physical networking concepts, readers should refer to any physical network switch vendor's documentation.
Design Considerations
The following three main aspects influence the design of a virtual network infrastructure:
1) Customers' infrastructure design goals
2) Customers' infrastructure component configurations
3) Virtual infrastructure traffic requirements
Let's take a look at each of these aspects in a little more detail.
Infrastructure Design Goals
Customers want their network infrastructure to be available 24/7, to be secure from any attacks, to perform efficiently throughout day-to-day operations, and to be easy to maintain. In the case of a virtualized environment, these requirements become increasingly demanding as growing numbers of business-critical applications run in a consolidated setting. These requirements on the infrastructure translate into design decisions that should incorporate the following best practices for a virtual network infrastructure:
• Avoid any single point of failure in the network
• Isolate each traffic type for increased resiliency and security
• Make use of traffic management and optimization capabilities
Infrastructure Component Configurations
In every customer environment, the utilized compute and network infrastructures differ in terms of configuration, capacity and feature capabilities. These different infrastructure component configurations influence the virtual network infrastructure design decisions. The following are some of the configurations and features that administrators must look out for:
• Server configuration: rack or blade servers
• Network adaptor configuration: 1GbE or 10GbE network adaptors, number of available adaptors, offload function on these adaptors, if any
• Physical network switch infrastructure capabilities: switch clustering

It is impossible to cover all the different virtual network infrastructure design deployments based on the various combinations of type of servers, network adaptors and network switch capability parameters. In this paper, the following four commonly used deployments that are based on standard rack server and blade server configurations are described:
• Rack server with eight 1GbE network adaptors
• Rack server with two 10GbE network adaptors
• Blade server with two 10GbE network adaptors
• Blade server with hardware-assisted multiple logical Ethernet network adaptors

It is assumed that the network switch infrastructure has standard layer 2 switch features (high availability, redundant paths, fast convergence, port security) available to provide reliable, secure and scalable connectivity to the server infrastructure.
Virtual Infrastructure Traffic
vSphere virtual network infrastructure carries different traffic types. To manage the virtual infrastructure traffic effectively, vSphere and network administrators must understand the different traffic types and their characteristics. The following are the key traffic types that flow in the vSphere infrastructure, along with their traffic characteristics:
• Management traffic: This traffic flows through a vmknic and carries VMware ESXi host-to-VMware vCenter configuration and management communication as well as ESXi host-to-ESXi host high availability (HA)-related communication. This traffic has low network utilization but has very high availability and security requirements.
• VMware vSphere vMotion traffic: With advancement in vMotion technology, a single vMotion instance can consume almost a full 10Gb bandwidth. A maximum of eight simultaneous vMotion instances can be performed on a 10Gb uplink; four simultaneous vMotion instances are allowed on a 1Gb uplink. vMotion traffic has very high network utilization and can be bursty at times. Customers must make sure that vMotion traffic doesn't impact other traffic types, because it might consume all available I/O resources. Another property of vMotion traffic is that it is not sensitive to throttling and makes a very good candidate on which to perform traffic management.
• Fault-tolerant traffic: When VMware Fault Tolerance (FT) logging is enabled for a virtual machine, all the logging traffic is sent to the secondary fault-tolerant virtual machine over a designated vmknic port. This process can require a considerable amount of bandwidth at low latency because it replicates the I/O traffic and memory-state information to the secondary virtual machine.
• iSCSI/NFS traffic: IP storage traffic is carried over vmknic ports. This traffic varies according to disk I/O requests. With end-to-end jumbo frame configuration, more data is transferred with each Ethernet frame, decreasing the number of frames on the network. This larger frame reduces the overhead on servers/targets and improves the IP storage performance. On the other hand, congested and lower-speed networks can cause latency issues that disrupt access to IP storage. It is recommended that users provide a high-speed path for IP storage and avoid any congestion in the network infrastructure.
• Virtual machine traffic: Depending on the workloads that are running on the guest virtual machines, the traffic patterns will vary from low to high network utilization. Some of the applications running in virtual machines might be latency sensitive, as is the case with VoIP workloads.
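The jumbo-frame point made for IP storage traffic above is easy to quantify. The following is a rough sketch (it ignores per-frame protocol headers, which would only strengthen the effect):

```python
import math

def frames_needed(payload_bytes, mtu):
    """Ethernet frames needed to move a payload at a given MTU.

    A rough count that ignores per-frame protocol header overhead;
    it still shows why 9000-byte jumbo frames cut the per-frame
    processing load on servers and storage targets.
    """
    return math.ceil(payload_bytes / mtu)

one_megabyte = 1_000_000
standard = frames_needed(one_megabyte, 1500)  # 667 frames
jumbo = frames_needed(one_megabyte, 9000)     # 112 frames
```

Moving 1MB takes roughly one sixth as many frames at a 9000-byte MTU, which is where the reduction in server/target overhead comes from.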
Table 1 summarizes the characteristics of each traffic type.

TRAFFIC TYPE      BANDWIDTH USAGE          OTHER TRAFFIC REQUIREMENTS
Management        Low                      Highly reliable and secure channel
vMotion           High                     Isolated channel
FT                Medium to high           Highly reliable, low-latency channel
iSCSI             High                     Reliable, high-speed channel
Virtual machine   Depends on application   Depends on application

Table 1. Traffic Types and Characteristics
To understand the different traffic flows in the physical network infrastructure, network administrators use network traffic management tools. These tools help monitor the physical infrastructure traffic but do not provide visibility into virtual infrastructure traffic. With the release of vSphere 5, VDS now supports the NetFlow feature, which enables exporting the internal (virtual machine-to-virtual machine) virtual infrastructure flow information to standard network management tools. Administrators now have the required visibility into virtual infrastructure traffic. This helps administrators monitor the virtual network infrastructure traffic through a familiar set of network management tools. Customers should make use of the network data collected from these tools during the capacity planning or network design exercises.
Example Deployment Components
After looking at the different design considerations, this section provides a list of components that are used in an example deployment. This example deployment helps illustrate some standard VDS design approaches. The following are some common components in the virtual infrastructure. The list doesn't include the storage components that are required to build the virtual infrastructure. It is assumed that customers will deploy IP storage in this example deployment.
Hosts
Four ESXi hosts provide compute, memory and network resources according to the configuration of the hardware. Customers can have different numbers of hosts in their environment, based on their needs. One VDS can span across 350 hosts. This capability to support large numbers of hosts provides the required scalability to build a private or public cloud environment using VDS.
Clusters
A cluster is a collection of ESXi hosts and associated virtual machines with shared resources. Customers can have as many clusters in their deployment as are required. With one VDS spanning across 350 hosts, customers have the flexibility of deploying multiple clusters with a different number of hosts in each cluster. For simple illustration purposes, two clusters with two hosts each are considered in this example deployment. One cluster can have a maximum of 32 hosts.
VMware vCenter Server
VMware vCenter Server centrally manages a vSphere environment. Customers can manage VDS through this centralized management tool, which can be deployed on a virtual machine or a physical host. The vCenter Server system is not shown in the diagrams, but customers should assume that it is present in this example deployment. It is used only to provision and manage VDS configuration. When provisioned, hosts and virtual machine networks operate independently of vCenter Server. All components required for network switching reside on ESXi hosts. Even if the vCenter Server system fails, the hosts and virtual machines will still be able to communicate.
Network Infrastructure
||y:|ca|nevor|:v|c|e:|n|eacce::andaqqreqa|on|ayerorov|deconnec|v|ybeveen|SX||o::ando
the external world. These network infrastructure components support standard layer 2 protocols providing
secure and reliable connectivity.
A|onqv|||eoreced|nqoJrconoonen:o|eo|y:|ca||nra:rJcJre|n||:exano|edeo|oynen:oneo
the virtual infrastructure tra c types are also considered during the design. The following section describes the
dierent tra c types in the example deployment.
Virtual Infrastructure Traffic Types
In this example deployment, there are standard infrastructure traffic types, including iSCSI, vMotion, FT, management and virtual machine. Customers might have other traffic types in their environment, based on their choice of storage infrastructure (FC, NFS, FCoE). Figure 1 shows the different traffic types along with associated port groups on an ESXi host. It also shows the mapping of the network adaptors to the different port groups.
Figure 1. Different Traffic Types Running on a Host (diagram: an ESXi host whose VDS carries iSCSI, FT, management and vMotion traffic on vmknics vmk1-vmk4, plus virtual machine traffic, through port groups PG-A to PG-E)
Important Virtual and Physical Switch Parameters
Before going into the different design options in the example deployment, let's take a look at the virtual and physical network switch parameters that should be considered in all of the design options. These are some key parameters that vSphere and network administrators must take into account when designing VMware virtual networking. Because the configuration of virtual networking goes hand in hand with the physical network configuration, this section will cover both the virtual and physical switch parameters.
VDS Parameters
VDS simplifies the challenges of the configuration process by providing one single pane of glass to perform virtual network management tasks. As opposed to configuring a vSphere standard switch (VSS) on each individual host, administrators can configure and manage one single VDS. All centrally configured network policies on VDS get pushed down to the host automatically when the host is added to the distributed switch. In this section, an overview of key VDS parameters is provided.
Host Uplink Connections (vmnics) and dvuplink Parameters
VDS has a new abstraction called dvuplink for the physical Ethernet network adaptors (vmnics) on each host. It is defined during the creation of the VDS and can be considered as a template for individual vmnics on each host. All the properties, including network adaptor teaming, load balancing and failover policies, on VDS and dvportgroups are configured on dvuplinks. These dvuplink properties are automatically applied to vmnics on individual hosts when a host is added to the VDS and when each vmnic on the host is mapped to a dvuplink. This dvuplink abstraction therefore provides the advantage of consistently applying teaming and failover configurations to all the hosts' physical Ethernet network adaptors (vmnics).

Figure 2 shows two ESXi hosts with four Ethernet network adaptors each. When these hosts are added to the VDS, with four dvuplinks configured on a dvuplink port group, administrators must assign the network adaptors (vmnics) of the hosts to dvuplinks. To illustrate the mapping of the dvuplinks to vmnics, Figure 2 shows one type of mapping, where ESXi host vmnic0 is mapped to dvuplink1, vmnic1 to dvuplink2, and so on. Customers can choose a different mapping, if required, where vmnic0 can be mapped to a different dvuplink instead of dvuplink1. VMware recommends having consistent mapping across different hosts because it reduces complexity in the environment.
Figure 2. dvuplink-to-vmnic Mapping (diagram: two ESXi hosts on one vSphere Distributed Switch, each host's vmnic0-vmnic3 assigned to dvuplink1-dvuplink4 in the dvuplink port group)
As a best practice, customers should also try to deploy hosts with the same number of physical Ethernet network adaptors and with similar port speeds. Also, because the number of dvuplinks on a VDS depends on the maximum number of physical Ethernet network adaptors on a host, administrators should take that into account during dvuplink port group configuration. Customers always have an option to modify this dvuplink configuration based on new hardware capabilities.
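The consistent-mapping recommendation can be checked mechanically when hosts are added. The sketch below is illustrative only (the host names and mapping dictionaries are hypothetical stand-ins, not a vSphere API):

```python
def mapping_is_consistent(hosts):
    """Return True when every host assigns its vmnics to dvuplinks identically.

    `hosts` maps a host name to its {vmnic: dvuplink} assignment.
    Keeping the same assignment on every host reduces complexity,
    per the recommendation above.
    """
    mappings = list(hosts.values())
    return all(m == mappings[0] for m in mappings[1:])

# Both hosts use the same vmnic0->dvuplink1, ... assignment.
hosts = {
    "esxi-host1": {"vmnic0": "dvuplink1", "vmnic1": "dvuplink2",
                   "vmnic2": "dvuplink3", "vmnic3": "dvuplink4"},
    "esxi-host2": {"vmnic0": "dvuplink1", "vmnic1": "dvuplink2",
                   "vmnic2": "dvuplink3", "vmnic3": "dvuplink4"},
}
```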
Traffic Types and dvportgroup Parameters
Similar to port groups on standard switches, dvportgroups define how the connection is made through the VDS to the network. The VLAN ID, traffic shaping, port security, teaming and load balancing parameters are configured on these dvportgroups. The virtual ports (dvports) connected to a dvportgroup share the same properties configured on the dvportgroup. When customers want a group of virtual machines to share the security and teaming policies, they must make sure that the virtual machines are part of one dvportgroup. Customers can choose to define different dvportgroups based on the different traffic types they have in their environment, or based on the different tenants or applications they support in the environment. If desired, multiple dvportgroups can share the same VLAN ID.

In this example deployment, the dvportgroup classification is based on the traffic types running in the virtual infrastructure. After administrators understand the different traffic types in the virtual infrastructure and identify specific security, reliability and performance requirements for individual traffic types, the next step is to create unique dvportgroups associated with each traffic type. As was previously mentioned, the dvportgroup configuration defined at VDS level is automatically pushed down to every host that is added to the VDS. For example, in Figure 2, the two dvportgroups PG-A (yellow) and PG-B (green) defined at the distributed switch level are available on each of the ESXi hosts that are part of the VDS.
dvportgroup Specific Configuration
After customers decide on the number of unique dvportgroups they want to create in their environment, they can start configuring them. The configuration options/parameters are similar to those available with port groups on vSphere standard switches. There are some additional options available on VDS dvportgroups that are related to teaming setup and are not available on vSphere standard switches. Customers can configure the following key parameters for each dvportgroup:
• Number of virtual ports (dvports)
• Port binding (static, dynamic, ephemeral)
• VLAN, trunking, private VLANs
• Teaming and load balancing, along with active and standby links
• Bidirectional traffic shaping parameters
• Port security
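To see this parameter set at a glance, the key dvportgroup settings can be modeled as a plain record. This is only a conceptual sketch; the field names are invented for illustration and are not vSphere API properties:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DvPortgroupConfig:
    """Conceptual record of the key dvportgroup parameters listed above."""
    name: str
    num_dvports: int = 128
    port_binding: str = "static"          # static | dynamic | ephemeral
    vlan_id: Optional[int] = None         # VLAN, trunk range or private VLAN
    active_uplinks: List[str] = field(default_factory=list)
    standby_uplinks: List[str] = field(default_factory=list)
    load_balancing: str = "load-based"    # e.g. load-based teaming (LBT)
    mac_address_changes: str = "reject"   # port security settings
    forged_transmits: str = "reject"
    promiscuous_mode: str = "reject"

# One dvportgroup per traffic type, e.g. a management port group.
pg_mgmt = DvPortgroupConfig(name="PG-Mgmt", vlan_id=10,
                            active_uplinks=["dvuplink1"],
                            standby_uplinks=["dvuplink2"])
```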
As part of the teaming algorithm support, VDS provides a unique approach to load balancing traffic across the teamed network adaptors. This approach is called load-based teaming (LBT), which distributes the traffic across the network adaptors based on the percentage utilization of traffic on those adaptors. The LBT algorithm works on both the ingress and egress direction of the network adaptor traffic, as opposed to the hashing algorithms that work only in the egress direction (traffic flowing out of the network adaptor). Also, LBT prevents the worst-case scenario that might happen with hashing algorithms, where all traffic hashes to one network adaptor of the team while other network adaptors are not used to carry any traffic. To improve the utilization of all the links/network adaptors, VMware recommends the use of this advanced feature (LBT) of VDS. The LBT approach is recommended over the EtherChannel configuration on physical switches and the route based on IP hash configuration on the virtual switch.
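The LBT idea (move a flow off an uplink only when that uplink's measured utilization stays high, otherwise leave it alone) can be sketched as follows. The 75 percent threshold here is illustrative; the actual trigger and averaging window are internal to the ESXi implementation:

```python
def lbt_pick_uplink(current, utilization_pct, threshold=75):
    """Sketch of a load-based teaming decision for one flow.

    `utilization_pct` maps uplink -> mean utilization in percent.
    If the flow's current uplink is at or below the threshold, keep the
    flow where it is (avoiding needless remapping); otherwise move it
    to the least-utilized uplink in the team.
    """
    if utilization_pct[current] <= threshold:
        return current
    return min(utilization_pct, key=utilization_pct.get)

# vmnic0 is saturated, so the flow moves to the mostly idle vmnic2.
new_uplink = lbt_pick_uplink("vmnic0", {"vmnic0": 92, "vmnic1": 60, "vmnic2": 15})
```

Note how this differs from a static hash: the decision uses measured load, so no uplink sits idle while another saturates.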
Port security policies at the port group level enable customer protection from certain activity that might compromise security. For example, a hacker might impersonate a virtual machine and gain unauthorized access by spoofing the virtual machine's MAC address. VMware recommends setting the MAC Address Changes and Forged Transmits to "Reject," which helps protect against attacks launched by a rogue guest operating system. Customers should set the Promiscuous Mode to "Reject" unless they want to monitor the traffic for network troubleshooting or intrusion detection purposes.
NIOC
Network I/O Control (NIOC) is the traffic management capability available on VDS. The NIOC concept revolves around resource pools that are similar in many ways to the ones existing for CPU and memory. vSphere and network administrators now can allocate I/O shares to different traffic types, similarly to allocating CPU and memory resources to a virtual machine. The share parameter specifies the relative importance of a traffic type over other traffic and provides a guaranteed minimum when the other traffic competes for a particular network adaptor. The shares are specified in abstract units numbered 1 to 100. Customers can provision shares to different traffic types based on the amount of resources each traffic type requires.

This capability of provisioning I/O resources is very useful in situations where there are multiple traffic types competing for resources. For example, in a deployment where vMotion and virtual machine traffic types are flowing through one network adaptor, it is possible that vMotion activity might impact the virtual machine traffic performance. In this situation, shares configured in NIOC provide the required isolation to the vMotion and virtual machine traffic types and prevent one type of traffic from dominating the other flows. NIOC configuration provides one more parameter that customers can utilize if they want to put any limits on a particular traffic type. This parameter is called the limit. The limit configuration specifies the absolute maximum bandwidth for a traffic type on a host. The configuration of the limit parameter is specified in Mbps. NIOC limits and shares parameters work only on the outbound traffic (the traffic that is flowing out of the ESXi host).

VMware recommends that customers utilize this traffic management feature whenever they have multiple traffic types flowing through one network adaptor, a situation that is more prominent with 10 Gigabit Ethernet (10GbE) network deployments but can happen in 1GbE network deployments as well. The common use case for using NIOC in 10GbE network adaptor deployments is when the traffic from different workloads or different customer virtual machines is carried over the same network adaptor. As multiple workload traffic flows through a network adaptor, it becomes important to provide I/O resources based on the needs of the workload. With the release of vSphere 5, customers now can make use of the new user-defined network resource pools capability and can allocate I/O resources to the different workloads or different customer virtual machines, depending on their needs. This user-defined network resource pools feature provides the granular control in allocating I/O resources and meeting the service-level agreement (SLA) requirements for the virtualized tier-1 workloads.
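The share arithmetic works like CPU shares: under contention, each traffic type is guaranteed a fraction of the adaptor proportional to its shares, and a limit caps the type in absolute Mbps regardless of free bandwidth. A sketch of that allocation (the share and limit values below are made-up examples, not recommendations):

```python
def nioc_guarantees(shares, adaptor_mbps, limits=None):
    """Guaranteed minimum bandwidth (Mbps) per traffic type under contention.

    `shares` maps traffic type -> relative share (abstract units 1-100).
    `limits` maps traffic type -> absolute cap in Mbps, applied on top
    of the proportional guarantee.
    """
    limits = limits or {}
    total = sum(shares.values())
    out = {}
    for traffic, share in shares.items():
        guarantee = adaptor_mbps * share / total
        out[traffic] = min(guarantee, limits.get(traffic, guarantee))
    return out

# One 10GbE adaptor: shares alone would entitle vMotion to 5,000Mbps,
# but a 3,000Mbps limit caps it.
alloc = nioc_guarantees({"mgmt": 10, "vmotion": 50, "vm": 40},
                        adaptor_mbps=10_000, limits={"vmotion": 3_000})
```

Remember that shares only matter when traffic types actually contend for the adaptor; an idle adaptor lets any type use the spare bandwidth, while a limit applies at all times.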
Bidirectional Traffic Shaping
Besides NIOC, there is another traffic-shaping feature that is available in the vSphere platform and can be configured at a dvportgroup or dvport level. Customers can shape both inbound and outbound traffic using three parameters: average bandwidth, peak bandwidth and burst size. Customers who want more granular traffic-shaping controls to manage their traffic types can take advantage of this capability of VDS along with the NIOC feature. It is recommended that the network administrators in your organization be involved with the configuring of these granular traffic parameters. These controls make sense only when there are oversubscription scenarios, caused by an oversubscribed physical switch infrastructure or virtual infrastructure, that are causing network performance issues. So it is very important to understand the physical and virtual network environment before making any bidirectional traffic-shaping configurations.
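The three shaping parameters behave like a token bucket: traffic may exceed the average rate, but never the peak rate, and only until the banked burst allowance is spent. A minimal sketch under that interpretation (the ESXi shaper's exact internals are not described here):

```python
def shape(offered, avg_rate, peak_rate, burst):
    """Token-bucket sketch of bidirectional traffic shaping.

    `offered[i]` is the data offered in interval i. The bucket refills
    at `avg_rate` per interval and never holds more than `burst`;
    `peak_rate` caps any single interval. Returns the data sent per
    interval (all values in one consistent unit, e.g. KB).
    """
    bucket = burst
    sent = []
    for data in offered:
        allowed = min(data, peak_rate, bucket)
        sent.append(allowed)
        bucket = min(bucket - allowed + avg_rate, burst)
    return sent

# A sustained burst is clipped to the peak rate at first, then throttled
# toward the average rate once the burst allowance drains.
result = shape([300, 300, 300], avg_rate=100, peak_rate=150, burst=200)
```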
Physical Network Switch Parameters
The configurations of the VDS and the physical network switch should go hand in hand to provide resilient, secure and scalable connectivity to the virtual infrastructure. The following are some key switch configuration parameters to which the customer should pay attention.
VLAN
VLANs are used to provide logical isolation between different traffic types. It is important to make sure that those VLANs are carried over the physical switch infrastructure. To do so, enable virtual switch tagging (VST) on the virtual switch and trunk all VLANs to the physical switch ports. For security reasons, it is recommended that customers not use the default VLAN 1 for any VMware infrastructure traffic.
Spanning Tree Protocol
Spanning Tree Protocol (STP) is not supported on virtual switches, so no configuration is required on VDS. But it is important to enable this protocol on the physical switches. STP makes sure that there are no loops in the network. As a best practice, customers should configure the following:
• Use PortFast on ESXi host-facing physical switch ports. With this setting, network convergence on these switch ports will take place quickly after a failure, because the port will enter the STP forwarding state immediately, bypassing the listening and learning states.
• Use the PortFast Bridge Protocol Data Unit (BPDU) guard feature to enforce the STP boundary. This configuration protects against any invalid device connection on the ESXi host-facing access switch ports. As was previously mentioned, VDS doesn't support STP, so it doesn't send any BPDU frames to the switch port. However, if any BPDU is seen on these ESXi host-facing access switch ports, the BPDU guard feature puts that particular switch port in error-disabled state. The switch port is completely shut down, which prevents it from affecting the Spanning Tree topology.

The recommendation of enabling PortFast and the BPDU guard feature on the switch ports is valid only when customers connect nonswitching/bridging devices to these ports. The switching/bridging devices can be hardware-based physical boxes or servers running a software-based switching/bridging function. Customers should make sure that there is no switching/bridging function enabled on the ESXi hosts that are connected to the physical switch ports.

However, in the scenario where the ESXi host has a guest virtual machine that is configured to perform a bridging function, the virtual machine will generate BPDU frames and send them out the VDS, which then forwards the BPDU frames through the network adaptor to the physical switch port. When the switch port configured with BPDU guard receives the BPDU frame, the switch will disable the port, and the virtual machine will lose connectivity. To avoid this network failure scenario when running the software bridging function on an ESXi host, customers should disable the PortFast and BPDU guard configuration on the physical switch port and run STP.

If customers are concerned about attacks that can generate BPDU frames, they should make use of VMware vShield App, which can block these frames and protect the virtual infrastructure from such layer 2 attacks. Refer to VMware vShield product documentation for more details on how to secure your vSphere virtual infrastructure: http://www.vmware.com/products/vshield/overview.html.
Link Aggregation Setup
Link aggregation is used to increase throughput and improve resiliency by combining multiple network connections. There are various proprietary solutions on the market, along with the vendor-independent IEEE 802.3ad (LACP) standard-based implementation. All solutions establish a logical channel between the two endpoints, using multiple physical links. In the vSphere virtual infrastructure, the two ends of the logical channel are the VDS and the physical switch. These two switches must be configured with link aggregation parameters before the logical channel is established. Currently, VDS supports static link aggregation configuration and does not provide support for dynamic LACP. When customers want to enable link aggregation on a physical switch, they should configure static link aggregation on the physical switch and select "Route based on IP hash" as the network adaptor teaming on the VDS.
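With "Route based on IP hash," each flow is pinned to one uplink by hashing its source/destination IP pair, so different flows spread across the aggregated links while any single flow stays in order on one link. A simplified sketch of the idea (an XOR-and-modulo hash stands in here; it is not the literal ESXi implementation):

```python
import ipaddress

def ip_hash_uplink(src_ip, dst_ip, num_uplinks):
    """Pick an uplink index from the source/destination IP pair.

    The same pair always hashes to the same uplink, keeping a flow's
    packets in order, while different destinations can land on
    different members of the aggregated team.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# One VM talking to two destinations can use both aggregated links.
a = ip_hash_uplink("10.0.0.5", "10.0.1.1", 2)
b = ip_hash_uplink("10.0.0.5", "10.0.1.2", 2)
```

This determinism is also why the physical switch must be configured for static link aggregation: both ends have to agree that the member links form one channel.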
When establishing the logical channel with multiple physical links, customers should make sure that the Ethernet network adaptor connections from the host are terminated on a single physical switch. However, if customers have deployed clustered physical switch technology, the Ethernet network adaptor connections can be terminated on two different physical switches. The clustered physical switch technology is referred to by different names by networking vendors. For example, Cisco calls their switch clustering solution Virtual Switching System; Brocade calls theirs Virtual Cluster Switching. Refer to the networking vendor guidelines and configuration details when deploying switch clustering technology.
Link-State Tracking
Link-state tracking is a feature available on Cisco switches to manage the link state of downstream ports, ports
connected to servers, based on the status of upstream ports, ports connected to aggregation/core switches.
When there is any failure on the upstream links connected to aggregation or core switches, the associated
downstream link status goes down. The server connected on the downstream link is then able to detect the
failure and reroute the tra c on other working links. This feature therefore provides the protection from network
a||Jre:dJeo|ea||edJo:reanoor:|nnonne:|ooo|oq|e:UnorJnae|y||:eaJre|:noava||ab|eon
all vendors switches, and even if it is available, it might not be referred to as link-state tracking. Customers
should talk to the switch vendors to nd out whether a similar feature is supported on their switches.
Figure 3 shows the resilient mesh topology on the left and a simple loop-free topology on the right. VMware
highly recommends deploying the mesh topology shown on the left, which provides a highly reliable redundant
design and doesn't need a link-state tracking feature. Customers who don't have high-end networking expertise
and are also limited in number of switch ports might prefer the deployment shown on the right. In this
deployment, customers don't have to run STP, because there are no loops in the network design. The downside
of this simple design is seen when there is a failure in the link between the access and aggregation switches.
In that failure scenario, the server will continue to send traffic on the same network adaptor even when the
access layer switch is dropping the traffic at the upstream interface. To avoid this blackholing of server traffic,
customers can enable link-state tracking on the virtual and physical switches and indicate any failure between
access and aggregation switch layers to the server through link-state information.
[Figure: left panel – resilient mesh topology with loops (STP needed); right panel – resilient topology with no loops (no STP, but link-state tracking needed).]
Figure 3. Resilient Loop and No-Loop Topologies
VDS has the default network failover detection configuration set as "Link status only." Customers should keep this
configuration if they are enabling the link-state tracking feature on physical switches. If link-state tracking
capability is not available on physical switches, and there are no redundant paths available in the design,
customers can make use of the beacon probing feature available on VDS. The beacon probing function is a
software solution available on virtual switches for detecting link failures upstream from the access layer physical
switch, to the aggregation/core switches. Beacon probing is most useful with three or more uplinks in a team.
Maximum Transmission Unit
Make sure that the maximum transmission unit (MTU) configuration matches across the virtual and physical
network switch infrastructure.
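A mismatch here (for example, jumbo frames on the vmkernel ports but not on an access switch) shows up as silently dropped large frames. As an illustrative sketch, an administrator could compare the MTU values collected from each hop; the inventory names and values below are hypothetical:

```python
from collections import Counter

def mtu_outliers(mtus: dict[str, int]) -> list[str]:
    """Return the devices whose MTU differs from the most common value
    in the inventory; an empty list means the end-to-end path agrees."""
    most_common_mtu = Counter(mtus.values()).most_common(1)[0][0]
    return sorted(dev for dev, mtu in mtus.items() if mtu != most_common_mtu)

# Hypothetical inventory: the VDS and vmkernel port use jumbo frames,
# but one access switch was left at the default MTU.
inventory = {"vds": 9000, "vmk-iscsi": 9000, "access-sw1": 9000, "access-sw2": 1500}
outliers = mtu_outliers(inventory)
```

In a real environment the values would be gathered from the vSwitch, vmkernel and physical switch configurations rather than hardcoded.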
Rack Server in Example Deployment
After looking at the major components in the example deployment and the key virtual and physical switch
parameters, let's take a look at the different types of servers that customers can have in their environment.
Customers can deploy an ESXi host on either a rack server or a blade server. This section discusses a
deployment in which the ESXi host is running on a rack server. Two types of rack server configuration will be
described in the following section:
• Rack server with eight 1GbE network adaptors
• Rack server with two 10GbE network adaptors
The various VDS design approaches will be discussed for each of the two configurations.
Rack Server with Eight 1GbE Network Adaptors
In a rack server deployment with eight 1GbE network adaptors per host, customers can either use the traditional
static design approach of allocating network adaptors to each traffic type or make use of advanced features of
VDS such as NIOC and LBT. The NIOC and LBT features help provide a dynamic design that efficiently utilizes I/O
resources. In this section, both the traditional and new design approaches are described, along with their pros
and cons.
Design Option 1 – Static Configuration
This design option follows the traditional approach of statically allocating network resources to the different
virtual infrastructure traffic types. As shown in Figure 4, each host has eight Ethernet network adaptors. Four
are connected to the first access layer switch; the other four are connected to the second access layer
switch, to avoid a single point of failure. Let's look in detail at how VDS parameters are configured.
[Figure: two ESXi clusters of rack servers on a vSphere Distributed Switch, each host's eight 1GbE network adaptors split across two access layer switches that connect to the aggregation layer; legend shows port groups PG-A and PG-B.]
Figure 4. Rack Server with Eight 1GbE Network Adaptors
dvuplink Configuration
To support the maximum of eight 1GbE network adaptors per host, the dvuplink port group is configured with
eight dvuplinks. On the hosts, dvuplink1 is associated with vmnic0, dvuplink2 is associated
with vmnic1, and so on. It is a recommended practice to change the names of the dvuplinks to something
meaningful and easy to track. For example, a dvuplink that gets associated with a vmnic on a motherboard can
be renamed as LOM-uplink1, and a dvuplink that gets associated with a vmnic on an expansion card can be
renamed as Expansion-uplink1.
If the hosts have some Ethernet network adaptors as LAN on motherboard (LOM) and some on expansion cards,
for a better resiliency story, VMware recommends selecting one network adaptor from LOM and one from an
expansion card when configuring network adaptor teaming. To configure this teaming on a VDS, administrators
must pay attention to the dvuplink and vmnic association, along with the dvportgroup configuration where network
adaptor teaming is enabled. In the network adaptor teaming configuration on a dvportgroup, administrators
must choose the various dvuplinks that are part of a team. If the dvuplinks are named appropriately according to
the host's vmnic association, administrators can select LOM-uplink1 and Expansion-uplink1 when configuring
the teaming option for a dvportgroup.
dvportgroup Configuration
As described in Table 2, there are five different port groups that are configured for the five different traffic types.
Customers can create up to 5,000 unique port groups per VDS. In this example deployment, the decision on
creating different port groups is based on the number of traffic types.
According to Table 2, dvportgroup PG-A is created for the management traffic type. There are other
dvportgroups defined for the other traffic types. The following are the key configurations of dvportgroup PG-A:
• Teaming option: Explicit failover order provides a deterministic way of directing traffic to a particular uplink.
By selecting dvuplink1 as an active uplink and dvuplink2 as a standby uplink, management traffic will be carried
over dvuplink1 unless there is a failure on dvuplink1. All other dvuplinks are configured as unused. Configuring
the failback option to "No" is also recommended, to avoid the flapping of traffic between two network adaptors.
The failback option determines how a physical adaptor is returned to active duty after recovering from a
failure. If failback is set to "No," a failed adaptor is left inactive even after recovery, until another currently
active adaptor fails and requires a replacement.
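The explicit failover behavior with failback set to "No" can be sketched as a small selection function. This is an illustrative model, not ESXi code:

```python
def select_uplink(order, link_up, current=None):
    """Pick the uplink for the next interval under explicit failover
    order with failback set to "No".

    order   -- uplinks from most to least preferred, e.g. (active, standby)
    link_up -- current link state per uplink
    current -- uplink the traffic is on now, if any
    """
    # Failback = "No": stay on the working uplink, even if a more
    # preferred one has recovered, to avoid flapping.
    if current is not None and link_up.get(current, False):
        return current
    # Otherwise fail over to the first healthy uplink in preference order.
    return next((u for u in order if link_up.get(u, False)), None)

order = ("dvuplink1", "dvuplink2")
up = {"dvuplink1": True, "dvuplink2": True}
sel = select_uplink(order, up)                # starts on dvuplink1
up["dvuplink1"] = False
sel = select_uplink(order, up, current=sel)   # fails over to dvuplink2
up["dvuplink1"] = True
sel = select_uplink(order, up, current=sel)   # stays on dvuplink2: no flapping
```

With failback set to "Yes," the last call would have returned traffic to dvuplink1 immediately, which is exactly the behavior the recommendation avoids.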
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each
dvportgroup.
There are several other parameters that are part of the dvportgroup configuration. Customers can choose
to configure these parameters based on their environment needs. For example, customers can configure
PVLAN to provide isolation when there are limited VLANs available in the environment.
As you follow the dvportgroups' configuration in Table 2, you can see that each traffic type is carried over a
specific dvuplink, with the exception of the virtual machine traffic type. The virtual machine traffic type uses
two active links (dvuplink7 and dvuplink8), and these links are utilized through the LBT algorithm. As was
previously mentioned, the LBT algorithm is much more efficient than the standard hashing algorithm in
utilizing link bandwidth.
TRAFFIC TYPE      PORT GROUP   TEAMING OPTION      ACTIVE UPLINK         STANDBY UPLINK   UNUSED UPLINK
Management        PG-A         Explicit Failover   dvuplink1             dvuplink2        3, 4, 5, 6, 7, 8
vMotion           PG-B         Explicit Failover   dvuplink3             dvuplink4        1, 2, 5, 6, 7, 8
FT                PG-C         Explicit Failover   dvuplink4             dvuplink3        1, 2, 5, 6, 7, 8
iSCSI             PG-D         Explicit Failover   dvuplink5             dvuplink6        1, 2, 3, 4, 7, 8
Virtual Machine   PG-E         LBT                 dvuplink7/dvuplink8   None             1, 2, 3, 4, 5, 6

Table 2. Static Design Configuration
Physical Switch Configuration
The external physical switch, where the rack server's network adaptors are connected, is configured with
trunk configuration with all the appropriate VLANs enabled. As described in the "Physical Network Switch
Parameters" section, the following switch configurations are performed based on the VDS setup described
in Table 2:
• Enable STP on the trunk ports facing the ESXi hosts, along with the PortFast mode and BPDU guard features.
• The teaming configuration on VDS is static, so no link aggregation is configured on the physical switches.
• Because of the mesh topology deployment, as shown in Figure 4, the link-state tracking feature is not required
on the physical switches.
In this design approach, resiliency to the infrastructure traffic is achieved through active/standby uplinks, and
security is accomplished by providing separate physical paths for the different traffic types. However, with this
design, the I/O resources are underutilized because the dvuplink2 and dvuplink6 standby links are not used to
send or receive traffic. Also, there is no flexibility to allocate more bandwidth to a traffic type when it needs it.
There is another variation to the static design approach that addresses the need of some customers to provide
higher bandwidth to the storage and vMotion traffic types. In the static design that was previously described,
iSCSI and vMotion traffic is limited to 1GbE. If a customer wants to support higher bandwidth for iSCSI, they can
make use of the iSCSI multipathing solution. Also, with the release of vSphere 5, vMotion traffic can be carried
over multiple Ethernet network adaptors through the support of multi-network adaptor vMotion, thereby
providing higher bandwidth to the vMotion process.
For more details on how to set up iSCSI multipathing, refer to the VMware vSphere Storage guide:
https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.
The configuration of multi-network adaptor vMotion is quite similar to the iSCSI multipath setup, where
administrators must create two separate vmkernel interfaces and bind each one to a separate dvportgroup.
This configuration with two separate dvportgroups provides the connectivity over two different Ethernet network
adaptors or dvuplinks.
TRAFFIC TYPE      PORT GROUP   TEAMING OPTION      ACTIVE UPLINK         STANDBY UPLINK   UNUSED UPLINK
Management        PG-A         Explicit Failover   dvuplink1             dvuplink2        3, 4, 5, 6, 7, 8
vMotion           PG-B1        None                dvuplink3             dvuplink4        1, 2, 5, 6, 7, 8
vMotion           PG-B2        None                dvuplink4             dvuplink3        1, 2, 5, 6, 7, 8
FT                PG-C         Explicit Failover   dvuplink2             dvuplink1        3, 4, 5, 6, 7, 8
iSCSI             PG-D1        None                dvuplink5             None             1, 2, 3, 4, 6, 7, 8
iSCSI             PG-D2        None                dvuplink6             None             1, 2, 3, 4, 5, 7, 8
Virtual Machine   PG-E         LBT                 dvuplink7/dvuplink8   None             1, 2, 3, 4, 5, 6

Table 3. Static Design Configuration with iSCSI Multipathing and Multi-Network Adaptor vMotion
As shown in Table 3, there are two entries each for the vMotion and iSCSI traffic types. Also shown is a list of
the additional dvportgroup configurations required to support the multi-network adaptor vMotion and iSCSI
multipathing processes. For multi-network adaptor vMotion, dvportgroups PG-B1 and PG-B2 are
configured with dvuplink3 and dvuplink4, respectively, as active links. And for iSCSI multipathing, dvportgroups
PG-D1 and PG-D2 are connected to dvuplink5 and dvuplink6, respectively, as active links. Load balancing across
the multiple dvuplinks is performed by the multipathing logic in the iSCSI process and by the ESXi platform in
the vMotion process. Configuring the teaming policies for these dvportgroups is not required.
FT, management and virtual machine traffic-type dvportgroup configuration and physical switch configuration
for this design remain the same as those described in the Design Option 1 of the previous section.
This static design approach improves on the first design by using advanced capabilities such as iSCSI
multipathing and multi-network adaptor vMotion, but at the same time this option has the same challenges
related to underutilized resources and inflexibility in allocating additional resources on the fly to different
traffic types.
Design Option 2 – Dynamic Configuration with NIOC and LBT
After looking at the traditional design approach with static uplink configurations, let's take a look at the VMware-
recommended design option that takes advantage of the advanced VDS features such as NIOC and LBT.
In this design, the connectivity to the physical network infrastructure remains the same as that described in the
static design option. However, instead of allocating specific dvuplinks to individual traffic types, the ESXi
platform utilizes those dvuplinks dynamically. To illustrate this dynamic design, each virtual infrastructure traffic
type's bandwidth utilization is estimated. In a real deployment, customers should first monitor the virtual
infrastructure traffic over a period of time, to gauge the bandwidth utilization, and then come up with bandwidth
numbers for each traffic type. The following are some bandwidth numbers estimated by traffic type:
• Management traffic – less than 1GbE
• vMotion – 1GbE
• FT – 1GbE
• iSCSI – 1GbE
• Virtual machine – 2GbE
Based on this bandwidth information, administrators can provision appropriate I/O resources to each traffic type
by using the NIOC feature of VDS. Let's take a look at the VDS parameter configurations for this design, as well
as the NIOC setup. The dvuplink port group configuration remains the same, with eight dvuplinks created for the
eight 1GbE network adaptors. The dvportgroup configuration is described in the following section.
dvportgroup Configuration
In this design, all dvuplinks are active and there are no standby and unused uplinks, as shown in Table 4.
All dvuplinks are therefore available for use by the teaming algorithm. The following are the key parameter
configurations of dvportgroup PG-A:
• Teaming option: LBT is selected as the teaming algorithm. With LBT configuration, the management traffic
initially will be scheduled based on the virtual port ID hash. Depending on the hash output, management traffic
is sent out over one of the dvuplinks. Other traffic types in the virtual infrastructure can also be scheduled on
the same dvuplink initially. However, when the utilization of the dvuplink goes beyond the 75 percent threshold,
the LBT algorithm will be invoked and some of the traffic will be moved to other underutilized dvuplinks. It is
possible that management traffic will be moved to other dvuplinks when such an LBT event occurs.
• The failback option means going from using a standby link to using an active uplink after the active uplink
comes back into operation after a failure. This failback option works when there are active and standby
dvuplink configurations. In this design, there are no standby dvuplinks. So when an active uplink fails, the
traffic flowing on that dvuplink is moved to another working dvuplink. If the failed dvuplink comes back,
the LBT algorithm will schedule new traffic on that dvuplink. This option is left as the default.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each
dvportgroup.
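The LBT trigger described in the teaming bullet can be sketched as follows. This is a simplified single-shot model (the actual algorithm also averages utilization over a time window before moving traffic), and the utilization figures are hypothetical:

```python
def lbt_check(utilization, threshold=0.75):
    """Return a (congested, target) dvuplink pair when any uplink's
    utilization exceeds the threshold, or None when no move is needed.
    Traffic is shifted toward the least-utilized uplink."""
    congested = max(utilization, key=utilization.get)
    if utilization[congested] <= threshold:
        return None
    target = min(utilization, key=utilization.get)
    return congested, target

# dvuplink3 is past the 75 percent threshold, so some of its traffic
# is moved to the least-utilized uplink, dvuplink6.
move = lbt_check({"dvuplink3": 0.90, "dvuplink5": 0.40, "dvuplink6": 0.10})
no_move = lbt_check({"dvuplink3": 0.60, "dvuplink5": 0.40})
```

The key property the model captures is that no rebalancing happens while every uplink stays under the threshold, so a lightly loaded team behaves like a static assignment.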
There are several other parameters that are part of the dvportgroup configuration. Customers can choose to
configure these parameters based on their environment needs. For example, they can configure PVLAN to
provide isolation when there are limited VLANs available in the environment.
As you follow the dvportgroups' configuration in Table 4, you can see that each traffic type has all dvuplinks
active, and that these links are utilized through the LBT algorithm. Let's now look at the NIOC configuration
described in the last two columns of Table 4.
The NIOC configuration in this design helps provide the appropriate I/O resources to the different traffic types.
Based on the previously estimated bandwidth numbers per traffic type, the shares parameter is configured in
the NIOC shares column in Table 4. The shares values specify the relative importance of specific traffic types,
and NIOC ensures that during contention scenarios on the dvuplinks, each traffic type gets the allocated
bandwidth. For example, a shares configuration of 10 for vMotion, iSCSI and FT allocates equal bandwidth to
these traffic types. Virtual machines get the highest bandwidth with 20 shares and management gets lower
bandwidth with 5 shares.
To illustrate how share values translate to bandwidth numbers, let's take an example of a 1Gb capacity dvuplink
carrying all five traffic types. This is a worst-case scenario where all traffic types are mapped to one dvuplink.
This will never happen when customers enable the LBT feature, because LBT will balance the traffic based on
the utilization of uplinks. This example shows how much bandwidth each traffic type will be allowed on one
dvuplink during a contention or oversubscription scenario and when LBT is not enabled:
• Total shares: management (5) + vMotion (10) + FT (10) + iSCSI (10) + virtual machine (20) = 55
• Management 5 shares – 90.91Mbps
• vMotion 10 shares – 181.82Mbps
• FT 10 shares – 181.82Mbps
• iSCSI 10 shares – 181.82Mbps
• Virtual machine 20 shares – 363.64Mbps
To calculate the bandwidth numbers during contention, you should first calculate the percentage of bandwidth
for a traffic type by dividing its share value by the total available share number (55). In the second step, the total
bandwidth of the dvuplink (1Gb) is multiplied with the percentage of bandwidth number calculated in the first
step. For example, 5 shares allocated to management traffic translate to 90.91Mbps of bandwidth to the
management process on a fully utilized 1Gb network adaptor. In this example, custom share configuration is
discussed, but a customer can make use of predefined high (100), normal (50) and low (25) shares when
assigning them to different traffic types.
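The two-step calculation can be expressed directly. This sketch (not the actual ESXi scheduler) reproduces the worst-case numbers from the example above:

```python
def nioc_allocation(shares, capacity_mbps):
    """Divide one uplink's capacity among the traffic types contending
    on it, in proportion to their configured NIOC share values."""
    total = sum(shares.values())
    return {t: round(capacity_mbps * s / total, 2) for t, s in shares.items()}

shares = {"management": 5, "vmotion": 10, "ft": 10, "iscsi": 10, "vm": 20}
worst_case = nioc_allocation(shares, 1000)   # all five types on one 1GbE uplink

# With only FT (10 shares) and management (5 shares) contending on an
# uplink, FT gets double management's bandwidth.
ft_vs_mgmt = nioc_allocation({"ft": 10, "management": 5}, 1000)
```

Because shares are relative, the same function also shows that a traffic type absent from an uplink simply drops out of the denominator, freeing its portion for the types that remain.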
The vSphere platform takes these configured share values and applies them per uplink. The schedulers running
at each uplink are responsible for making sure that the bandwidth resources are allocated according to the
shares. In the case of an eight 1GbE network adaptor deployment, there are eight schedulers running. Depending
on the number of traffic types scheduled on a particular uplink, the scheduler will divide the bandwidth among
the traffic types, based on the share numbers. For example, if only FT (10 shares) and management (5 shares)
traffic are flowing through dvuplink 5, FT traffic will get double the bandwidth of management traffic, based on
the shares value. Also, when there is no management traffic flowing, all bandwidth can be utilized by the FT
process. This flexibility in allocating I/O resources is the key benefit of the NIOC feature.
The NIOC limits parameter of Table 4 is not configured in this design. The limits value specifies an absolute
maximum limit on egress traffic for a traffic type. Limits are specified in Mbps. This configuration provides a hard
limit on any traffic, even if I/O resources are available to use. Using the limits configuration is not recommended,
unless you really want to control the traffic, even though additional resources are available.
There is no change in physical switch configuration in this design approach, even with the choice of the new LBT
algorithm. The LBT teaming algorithm doesn't require any special configuration on physical switches. Refer to
the physical switch settings described in the Design Option 1.
TRAFFIC TYPE      PORT GROUP   TEAMING OPTION   ACTIVE UPLINK            STANDBY UPLINK   NIOC SHARES   NIOC LIMITS
Management        PG-A         LBT              1, 2, 3, 4, 5, 6, 7, 8   None             5             -
vMotion           PG-B         LBT              1, 2, 3, 4, 5, 6, 7, 8   None             10            -
FT                PG-C         LBT              1, 2, 3, 4, 5, 6, 7, 8   None             10            -
iSCSI             PG-D         LBT              1, 2, 3, 4, 5, 6, 7, 8   None             10            -
Virtual Machine   PG-E         LBT              1, 2, 3, 4, 5, 6, 7, 8   None             20            -

Table 4. Dynamic Design Configuration with NIOC and LBT
This design does not provide higher than 1Gb bandwidth to the vMotion and iSCSI traffic types, as is the case
with the static design using multi-network adaptor vMotion and iSCSI multipathing. The LBT algorithm cannot split
the infrastructure traffic across multiple dvuplink ports and utilize all the links. So even if vMotion dvportgroup
PG-B has all eight 1GbE network adaptors as active uplinks, vMotion traffic will be carried over only one of
the eight uplinks. The main advantage of this design is evident in the scenarios where the vMotion process is not
using the uplink bandwidth, and other traffic types are in need of the additional resources. In these situations,
NIOC makes sure that the unused bandwidth is allocated to the other traffic types that need it.
This dynamic design option is the recommended approach because it takes advantage of the advanced VDS
features and utilizes I/O resources efficiently. This option also provides active/active resiliency, where no uplinks
are in standby mode. In this design approach, customers allow the vSphere platform to make the optimal
decisions on scheduling traffic across multiple uplinks.
Some customers who have restrictions in the physical infrastructure in terms of bandwidth capacity across
different paths and limited availability of the layer 2 domain might not be able to take advantage of this dynamic
design option. When deploying this design option, it is important to consider all the different traffic paths that a
traffic type can take and to make sure that the physical switch infrastructure can support the specific
characteristics required for each traffic type. VMware recommends that vSphere and network administrators
work together to understand the impact of the vSphere platform's traffic scheduling feature over the physical
network infrastructure before deploying this design option.
Every customer environment is different, and the requirements for the traffic types are also different. Depending
on the need of the environment, a customer can modify these design options to fit their specific requirements.
For example, customers can choose to use a combination of static and dynamic design options when they need
higher bandwidth for iSCSI and vMotion activities. In this hybrid design, four uplinks can be statically allocated to
iSCSI and vMotion traffic types while the remaining four uplinks are used dynamically for the remaining traffic
types. Table 5 shows the traffic types and associated port group configurations for the hybrid design. As shown
in the table, management, FT and virtual machine traffic will be distributed on dvuplink1 to dvuplink4 through
the vSphere platform's traffic scheduling features (LBT and NIOC). The remaining four dvuplinks are statically
assigned to vMotion and iSCSI traffic types.
TRAFFIC TYPE      PORT GROUP   TEAMING OPTION   ACTIVE UPLINK   STANDBY UPLINK   NIOC SHARES   NIOC LIMITS
Management        PG-A         LBT              1, 2, 3, 4      None             5             -
vMotion           PG-B1        None             5               6                -             -
vMotion           PG-B2        None             6               5                -             -
FT                PG-C         LBT              1, 2, 3, 4      None             10            -
iSCSI             PG-D1        None             7               None             -             -
iSCSI             PG-D2        None             8               None             -             -
Virtual Machine   PG-E         LBT              1, 2, 3, 4      None             20            -

Table 5. Hybrid Design Configuration
Rack Server with Two 10GbE Network Adaptors
The two 10GbE network adaptors deployment model is becoming very common because of the benefits they
provide through I/O consolidation. The key benefits include better utilization of I/O resources, simplified
management and reduced CAPEX and OPEX. Although this deployment provides these benefits, there are some
challenges when it comes to the traffic management aspects, especially in highly consolidated virtualized
environments, where more traffic types are carried over fewer 10GbE network adaptors. It becomes critical to
prioritize the traffic types that are important and provide the required SLA guarantees. The NIOC feature available
on the VDS helps in this traffic management activity. In the following sections, you will see how to utilize this
feature in the different designs.
As shown in Figure 5, rack servers with two 10GbE network adaptors are connected to the two access layer
switches, to avoid any single point of failure. Similar to the rack server with eight 1GbE network adaptors, the
different VDS and physical switch parameter configurations are taken into account with this design. On the
physical switch side, the new 10GbE switches might have support for FCoE that enables convergence of SAN
and LAN traffic. This document covers only the standard 10GbE deployments that support storage traffic
(iSCSI, NFS) and not FCoE.
In this section, two design options are described; one is a traditional approach, and the other one is a VMware-
recommended approach.
[Figure: two ESXi clusters of rack servers on a vSphere Distributed Switch, each host's two 10GbE network adaptors connected to two access layer switches that connect to the aggregation layer; legend shows port groups PG-A and PG-B.]
Figure 5. Rack Server with Two 10GbE Network Adaptors
Design Option 1 – Static Configuration
The static configuration approach for rack server deployment with 10GbE network adaptors is similar to the one
described in the Design Option 1 for rack server deployment with eight 1GbE adaptors. There are a few differences
in the configuration, where the numbers of dvuplinks are changed from eight to two, and the dvportgroup
parameters are different. Let's take a look at the configuration details on the VDS front.
dvuplink Configuration
To support the maximum of two Ethernet network adaptors per host, the dvuplink port group is configured with
two dvuplinks (dvuplink1, dvuplink2). On the hosts, dvuplink1 is associated with vmnic0 and dvuplink2 is
associated with vmnic1.
dvportgroup Configuration
As described in Table 6, there are five different dvportgroups that are configured for the five different traffic
types. For example, dvportgroup PG-A is created for the management traffic type. The following are the other
key configurations of dvportgroup PG-A:
• Teaming option: An explicit failover order provides a deterministic way of directing traffic to a particular uplink.
By selecting dvuplink1 as an active uplink and dvuplink2 as a standby uplink, management traffic will be carried
over dvuplink1 unless there is a failure with it. Configuring the failback option to "No" is also recommended, to
avoid the flapping of traffic between two network adaptors. The failback option determines how a physical
adaptor is returned to active duty after recovering from a failure. If failback is set to "No," a failed adaptor is left
inactive, even after recovery, until another currently active adaptor fails, requiring its replacement.
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for
each dvportgroup.
There are various other parameters that are part of the dvportgroup configuration. Customers can choose to
configure these parameters based on their environment needs.
Table 6 provides the configuration details for all the dvportgroups. According to the configuration, dvuplink1
carries management, iSCSI and virtual machine traffic; dvuplink2 handles vMotion, FT and virtual machine
traffic. As you can see, the virtual machine traffic type makes use of two uplinks, and these uplinks are utilized
through the LBT algorithm.
With this deterministic teaming policy, customers can decide to map different traffic types to the available uplink
ports, depending on environment needs. For example, if iSCSI traffic needs higher bandwidth and other traffic
types have relatively low bandwidth requirements, customers can decide to keep only iSCSI traffic on dvuplink1
and move all other traffic to dvuplink2. When deciding on these traffic paths, customers should understand the
physical network connectivity and the paths' bandwidth capacities.
Physical Switch Configuration
The external physical switch, which the rack server's network adaptors are connected to, has trunk configuration
with all the appropriate VLANs enabled. As described in the physical network switch parameters section, the
following switch configurations are performed based on the VDS setup described in Table 6:
• Enable STP on the trunk ports facing ESXi hosts, along with the PortFast mode and BPDU guard features.
• The teaming configuration on VDS is static, and therefore no link aggregation is configured on the physical
switches.
• Because of the mesh topology deployment shown in Figure 5, the link-state tracking feature is not required on
the physical switches.
TRAFFIC TYPE      PORT GROUP   TEAMING OPTION      ACTIVE UPLINK         STANDBY UPLINK   UNUSED UPLINK
Management        PG-A         Explicit Failover   dvuplink1             dvuplink2        None
vMotion           PG-B         Explicit Failover   dvuplink2             dvuplink1        None
FT                PG-C         Explicit Failover   dvuplink2             dvuplink1        None
iSCSI             PG-D         Explicit Failover   dvuplink1             dvuplink2        None
Virtual Machine   PG-E         LBT                 dvuplink1/dvuplink2   None             None

Table 6. Static Design Configuration
This static design option provides flexibility in the traffic path configuration, but it cannot protect against one
traffic type's dominating others. For example, there is a possibility that a network-intensive vMotion process
might take away most of the network bandwidth and impact virtual machine traffic. Bidirectional traffic-shaping
parameters at port group and port levels can provide some help in managing different traffic rates. However,
using this approach for traffic management requires customers to limit the traffic on the respective dvportgroups.
Limiting traffic to a certain level through this method puts a hard limit on the traffic types, even when the
bandwidth is available to utilize. This underutilization of I/O resources because of hard limits is overcome
through the NIOC feature, which provides flexible traffic management based on the shares parameters.
The Design Option 2 described in the following section is based on the NIOC feature.
Design Option 2 – Dynamic Configuration with NIOC and LBT
This dynamic design option is the VMware-recommended approach that takes advantage of the NIOC and LBT
features of the VDS.
Connectivity to the physical network infrastructure remains the same as that described in the Design Option 1.
However, instead of allocating specific dvuplinks to individual traffic types, the ESXi platform utilizes those
dvuplinks dynamically. To illustrate this dynamic design, each virtual infrastructure traffic type's bandwidth
utilization is estimated. In a real deployment, customers should first monitor the virtual infrastructure traffic
over a period of time to gauge the bandwidth utilization, and then come up with bandwidth numbers.
The following are some bandwidth numbers estimated by traffic type:
• Management traffic – less than 1GbE
• vMotion – 2GbE
• FT – 1GbE
• iSCSI – 2GbE
• Virtual machine – 2GbE
These bandwidth estimates are different from the ones considered with the rack server deployment with eight 1GbE
network adaptors. Let's take a look at the VDS parameter configurations for this design. The dvuplink port group
configuration remains the same, with two dvuplinks created for the two 10GbE network adaptors. The
dvportgroup configuration is as follows.
dvportgroup Configuration
In this design, all dvuplinks are active and there are no standby and unused uplinks, as shown in Table 7. All
dvuplinks are therefore available for use by the teaming algorithm. The following are the key configurations of
dvportgroup PG-A:
• Teaming option: LBT is selected as the teaming algorithm. With LBT configuration, management traffic initially
will be scheduled based on the virtual port ID hash. Based on the hash output, management traffic will be sent
out over one of the dvuplinks. Other traffic types in the virtual infrastructure can also be scheduled on the
same dvuplink with LBT configuration. Subsequently, if the utilization of the uplink goes beyond the 75 percent
threshold, the LBT algorithm will be invoked and some of the traffic will be moved to other underutilized
dvuplinks. It is possible that management traffic will get moved to other dvuplinks when such an event occurs.
• There are no standby dvuplinks in this configuration, so the failback setting is not applicable for this design
approach. The default setting for this failback option is "Yes."
• VMware recommends isolating all traffic types from each other by defining a separate VLAN for each
dvportgroup.
There are several other parameters that are part of the dvportgroup configuration. Customers can choose to
configure these parameters based on their environment needs.
As you follow the dvportgroups' configuration in Table 7, you can see that each traffic type has all the dvuplinks
as active, and these uplinks are utilized through the LBT algorithm. Let's take a look at the NIOC configuration.
The NIOC configuration in this design not only helps provide the appropriate I/O resources to the different traffic
types but also provides SLA guarantees by preventing one traffic type from dominating others.
Based on the bandwidth assumptions made for different traffic types, the shares parameters are configured in
the NIOC shares column in Table 7. To illustrate how share values translate to bandwidth numbers in this
deployment, let's take an example of a 10Gb capacity dvuplink carrying all five traffic types. This is a worst-case
scenario in which all traffic types are mapped to one dvuplink. This will never happen when customers enable
the LBT feature, because LBT will move the traffic type based on the uplink utilization.
The following example shows how much bandwidth each traffic type will be allowed on one dvuplink during a
contention or oversubscription scenario and when LBT is not enabled:
• Total shares: management (5) + vMotion (20) + FT (10) + iSCSI (20) + virtual machine (20) = 75
• Management 5 shares – 666.66Mbps
• vMotion 20 shares – 2.66Gbps
• FT 10 shares – 1.33Gbps
• iSCSI 20 shares – 2.66Gbps
• Virtual machine 20 shares – 2.66Gbps
For each traffic type, first the percentage of bandwidth is calculated by dividing the share value by the total
available share number (75), and then the total bandwidth of the dvuplink (10Gb) is used to calculate the
bandwidth share for the traffic type. For example, 20 shares allocated to vMotion traffic translate to 2.66Gbps
of bandwidth to the vMotion process on a fully utilized 10GbE network adaptor.
In this 10GbE deployment, customers can provide bigger pipes to individual traffic types without the use of
trunking or multipathing technologies. This was not the case with an eight 1GbE deployment.
There is no change in physical switch configuration in this design approach, so refer to the physical switch
settings described in the Design Option 1 in the previous section.
TRAFFIC TYPE      PORT GROUP   TEAMING OPTION   ACTIVE UPLINK   STANDBY UPLINK   NIOC SHARES   NIOC LIMITS
Management        PG-A         LBT              dvuplink1, 2    None             5             -
vMotion           PG-B         LBT              dvuplink1, 2    None             20            -
FT                PG-C         LBT              dvuplink1, 2    None             10            -
iSCSI             PG-D         LBT              dvuplink1, 2    None             20            -
Virtual Machine   PG-E         LBT              dvuplink1, 2    None             20            -

Table 7. Dynamic Design Configuration
This design option utilizes the advanced VDS features and provides customers with a dynamic and flexible
design approach. In this design, I/O resources are utilized effectively and SLAs are met, based on the
shares allocation.
Blade Server in Example Deployment
Blade servers are server platforms that provide higher server consolidation per rack unit as well as lower power
and cooling costs. Blade chassis that host the blade servers have proprietary architectures, and each vendor has
its own way of managing resources in the blade chassis. It is difficult in this document to look at all of the various
blade chassis available on the market and to discuss their deployments. In this section, we will focus on some
generic parameters that customers should consider when deploying VDS in a blade chassis environment.
From a networking point of view, all blade chassis provide the following two options:
• Integrated switches: With this option, the blade chassis enables built-in switches to control traffic flow between
the blade servers within the chassis and the external network.
• Pass-through technology: This is an alternative method of network connectivity that enables the individual
blade servers to communicate directly with the external network.
In this document, the integrated switch option is described, where the blade chassis has a built-in Ethernet
switch. This Ethernet switch acts as an access layer switch, as shown in Figure 6.
This section discusses a deployment in which the ESXi host is running on a blade server. The following two types
of blade server configuration will be described in the next section:
• Blade server with two 10GbE network adaptors
• Blade server with hardware-assisted multiple logical network adaptors
For each of these two configurations, various VDS design approaches will be discussed.
Blade Server with Two 10GbE Network Adaptors
This deployment is quite similar to that of a rack server with two 10GbE network adaptors, in which each ESXi
host is provided with two 10GbE network adaptors. As shown in Figure 6, an ESXi host running on a blade server
in the blade chassis is also provided with two 10GbE network adaptors.
[Figure 6. Blade Server with Two 10GbE Network Adaptors – two clusters of ESXi hosts on blade servers, connected through a vSphere Distributed Switch (dvportgroups PG-A and PG-B) to the access and aggregation layers]
In this section, two design options are described. One is a traditional static approach and the other one is a
VMware-recommended dynamic configuration with NIOC and LBT features enabled. These two approaches are
exactly the same as the deployment described in the "Rack Server with Two 10GbE Network Adaptors" section.
Only blade chassis-specific design decisions will be discussed as part of this section. For all other VDS and
switch-related configurations, refer to the "Rack Server with Two 10GbE Network Adaptors" section of this
document.
Design Option 1 – Static Configuration
The configuration of this design approach is exactly the same as that described in the Design Option 1 section
under "Rack Server with Two 10GbE Network Adaptors." Refer to the dvportgroup configuration details in that
section. Let's take a look at the blade server-specific parameters that require attention during the design.
Network and hardware reliability considerations should be incorporated during the blade server design as well.
In these blade server designs, customers must focus on the following two areas:
• High availability of blade switches in the blade chassis
• Connectivity of blade server network adaptors to internal blade switches
High availability of blade switches can be achieved by having two Ethernet switching modules in the blade
chassis. And the connectivity of the two network adaptors on the blade server should be such that one network
adaptor is connected to the first Ethernet switch module and the other network adaptor is hooked to the
second switch module in the blade chassis.
Another aspect that requires attention in the blade server deployment is the network bandwidth availability
across the midplane of the blade chassis and between the blade switches and aggregation layer. If there is an
oversubscription scenario in the deployment, customers must think about utilizing the traffic shaping and
prioritization (tagging) features available in the vSphere platform. The prioritization feature enables
customers to tag the important traffic coming out of the vSphere platform. These high-priority tagged packets
are then treated according to priority by the external switch infrastructure. During congestion scenarios, the
switch will drop lower-priority packets first and avoid dropping the important, high-priority packets.
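The drop behavior on a congested external switch port can be sketched as follows (a deliberately simplified model, not vendor code; real switches drop per priority queue rather than sorting frames globally, and the priority values shown are illustrative 802.1p tags):

```python
def forward_under_congestion(frames, capacity):
    """Keep the frames with the highest 802.1p priority when a congested
    port can forward only `capacity` of them; drop the rest."""
    ranked = sorted(frames, key=lambda f: f["prio"], reverse=True)
    return ranked[:capacity], ranked[capacity:]   # (kept, dropped)

frames = [
    {"id": 1, "prio": 5},   # illustrative: tagged high priority
    {"id": 2, "prio": 0},   # illustrative: best-effort traffic
    {"id": 3, "prio": 7},   # illustrative: tagged highest priority
]
kept, dropped = forward_under_congestion(frames, capacity=2)
```

The best-effort frame is dropped first while both tagged frames are forwarded, which is the behavior the prioritization feature relies on.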
This static design option provides customers with the flexibility to choose different network adaptors for
different traffic types. However, when doing the traffic allocation on the limited two 10GbE network adaptors,
administrators ultimately will schedule multiple traffic types on a single adaptor. As multiple traffic types flow
through one adaptor, the chance of one traffic type dominating others increases. To avoid the performance
impact of noisy neighbors (a dominating traffic type), customers must utilize the traffic management tools
provided in the vSphere platform. One of these traffic management features is NIOC, and that feature is utilized in
Design Option 2, which is described in the following section.
Design Option 2 – Dynamic Configuration with NIOC and LBT
This dynamic configuration approach is exactly the same as that described in the Design Option 2 section
under "Rack Server with Two 10GbE Network Adaptors." Refer to the dvportgroup configuration details and
NIOC settings in that section. The physical switch-related configuration in the blade chassis deployment is the same
as that described in the rack server deployment. For the blade center-specific recommendations on reliability and
traffic management, refer to the previous section.
VMware recommends this design option, which utilizes the advanced VDS features and provides customers with
a dynamic and flexible design approach. With this design, I/O resources are utilized effectively and SLAs are met
based on the shares allocation.
Blade Server with Hardware-Assisted Logical Network Adaptors
(HP Flex-10 or Cisco UCS-like Deployment)
Some of the new blade chassis support traffic management capabilities that enable customers to carve up I/O
resources. This is achieved by providing logical network adaptors to the ESXi hosts. Instead of two 10GbE
network adaptors, the ESXi host now sees multiple physical network adaptors that operate at different
configurable speeds. As shown in Figure 7, each ESXi host is provided with eight Ethernet network adaptors
that are carved out of two 10GbE network adaptors.
[Figure 7. Multiple Logical Network Adaptors – two clusters of ESXi hosts, each host presenting logical network adaptors carved from two 10GbE adaptors, connected through a vSphere Distributed Switch (dvportgroups PG-A and PG-B) to the access and aggregation layers]
This deployment is quite similar to that of a rack server with eight 1GbE network adaptors. However, instead
of fixed 1GbE network adaptors, the capacity of each network adaptor is configured at the blade chassis level. In the
blade chassis, customers can carve out network adaptors of different capacities based on the need of each traffic
type. For example, if iSCSI traffic needs a given amount of bandwidth, a logical network adaptor with that amount of
I/O resources can be created on the blade chassis and provided to the blade server.
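The carving exercise amounts to fitting the requested logical-adaptor capacities onto the physical ports. A first-fit sketch follows (illustrative only; the capacities and adaptor names are hypothetical, and actual chassis impose their own partitioning rules, such as fixed numbers of logical adaptors per port):

```python
def assign_logical_adaptors(nic_gbps, nic_count, requests):
    """First-fit placement of logical adaptors (largest request first) onto
    the physical ports. Returns {adaptor_name: physical_port_index}."""
    free = [nic_gbps] * nic_count
    placement = {}
    for name, gb in sorted(requests.items(), key=lambda kv: -kv[1]):
        for i, capacity in enumerate(free):
            if gb <= capacity:
                free[i] -= gb
                placement[name] = i
                break
        else:
            raise ValueError(f"no physical port has {gb}Gb free for {name}")
    return placement

# Hypothetical capacity requests, in Gbps, for two 10GbE physical ports.
requests = {"vm": 6, "iscsi": 4, "vmotion": 4, "ft": 2, "mgmt": 1}
placement = assign_logical_adaptors(10, 2, requests)
```

Here the virtual machine and iSCSI adaptors fill the first 10GbE port, and the remaining traffic types land on the second.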
As for the configuration of the VDS and blade chassis switch infrastructure, the configuration described in
Design Option 1 under "Rack Server with Eight 1GbE Network Adaptors" is more relevant for this deployment.
The static configuration option described in that design can be applied as is in this blade server environment.
Refer to Table 2 for the dvportgroup configuration details, and to the switch configurations described in that
section for physical switch configuration details.
The question now is whether NIOC capability adds any value in this specific blade server deployment. NIOC is
a traffic management feature that helps in scenarios where multiple traffic types flow through one uplink or
network adaptor. In this particular deployment, only one traffic type is assigned to a specific Ethernet network
adaptor, so the NIOC feature will not add any value. However, if multiple traffic types are scheduled over one network
adaptor, customers can make use of NIOC to assign appropriate shares to the different traffic types. This NIOC
configuration will ensure that bandwidth resources are allocated to the traffic types and that SLAs are met.
As an example, let's consider a scenario in which vMotion and iSCSI traffic is carried over one logical uplink.
To protect the iSCSI traffic from the network-intensive vMotion traffic, administrators can configure NIOC and
allocate shares to each traffic type. If the two traffic types are equally important, administrators can configure
shares with equal values for each. With this configuration, when there is a contention scenario, NIOC will make
sure that the iSCSI process will get half of the uplink bandwidth and avoid having any impact on the
vMotion process.
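The equal-shares split works out as follows (a sketch under simplifying assumptions: both traffic types fully active on one saturated logical uplink, here taken to be 10GbE; the share value of 50 is illustrative, since only the ratio matters):

```python
def split_by_shares(share_a, share_b, uplink_gbps):
    """Proportional split of a saturated uplink between two traffic types."""
    total = share_a + share_b
    return uplink_gbps * share_a / total, uplink_gbps * share_b / total

# Equal shares for iSCSI and vMotion on an assumed 10GbE logical uplink.
iscsi_gbps, vmotion_gbps = split_by_shares(50, 50, 10)
```

Each traffic type is guaranteed half of the uplink under contention; when one is idle, the other can use the full bandwidth.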
VMware recommends that the network and server administrators work closely together when deploying the
traffic management features of the VDS and blade chassis. To achieve the best end-to-end quality of service
(QoS) result, a considerable amount of coordination is required during the configuration of the traffic
management features.
Operational Best Practices
After customers successfully design the virtual network infrastructure, the next challenges are how to deploy
the design and how to keep the network operational. VMware provides various tools, APIs and procedures to
help customers effectively deploy and manage their network infrastructure. The following are some key tools
available in the vSphere platform:
• VMware vSphere Command-Line Interface (vSphere CLI)
• VMware vSphere API
• Virtual network monitoring and troubleshooting:
 – NetFlow
 – Port mirroring
In the following section, we will briefly discuss how vSphere and network administrators can utilize these tools to
manage their virtual network. Refer to the vSphere documentation for more details on the tools.
VMware vSphere Command-Line Interface
vSphere administrators have several ways to access vSphere components through vSphere interface options,
including VMware vSphere Client, vSphere Web Client and vSphere Command-Line Interface. The vSphere
CLI command set enables administrators to perform configuration tasks by using a vSphere vCLI package
installed on supported platforms or by using VMware vSphere Management Assistant (vMA). Refer to the
Getting Started with vSphere CLI document for more details on the commands:
http://www.vmware.com/support/developer/vcli.
The entire networking configuration can be performed through vSphere vCLI, helping administrators automate
the deployment process.
VMware vSphere API
The networking setup in the virtualized datacenter involves configuration of virtual and physical switches.
VMware has provided APIs that enable network switch vendors to get information about the virtual
infrastructure, which helps them to automate the configuration of the physical switches and the overall process.
For example, vCenter can trigger an event after the vMotion process of a virtual machine is performed. After
receiving this event trigger and related information, the network vendors can reconfigure the physical switch
port policies such that when the virtual machine moves to another host, the VLAN access control list (ACL)
configurations are migrated along with the virtual machine. Multiple networking vendors have provided this
automation between physical and virtual infrastructure configurations through integration with vSphere APIs.
Customers should check with their networking vendors to learn whether such an automation tool exists that will
bridge the gap between physical and virtual networking and simplify the operational challenges.
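The event-driven pattern these APIs enable can be sketched as follows (a toy simulation; the class and handler names are hypothetical, standing in for a vendor's integration code rather than any actual vSphere or vendor API):

```python
class PhysicalSwitch:
    """Toy model of a vendor switch that keeps per-port VLAN/ACL policies."""
    def __init__(self, name):
        self.name = name
        self.port_policies = {}

def on_vmotion_completed(src, src_port, dst, dst_port):
    """Hypothetical handler for a vCenter migration event: carry the
    virtual machine's port policy over to the destination switch port."""
    policy = src.port_policies.pop(src_port)
    dst.port_policies[dst_port] = policy

tor_a, tor_b = PhysicalSwitch("tor-a"), PhysicalSwitch("tor-b")
tor_a.port_policies[5] = {"vlan": 100, "acl": "allow-web"}
on_vmotion_completed(tor_a, 5, tor_b, 12)   # VM moved to a host behind tor-b
```

The value of the integration is that the VLAN and ACL configuration follows the virtual machine automatically instead of requiring a manual change on the destination switch.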
Virtual Network Monitoring and Troubleshooting
Monitoring and troubleshooting network traffic in a virtual environment require similar tools to those available in
the physical switch environment. With the release of vSphere 5, VMware gives network administrators the ability
to monitor and troubleshoot the virtual infrastructure through features such as NetFlow and port mirroring.
NetFlow capability on a distributed switch, along with a NetFlow collector tool, helps monitor application flows
and measures flow performance over time. It also helps in capacity planning and in ensuring that I/O resources are
utilized properly by different applications, based on their needs.
Port mirroring capability on a distributed switch is a valuable tool that helps network administrators debug
network issues in a virtual infrastructure. Granular control over monitoring ingress, egress or all traffic of a port
helps administrators fine-tune what traffic is sent for analysis.
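What a NetFlow collector does with the exported records can be sketched in miniature (an illustrative aggregation only; real NetFlow keys on the full 5-tuple plus interface, and handles flow timeouts and expiry):

```python
from collections import defaultdict

def aggregate_flows(packets):
    """Group packets into NetFlow-style flow records and sum the counters."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        record = flows[(p["src"], p["dst"], p["dport"])]
        record["packets"] += 1
        record["bytes"] += p["len"]
    return dict(flows)

# Hypothetical mirrored packets: two HTTP packets and one iSCSI packet.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "dport": 80, "len": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "dport": 80, "len": 500},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "dport": 3260, "len": 9000},
]
flows = aggregate_flows(packets)
```

Per-flow packet and byte counts like these are what make trend analysis and capacity planning possible from NetFlow data.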
vCenter Server on a Virtual Machine
As mentioned earlier, vCenter Server is only used to provision and manage VDS configurations. Customers can
choose to deploy it on a virtual machine or a physical host, depending on their management resource design
requirements. In case of vCenter Server failure scenarios, the VDS will continue to provide network connectivity,
but no VDS configuration changes can be performed.
By deploying vCenter Server on a virtual machine, customers can take advantage of vSphere platform features
such as vSphere High Availability (HA) and VMware Fault Tolerance (FT), which provide higher resiliency
to the management plane. In such deployments, customers must pay more attention to the network configurations.
This is because if the networking for the virtual machine hosting vCenter Server is misconfigured, the network
connectivity of vCenter Server is lost. This misconfiguration must be fixed; however, customers need vCenter
Server to fix the network configuration, because only vCenter Server can configure a VDS. As a workaround to
this situation, customers must connect to the host directly where the vCenter Server virtual machine is running
through vSphere Client. Then they must reconnect the virtual machine hosting vCenter Server to a VSS that is
also connected to the management network of the hosts. After the virtual machine running vCenter Server is
reconnected to the network, it can manage and configure VDS.
Refer to the community article "Virtual Machine Hosting a vCenter Server – Best Practices" for guidance regarding
the deployment of vCenter on a virtual machine:
http://communities.vmware.com/servlet/JiveServlet/previewBody/…/VMHostingVC_BestPractices.html
Conclusion
A VMware vSphere distributed switch provides customers with the right measure of features, capabilities and
operational simplicity for deploying a virtual network infrastructure. As customers move on to build private or
public clouds, VDS provides the scalability numbers for such deployments. Advanced capabilities such as NIOC
and LBT are key for achieving better utilization of I/O resources and for providing better SLAs for virtualized
business-critical applications and multitenant deployments. Support for standard networking visibility and
monitoring features such as port mirroring and NetFlow helps administrators manage and troubleshoot a virtual
infrastructure through familiar tools. VDS also is an extensible platform that enables integration with other
networking vendor products through open vSphere APIs.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2012 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed
at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies. Item No: VMW-vSPHR-DIST-SWTCH-PRCTICES-USLET-101