VMware Virtual Network Design Guide
Table of Contents
Intended Audience
Overview
Components of the VMware Network Virtualization Solution
  vSphere Distributed Switch
  Logical Network (VXLAN)
  vCloud Networking and Security Edge
  vCloud Networking and Security Manager
  vCloud Director
VXLAN Technology Overview
  Standardization Effort
  Encapsulation
VXLAN Packet Flow
  Intra-VXLAN Packet Flow
  Inter-VXLAN Packet Flow
Network Virtualization Design Considerations
  Physical Network
    Network Topologies with L2 Configuration in the Access Layer
    Network Topologies with L3 Configuration in the Access Layer
  Logical Network
    Scenario 1 - Greenfield Deployment: Logical Network with a Single Physical L2 Domain
    Scenario 2 - Logical Network: Multiple Physical L2 Domains
    Scenario 3 - Logical Network: Multiple Physical L2 Domains with vMotion
    Scenario 4 - Logical Network: Stretched Clusters Across Two Datacenters
  Managing IP Addresses in Logical Networks
  Scaling Network Virtualization
Consumption Models
  In vCloud Director
  In vCloud Networking and Security Manager
  Using API
Troubleshooting and Monitoring
  Network Health Check
  VXLAN Connectivity Check: Unicast and Broadcast Tests
  Monitoring Logical Flows - IPFIX
  Port Mirroring
Conclusion
Intended Audience
This document is targeted toward virtualization and network architects interested in deploying VMware
network virtualization solutions.
Overview
The IT industry has gained significant efficiency and flexibility as a direct result of virtualization. Organizations
are moving toward a virtual datacenter (VDC) model, and flexibility, speed, scale and automation are central to
their success. Although compute and memory resources are pooled and automated, networks and network
services, such as security, have not kept pace. Traditional network and security operations not only reduce
efficiency but also limit the ability of businesses to rapidly deploy, scale and protect applications. VMware
vCloud Networking and Security offers a network virtualization solution to overcome these challenges.
Figure 1. Compute and Network Virtualization Analogy: the server hypervisor decouples virtual machines running application workloads from the underlying x86 environment, while network virtualization decouples virtual networks from the physical network.
Figure 1 draws an analogy between compute and network virtualization. Just as VMware vSphere abstracts
compute capacity from the server hardware to create virtual pools of resources, network virtualization abstracts
the network into a generalized pool of network capacity. The unified pool of network capacity can then be
optimally segmented into logical networks directly attached to specific applications. Customers can create
logical networks that span physical boundaries, optimizing compute resource utilization across clusters and
pods. Unlike legacy architectures, logical networks can be scaled without reconfiguring the underlying physical
hardware. Customers can also integrate network servicessuch as firewalls, VPNs and load balancersand
deliver them exactly where they are needed. Single pane of glass management for all these services further
reduces the cost and complexity of datacenter operations.
The VMware network virtualization solution addresses the following key needs in today's datacenter:
• Increasing compute utilization by pooling compute clusters
• Enabling noncontiguous cluster expansion
• Leveraging capacity across multiple racks in the datacenter
• Overcoming IP-addressing challenges when moving workloads
• Avoiding VLAN sprawl in large environments
• Enabling multitenancy at scale without encountering VLAN scale limitations
By adopting network virtualization, customers can effectively address these issues as well as realize the
following business benefits:
• Drive faster provisioning of network and services, enabling business agility
• Improve infrastructure utilization, leading to significant CapEx savings
• Increase compute utilization by 30 percent by efficiently pooling compute resources
• Increase network utilization by 40 percent due to compute pooling and improved traffic management
• Decouple logical networks from physical networks, providing complete flexibility
• Isolate and segment network traffic at scale
• Provide multitenancy without increasing the administrative burden
• Automate repeatable network and service provisioning workflows, translating to 30 percent or more in OpEx savings on network operations alone
Figure 2. Components of the VMware Network Virtualization Solution: vCloud Director (VCD), VMware Edge gateway, vShield Manager/vCenter, and logical networks (VXLAN) carrying virtual machine traffic over the physical IP network.
vCloud Director
The vCloud Director virtual datacenter container is a highly automatable abstraction of the pooled virtual
infrastructure. Network virtualization is fully integrated in vCloud Director workflows, enabling rapid
self-service provisioning within the context of the application workload. vCloud Director uses vCloud Networking
and Security Manager in the backend to provision network virtualization elements. vCloud Director is not part of vCloud Networking and Security; it is a separately purchased component. It is not mandatory for deploying a network virtualization solution, but it is highly recommended to achieve the complete operational flexibility and agility discussed previously. See the Consumption Models section for all available consumption choices for VMware network virtualization.
Encapsulation
VXLAN makes use of an encapsulation or tunneling method to carry the L2 overlay network traffic on top of
L3 networks. A special kernel module running on the vSphere hypervisor host along with a vmknic acts as the
virtual tunnel endpoint (VTEP). Each VTEP is assigned a unique IP address that is configured on the
vmknic virtual adapter associated with the VTEP.
The VTEP on the vSphere host handles all encapsulation and de-encapsulation of traffic for all virtual machines
running on that host. A VTEP encapsulates the MAC and IP packets from the virtual machines with a
VXLAN+UDP+IP header and sends the packet out as an IP unicast or multicast packet. The latter mode is used
for broadcast and unknown destination MAC frames originated by the virtual machines that must be sent across
the physical IP network.
Figure 3 shows the VXLAN frame format. The original packet between the virtual machines communicating on
the same VXLAN segment is encapsulated with an outer Ethernet header, an outer IP header, an outer UDP
header and a VXLAN header. The encapsulation is done by the source VTEP and is sent out to the destination
VTEP. At the destination VTEP, the packet is stripped of its outer header and is passed on to the destination
virtual machine if the segment ID in the packet is valid.
Figure 3. VXLAN Frame Format: Outer MAC DA | Outer MAC SA | Outer 802.1Q | Outer IP DA | Outer IP SA | Outer UDP | VXLAN Header (8 bytes) | Inner MAC DA | Inner MAC SA | Optional Inner 802.1Q | Original Ethernet Payload | CRC.
The destination MAC address in the outer Ethernet header can be the MAC address of the destination VTEP or that of an intermediate L3 router. The outer IP header carries the corresponding source and destination VTEP IPs. The association of the virtual machine's MAC to the VTEP's IP is discovered via source learning. More details on the forwarding table are provided in the VXLAN Packet Flow section. The outer UDP header contains source port, destination port and checksum information. The source port of the UDP header is a hash of the inner Ethernet frame's header. This is done to provide a level of entropy for ECMP/load balancing of the virtual machine-to-virtual machine traffic across the VXLAN overlay. The VXLAN header is an 8-byte field that has 8 bits to indicate whether the VXLAN Network Identifier (VNI) is valid, 24 bits for the VXLAN Segment ID/VXLAN VNI, and the remaining 32 bits reserved.
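To make the header layout concrete, the following minimal Python sketch builds the 8-byte VXLAN header described above (a flags byte with the VNI-valid bit set, a 24-bit VNI, and reserved bits zeroed) and prepends it to a placeholder inner Ethernet frame. It is an illustration of the frame format only, not VMware code; a real VTEP would additionally add the outer UDP, IP and Ethernet headers shown in Figure 3.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit marks the VNI as valid),
    24 reserved bits, 24-bit VNI, 8 reserved bits."""
    flags = 0x08                            # I flag: VNI is valid
    first_word = flags << 24                # flags + 24 reserved bits
    second_word = (vni & 0xFFFFFF) << 8     # 24-bit VNI + 8 reserved bits
    return struct.pack("!II", first_word, second_word)

# Illustrative only: encapsulate a placeholder inner frame for segment 5001.
inner_frame = b"\x00" * 60
vxlan_payload = vxlan_header(5001) + inner_frame
print(len(vxlan_payload))  # 68 bytes: 8-byte VXLAN header + 60-byte inner frame
```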
Figure 4. VXLAN packet flow between two vSphere hosts: two virtual machines (MAC1 and MAC2) on VXLAN 5001 communicate over the vSphere Distributed Switch; the source VTEP (10.20.10.10) encapsulates the original L2/IP payload in an outer L2/IP/UDP/VXLAN header and forwards it over the L2/L3 network infrastructure to the destination VTEP (10.20.10.11). Forwarding table entry: VM MAC MAC1, VTEP IP 10.20.10.10, Segment ID 5001.
The next part of this section describes packet flow in the following VXLAN deployments:
1) Intra-VXLAN packet flow; that is, two virtual machines on the same logical L2 network
2) Inter-VXLAN packet flow; that is, two virtual machines on two different logical L2 networks
Figure 5. Intra-VXLAN packet flow: two virtual machines (192.168.1.10 and 192.168.1.11) on VXLAN Blue (192.168.1.0/24) communicate within the same logical L2 network; a vCloud Networking and Security Edge gateway (internal interface 192.168.1.1, external interface 172.26.10.10) connects the segment to the external network (172.26.10.0/24) and the Internet.
In the case of virtual machine-to-virtual machine communication on the same logical L2 network, the following two traffic flow examples illustrate possibilities that depend on where the virtual machines are deployed:
1) Both virtual machines are on the same vSphere host.
2) The virtual machines are on two different vSphere hosts.
In the first case, traffic remains on one vSphere host; in the second case, the virtual machine packet is encapsulated in a new UDP header by the source VTEP on one vSphere host and sent through the external IP network infrastructure to the destination VTEP on another vSphere host. In this process, the external switches and routers do not detect anything about the virtual machines' IP addresses (192.168.1.10/192.168.1.11) and MAC addresses, because these are embedded in the new UDP header.
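As a rough illustration of the forwarding decision described above, the sketch below models a VTEP's MAC-to-VTEP table and chooses between local delivery, unicast encapsulation toward a remote VTEP, and multicast for broadcast or unknown destination MACs. The table entries, addresses and the segment-to-multicast-group mapping are hypothetical examples for illustration, not values or logic taken from the product.

```python
# Hypothetical MAC-to-VTEP table keyed by (segment ID, VM MAC), populated via source learning.
FORWARDING_TABLE = {
    (5001, "00:50:56:aa:bb:01"): "10.20.10.10",
    (5001, "00:50:56:aa:bb:02"): "10.20.10.11",
}

def multicast_group(segment_id):
    # Assumed mapping of a VXLAN segment to a multicast group on the physical network.
    return "239.1.{}.{}".format((segment_id >> 8) & 0xFF, segment_id & 0xFF)

def forward(segment_id, dst_mac, local_vtep_ip):
    """Return (delivery mode, next hop) for an inner Ethernet frame on a segment."""
    if dst_mac == "ff:ff:ff:ff:ff:ff":
        return "multicast", multicast_group(segment_id)   # broadcast frame
    remote_vtep = FORWARDING_TABLE.get((segment_id, dst_mac))
    if remote_vtep is None:
        return "multicast", multicast_group(segment_id)   # unknown destination MAC
    if remote_vtep == local_vtep_ip:
        return "local", local_vtep_ip                      # both VMs on the same host
    return "unicast", remote_vtep                          # encapsulate toward the remote VTEP

print(forward(5001, "00:50:56:aa:bb:02", "10.20.10.10"))   # ('unicast', '10.20.10.11')
```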
In the scenario where the virtual machine communicates with the external world, it first sends the traffic to the gateway IP address 192.168.1.1; the vCloud Networking and Security Edge gateway then sends the unencapsulated traffic over its external-facing interface to the Internet.
Figure 6. Inter-VXLAN packet flow: virtual machines 192.168.1.10 and 192.168.1.11 on VXLAN Blue (192.168.1.0/24) and virtual machine 192.168.2.10 on VXLAN Orange (192.168.2.0/24) are routed between segments by a vCloud Networking and Security Edge gateway (internal interfaces 192.168.1.1 and 192.168.2.1, external interface 172.26.10.10), which also connects to the external network (172.26.10.0/24) and the Internet.
Physical Network
The physical datacenter network varies across customer environments in terms of the network topology used in the datacenter. A hierarchical network design provides the required high availability and
scalability to the datacenter network. This section assumes that the reader has some background in various
network topologies utilizing traditional L3 and L2 network configurations. Readers are encouraged to look at the
design guides from the physical network vendor of choice. We will examine some common physical network
topologies and how to enable network virtualization in them.
Network Topologies with L2 Configuration in the Access Layer
In this topology, access layer switches connect to the aggregation layer over an L2 network. Aggregation switches are the VLAN termination points, as shown in Figure 7. Spanning Tree Protocol (STP) is traditionally used to avoid loops. Routing protocols run between the aggregation and core layers.
Figure 7. Network topology with L2 configuration in the access layer: a single subnet (VLAN 100) spans Rack 1 through Rack 10; the access layer uses L2 trunks and STP toward the aggregation layer, with IGMP snooping enabled on the L2 switches and the IGMP querier enabled at the aggregation layer; L3 links and routing connect the aggregation and core layers; the VDS and VXLAN fabric are deployed across the racks to provide logical L2 networks.
In such deployments with a single subnet (VLAN 100) configured on different racks, enabling network
virtualization based on VXLAN requires the following:
• Enable IGMP snooping on the L2 switches.
• Enable the IGMP querier feature on one of the L2/L3 switches in the aggregation layer.
• Increase the end-to-end MTU by a minimum of 50 bytes to accommodate a VXLAN header. The recommended size is 1,550 or jumbo frames.
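The 50-byte minimum corresponds to the encapsulation overhead shown in Figure 3: an outer Ethernet header (14 bytes, untagged), an outer IPv4 header (20 bytes), an outer UDP header (8 bytes) and the VXLAN header (8 bytes), that is, 14 + 20 + 8 + 8 = 50 bytes. Added to the standard 1,500-byte MTU, this yields the recommended 1,550 bytes; an outer 802.1Q tag, if used, adds another 4 bytes.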
To overcome slower convergence times and lower link utilization limitations of STP, most datacenter networks
today use technologies such as Cisco vPC/VSS (or MLAG, MCE, SMLT, and so on). From the VXLAN design
perspective, there is no change to the previously stated requirements.
When the physical topology has an access layer with multiple subnets configured (for example, VLAN 100 in
Rack 1 and VLAN 200 in Rack 10 in Figure 8), the aggregation layer must have Protocol-Independent Multicast
(PIM) enabled to ensure that multicast routes across multiple subnets are exchanged.
All the VXLAN requirements previously discussed apply to leaf and spine datacenter architectures as well.
Network Topologies with L3 Configuration in the Access Layer
In this topology, access layer switches connect to the aggregation layer over an L3 network. Access switches are
the VLAN termination points, as shown in Figure 8. Key advantages of this design are better utilization of all the
links using Equal-Cost Multipathing (ECMP) and elimination of STP.
From the VXLAN deployment perspective, the following requirements must be met:
• Enable PIM on access switches.
• Ensure that during the VXLAN preparation process, no VLAN is configured. This ensures that the VDS doesn't perform VLAN tagging, also called virtual switch tagging (VST) mode.
• Increase the end-to-end MTU by a minimum of 50 bytes to accommodate a VXLAN header. The recommended size is 1,550 or jumbo frames.
Figure 8. Network topology with L3 configuration in the access layer: access switches in Rack 1 through Rack 10 connect to the aggregation layer over routed L3 links with ECMP; PIM is enabled on the access switches; L3 links and routing connect the aggregation and core layers; the VDS and VXLAN fabric are deployed across the racks to provide logical L2 networks.
Logical Network
After the physical network has been prepared, logical networks are deployed with VXLAN, with no ongoing changes to the physical network. The logical network design differs based on the customer's needs and the type of compute, network and storage components they have in the datacenter. The following aspects of the virtual infrastructure should be taken into account before deploying logical networks:
• A cluster is a collection of vSphere hosts and associated virtual machines with shared resources. One cluster can have a maximum of 32 vSphere hosts.
• A VDS is the datacenter-wide virtual switch that can span up to 500 hosts in the datacenter. Best practice is to use one VDS across all clusters to enable simplified design and cluster-wide VMware vSphere vMotion migration.
• With VXLAN, a new traffic type is added to the vSphere host: VXLAN transport traffic. As a best practice, the new VXLAN traffic type should be isolated from other virtual infrastructure traffic types. This can be achieved by assigning a separate VLAN during the VXLAN preparation process.
• A VMware vSphere ESXi host's infrastructure traffic, including vMotion migration, VMware vSphere Fault Tolerance, management, and so on, is not encapsulated and is independent of the VXLAN-based logical network. These traffic types should be isolated from each other, and enough bandwidth should be allocated to them. As of this release, VMware does not support placing infrastructure traffic, such as vMotion migration, on VXLAN-based virtual networks. Only virtual machine traffic is supported on logical networks.
• To support vMotion migrations of workloads between clusters, all clusters should have access to all storage resources.
• The link aggregation method configured on the vSphere hosts also impacts how VXLAN transport traffic traverses the host NICs. The VDS VXLAN port group's teaming can be configured as failover, LACP active mode, LACP passive mode or static EtherChannel.
  a. When LACP or static EtherChannel is configured, the upstream physical switch must have an equivalent port channel or EtherChannel configured.
  b. Also, if LACP is used, the physical switch must have 5-tuple hash distribution enabled.
  c. Virtual port ID and load-based teaming are not supported with VXLAN.
Next, the design in the following three scenarios is discussed.
• Greenfield deployment - A datacenter built from scratch.
• Brownfield deployment - An existing operational datacenter with virtualization.
• Stretched cluster - Two datacenters separated by a short distance.
Scenario 1 - Greenfield Deployment: Logical Network with a Single Physical L2 Domain
In a greenfield deployment, the recommended design is to have a single VDS stretching across all the compute
clusters within the same vCenter Server. All hosts in the VDS are placed on the same L2 subnet (single VLAN on
all uplinks). In Figure 9, the VLAN 10 spanning the racks is switched, not routed, creating a single L2 subnet. This single subnet serves as the VXLAN transport subnet, and each host receives an IP address from this subnet, used in VXLAN encapsulation. Multicast and other requirements are met based on the physical network topology. Refer to the Network Topologies with L2 Configuration in the Access Layer section for details on multicast-related configuration.
Figure 9. Greenfield deployment with a single physical L2 domain: Cluster 1 (Rack 1) and Cluster 2 (Rack 10) share VLAN 10 as the VXLAN transport VLAN; logical networks VXLAN 5001 and VXLAN 5002 (vwire5001 and vwire5002 port groups) span both clusters over the VXLAN fabric, with a VTEP on each vSphere host.
Figure 10. Brownfield Deployment - Two VDS: Cluster 1 (Rack 1) uses VLAN 10 and Cluster 2 (Rack 10) uses VLAN 20 as VXLAN transport VLANs, connected through a router; logical networks VXLAN 5001 and VXLAN 5002 (vwire5001 and vwire5002 port groups) span both clusters over the VXLAN fabric.
Figure 11. Multiple physical L2 domains with vMotion: Cluster 1 (Rack 1) and Cluster 2 (Rack 10) sit in different L2 domains (VLAN 10 and VLAN 20) connected through a router; VXLAN is prepared with no VLAN tagging (no VST) on the hosts, and logical networks VXLAN 5001 and VXLAN 5002 span both clusters over the VXLAN fabric.
Because the storage network is parallel and independent of a logical network, it is assumed that both clusters
can reach the shared storage. Standard vMotion migration distance limitations and single vCenter requirements
still apply. Because the moved virtual machine is still in the same logical L2 network, no IP readdressing is
necessary, even though the physical hosts might be on different subnets.
Scenario 4 - Logical Network: Stretched Clusters Across Two Datacenters
Stretched clusters offer the ability to balance workloads between two datacenters. This nondisruptive workload mobility enables migration of services between geographically adjacent sites. A stretched cluster design helps pool resources in two datacenters and enables workload mobility. Virtual machine-to-virtual machine traffic stays within the same logical L2 network, enabling L2 adjacency across datacenters. The virtual machine-to-virtual machine traffic dynamics are the same as those previously cited. In this section, we will discuss the impact of this design on north-south traffic (a virtual machine communicating outside the logical L2 network), because that is the main difference as compared to previous scenarios.
Figure 12 shows two sites, site A and site B, with two hosts deployed in each site along with the storage and the
replication setup. Here all hosts are managed by a single vCenter Server and are part of the same VDS. In
general, for stretched cluster design, the following requirements must be met:
• The two datacenters must be managed by one vCenter Server because the VXLAN scope is limited to a single vCenter Server.
• vMotion support requires that the datacenters have a common stretched VDS (as in scenario 3). A multiple-VDS design, discussed in scenario 2, can also be used, but vMotion migration will not work.
Figure 12. Stretched cluster across two datacenters: hosts at site A and site B form a single stretched cluster on one vSphere Distributed Switch connected over a WAN; a virtual machine on VXLAN 5002 is moved between sites with vMotion; storage is replicated over FC/IP between Storage A (LUN R/W) and Storage B (LUN R/O), and each site has its own IP network and Internet connection.
In this design, the vCloud Networking and Security Edge gateway is pinned to one of the datacenters (site A in
this example). In the vCloud Networking and Security 5.1 release, each VXLAN segment can have only one
vCloud Networking and Security Edge gateway. This has the following implications:
• All north-south traffic from the second datacenter (site B) in the same VXLAN (5002) must transit the vCloud Networking and Security Edge gateway in the first datacenter (site A).
• Also, when a virtual machine is moved from site A to site B, all north-south traffic returns to site A before reaching the Internet or other physical networks in the datacenter.
• Storage must support a campus cluster configuration.
These implications raise obvious concerns regarding bandwidth consumption and latency, so an active-active multidatacenter design is not recommended. This design is mainly targeted toward the following scenarios:
• Datacenter migrations that require no IP address changes on the virtual machines. After the migration has been completed, the vCloud Networking and Security Edge gateway can be moved to the new datacenter, requiring a change in external IP addresses on the vCloud Networking and Security Edge only. If all virtual machines have public IP addresses and are not behind vCloud Networking and Security Edge gateway network address translation (NAT), more changes are needed.
• Deployments that require limited north-south traffic. Because virtual machine-to-virtual machine traffic does not require crossing the vCloud Networking and Security Edge gateway, the stretched cluster limitation does not apply.
These scenarios also benefit from elastic pooling of resources and initial workload placement flexibility. If virtual
machines are in different VXLANs, the limitations do not apply.
Figure 13. NAT and DHCP Configuration on vCloud Networking and Security Edge Gateway: three VXLAN segments (5000, 5001 and 5002) with subnets 192.168.1.0/24, 192.168.2.0/24 and 192.168.3.0/24 (gateways 192.168.1.1, 192.168.2.1 and 192.168.3.1) connect to internal interfaces of the Edge gateway, which provides standard NAT and DHCP services and connects through its external interface to the external network (172.26.10.0/24) and the Internet.
The following are some configuration details of the vCloud Networking and Security Edge gateway:
• Blue, green and purple virtual wires (VXLAN segments) are associated with separate port groups on a VDS. Internal interfaces of the vCloud Networking and Security Edge gateway connect to these port groups.
• The vCloud Networking and Security Edge gateway interface connected to the blue virtual wire is configured with IP 192.168.1.1.
• DHCP service is enabled on this internal interface of the vCloud Networking and Security Edge by providing a pool of IP addresses, for example, 192.168.1.10 to 192.168.1.50.
• All the virtual machines connected to the blue virtual wire receive an IP address on the same subnet from the DHCP service configured on the Edge.
• The NAT configuration on the external interface of the vCloud Networking and Security Edge gateway allows virtual machines on a virtual wire to communicate with devices on the external network. This communication is allowed only when the requests are initiated by the virtual machines connected to the internal interface of the vCloud Networking and Security Edge.
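The following short sketch illustrates, in simplified form, the behavior described in the last bullet: outbound connections initiated from the internal (virtual wire) side are translated and tracked, while unsolicited traffic arriving on the external interface is dropped. It is a conceptual model under assumed addresses, not the Edge gateway's actual implementation.

```python
# Conceptual model of source NAT on an Edge-like gateway (illustrative only).
EDGE_EXTERNAL_IP = "172.26.10.10"     # assumed external interface address
nat_sessions = {}                      # (external peer IP, port) -> internal VM IP

def outbound(vm_ip, dst_ip, dst_port):
    """A VM on the internal side initiates a connection: translate the source
    to the Edge external IP and remember the session."""
    nat_sessions[(dst_ip, dst_port)] = vm_ip
    return EDGE_EXTERNAL_IP            # packet leaves with the Edge external source IP

def inbound(src_ip, src_port):
    """Traffic on the external interface is forwarded only if it belongs to a
    session initiated from the inside; otherwise it is dropped (returns None)."""
    return nat_sessions.get((src_ip, src_port))

outbound("192.168.1.10", "203.0.113.5", 443)
print(inbound("203.0.113.5", 443))     # '192.168.1.10' - reply traffic is allowed
print(inbound("198.51.100.7", 25))     # None - unsolicited inbound traffic is dropped
```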
In situations where overlapping IP and MAC address support is required, one vCloud Networking and Security
Edge gateway per tenant is recommended. Figure 14 shows an overlapping IP address deployment with two
tenants and two separate vCloud Networking and Security Edge gateways.
Figure 14. Overlapping IP addresses with two tenants: Tenant 1 (VXLAN 5000) and Tenant 2 (VXLAN 5001) both use subnet 10.10.1.0/24 with gateway 10.10.1.1; each tenant has its own vCloud Networking and Security Edge gateway (external interfaces 10.10.10.1 and 10.10.20.1) connecting to the external network 10.10.0.0/16 over the IP core.
Figure 15. Static routing on the vCloud Networking and Security Edge gateway: VXLAN segments 5000, 5001 and 5002 with subnets 172.26.1.0/24, 172.26.2.0/24 and 172.26.3.0/24 (gateways 172.26.1.1, 172.26.2.1 and 172.26.3.1) connect to internal interfaces of a single Edge gateway, which connects to the external network (172.26.10.0/24) and the Internet.
In the deployment shown in Figure 15, the vCloud Networking and Security Edge gateway is not configured with
the DHCP and NAT services. However, static routes are set up between different interfaces of the vCloud
Networking and Security Edge gateway.
Other Network Services
• In a multitenant environment, the vCloud Networking and Security Edge firewall can also be used to segment inter-tenant and intra-tenant traffic.
• The vCloud Networking and Security Edge load balancer can be used to load balance external-to-internal Web traffic, for example, when multiple Web servers are deployed on the logical network. Static routes must be configured on the upstream router to properly route inbound traffic to the vCloud Networking and Security Edge external interface.
• vCloud Networking and Security Edge also provides DNS relay functionality to resolve domain names. DNS relay configuration should point to an existing DNS server in the physical network. Alternatively, a DNS server can be deployed in the logical network itself.
Consumption Models
After the VXLAN configuration has been completed, customers can create and consume logical L2 networks on
demand. Depending on the type of vCloud Networking and Security bundle purchased, they have the following
three options:
1) Use the vCloud Director interface.
2) Use the vCloud Networking and Security Manager interface.
3) Use REST APIs offered by vCloud Networking and Security products.
In vCloud Director
vCloud Director creates a VXLAN network pool implicitly for each provider VDC backed by VXLAN-prepared clusters. The total number of logical networks that can be created using a VXLAN network pool is determined by the configuration at the time of VXLAN fabric preparation. A cloud administrator can in turn distribute this total number to the various organization VDCs backed by the provider VDC. The quota allocated to an organization VDC determines the number of logical networks (organization VDC/VMware vSphere vApp networks) backed by VXLAN that can be created in that organization VDC.
Using API
In addition to vCloud Director and vCloud Networking and Security Manager, vCloud Networking and Security
components can be managed using APIs provided by VMware. For detailed information on how to use the APIs,
refer to the vCloud Networking and Security 5.1 API Programming Guide at
https://www.vmware.com/pdf/vshield_51_api.pdf.
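As a purely hypothetical illustration of driving the solution through REST, the snippet below issues an authenticated request with Python's requests library. The Manager host name, credentials and URI path are placeholders and are not taken from the API Programming Guide; refer to the guide above for the actual resource paths, request bodies and authentication details.

```python
import requests

# Placeholder values: substitute your vCloud Networking and Security Manager
# address, credentials and a resource path documented in the API Programming Guide.
MANAGER = "https://vsm.example.com"
AUTH = ("admin", "password")

response = requests.get(
    MANAGER + "/api/2.0/services/example",   # illustrative path only; see the API guide
    auth=AUTH,
    headers={"Accept": "application/xml"},   # the 5.1 REST APIs exchange XML payloads
    verify=False,                            # Manager commonly uses a self-signed certificate
)
response.raise_for_status()
print(response.text)
```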
Port Mirroring
VDS provides multiple standard port mirroring features, such as SPAN, RSPAN and ERSPAN, that help with detailed traffic analysis.
Conclusion
The VMware network virtualization solution addresses the current challenges with the physical network
infrastructure and brings flexibility, agility and scale through VXLAN-based logical networks. Along with the
ability to create on-demand logical networks using VXLAN, the vCloud Networking and Security Edge gateway
helps customers deploy various logical network services such as firewall, DHCP, NAT and load balancing on
these networks. The operational tools provided as part of the solution help in the troubleshooting and
monitoring of these overlay networks.
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed
at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies. Item No: VMW-WP-NETWORK-VIRT-GUIDE-USLET-101
Docsource: OIC - 12VM008.07