
Module 4: NSX-T Data Center

Design Considerations

© 2020 VMware, Inc.


Importance
The information gathered for data center design is comprehensive, including the physical and the virtual infrastructure.
You must consider the host and cluster design as well as the overlay and underlay of a data center when recommending
solutions for your customers.



Module Lessons
1. Physical Infrastructure Design
2. Compute Host Cluster Design
3. Collapsed Management and Edge Resources Design
4. Dedicated Management and Edge Resources Design



Lesson 1: Physical Infrastructure Design



Learner Objectives
• Identify the components of a physical design
• Describe the requirements of the underlay network
• Recognize policies and configurations to peer with the physical infrastructure
• Describe the Spine and Leaf design
• Describe the connectivity and design for the host uplinks
• Provide examples of decisions in a physical design



Physical Infrastructure Requirements: MTU
The minimum Maximum Transmission Unit (MTU) setting to support NSX-T Data Center is 1,700 bytes:
• This accommodates the default 1,500-byte Ethernet payload plus the Geneve encapsulation header.
Reasons to configure an MTU of 9,000 bytes instead:
• Future NSX-T Data Center releases might add new fields to the encapsulation headers.
• Tenants might increase the MTU of their VMs from 1,500 to 8,800 bytes.
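The arithmetic behind these numbers can be sketched in a few lines of Python. This is a planning aid only: the 200-byte headroom is inferred from the figures on this slide (1,500 to 1,700 and 8,800 to 9,000), not an exact Geneve header size.

```python
# Minimal sketch: underlay MTU planning for Geneve overlay traffic.
# The 200-byte headroom follows the figures on this slide (1,500 -> 1,700 and
# 8,800 -> 9,000); it is a planning budget, not an exact Geneve header size.

GENEVE_HEADROOM = 200  # bytes reserved for outer IP/UDP/Geneve headers and future options


def required_underlay_mtu(vm_mtu: int, headroom: int = GENEVE_HEADROOM) -> int:
    """Return the minimum physical/underlay MTU for a given tenant VM MTU."""
    return vm_mtu + headroom


if __name__ == "__main__":
    for vm_mtu in (1500, 8800):
        print(f"VM MTU {vm_mtu} -> underlay MTU >= {required_underlay_mtu(vm_mtu)}")
    # VM MTU 1500 -> underlay MTU >= 1700
    # VM MTU 8800 -> underlay MTU >= 9000
```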



IP Connectivity Per Plane
Management and control plane: The management IP addresses of NSX Manager and the transport nodes (hypervisors, edge nodes, and bridge nodes) can be in different subnets (MTU 1,500).
Data plane: The Tunnel Endpoint (TEP) IP addresses of the transport nodes (hypervisors, edge nodes, and bridge nodes) can be in different subnets (MTU 1,700 or greater).



Transport Nodes: pNICs
Management and transport can use dedicated physical NICs (pNICs).
Management and transport can also share pNICs for hypervisors: ESXi or Kernel-based Virtual Machine (KVM).
You must ensure that the data plane does not saturate the uplinks and affect the management traffic.



Use Case: Spine and Leaf Design
All links between the spine and leaf switches are forwarding. For small deployments, all racks can be combined.



Suggested Addressing for Spine and Leaf Design
Create an IP scheme that aids troubleshooting. The table provides the IP address allocations and VLANs for the compute rack.

Function            VLAN ID    IP Address
Management          10         10.X.Rack-ID.0/26
vSphere vMotion     11         10.X.Rack-ID.64/26
Geneve              12         10.X.Rack-ID.128/26
vSAN                13         10.X.Rack-ID.192/26
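The same scheme can be generated programmatically. The sketch below uses only the Python standard library; the site octet (X) and rack ID values are placeholders for your own plan.

```python
# Illustrative sketch of the per-rack addressing scheme in the table above.
# The site octet (X) and rack ID are placeholders; adjust to your own plan.
import ipaddress

FUNCTIONS = [("Management", 10), ("vSphere vMotion", 11), ("Geneve", 12), ("vSAN", 13)]


def rack_plan(site_octet: int, rack_id: int):
    """Split 10.<site>.<rack>.0/24 into four /26 subnets, one per function/VLAN."""
    rack_block = ipaddress.ip_network(f"10.{site_octet}.{rack_id}.0/24")
    subnets = list(rack_block.subnets(new_prefix=26))  # .0, .64, .128, .192
    return [(name, vlan, subnet) for (name, vlan), subnet in zip(FUNCTIONS, subnets)]


if __name__ == "__main__":
    for name, vlan, subnet in rack_plan(site_octet=1, rack_id=5):
        print(f"{name:16} VLAN {vlan:<4} {subnet}")
```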



Other Infrastructure Designs
The Spine and Leaf design is a popular infrastructure because it provides the following benefits:
• Ease of scaling:
– Compute (add a leaf)
– Throughput (add a spine)
• Uniform access and consistent latency
Other infrastructure designs also work with NSX-T Data Center, which requires only IP connectivity and sufficient MTU from the underlay.



Connectivity of NSX Manager Instances (1)
NSX Manager:
• Runs as a VM on a hypervisor (ESXi or KVM)
• Has a single vNIC attached to the management network



Connectivity of NSX Manager Instances (2)
The placement of the NSX Manager cluster nodes must meet the following requirements:
• NSX Manager cluster nodes must be on different hypervisors.
• If vCenter Server is available, DRS anti-affinity rules can be used to enforce this separation.
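A hedged pyVmomi sketch of such a rule follows. It assumes the cluster object and the three NSX Manager VM objects have already been looked up (those helpers are not shown); verify the calls against your pyVmomi version.

```python
# Hedged pyVmomi sketch: keep the three NSX Manager nodes on different hosts with a
# DRS anti-affinity rule. Assumes `cluster` (vim.ClusterComputeResource) and
# `manager_vms` (list of three vim.VirtualMachine objects) were looked up elsewhere.
from pyVmomi import vim


def add_manager_anti_affinity(cluster, manager_vms, rule_name="nsx-manager-anti-affinity"):
    rule = vim.cluster.AntiAffinityRuleSpec(
        name=rule_name,
        enabled=True,
        mandatory=True,        # hard rule: never place two managers on the same host
        vm=manager_vms,
    )
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)]
    )
    # Apply the rule to the existing cluster configuration.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```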



Physical NICs
Management, storage, and vSphere vMotion traffic can use the same pNIC.
You can use Quality of Service (QoS) if overlay traffic might saturate the pNIC and starve management traffic.



Connectivity to Fabric
Several options are available for connecting transport hosts to the fabric.



Connectivity to Fabric: Active-Standby
Both ESXi and KVM support the active-standby connectivity option.



Connectivity to Fabric: Active-Active (HSRP/VRRP)
An active-active (HSRP/VRRP) design is supported but is not available in KVM.



Transport Node: Compute ESXi
Design Considerations:
• In the typical leaf-spine infrastructure, the overlay
VLAN and VRRP/HSRP are configured on both ToRs.
• The hypervisor TEPs (vmk10/vmk11) are in that same
overlay VLAN.
• The ToR VRRP/HSRP address is the TEP default
gateway.
• In the diagram, the ToR on the left is the active ToR for
that overlay subnet.
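A small sketch of this addressing arrangement follows, reusing the Geneve /26 from the addressing table earlier in this lesson. The specific host TEP assignments and the choice of the first usable address as the VRRP/HSRP virtual IP are assumptions for illustration.

```python
# Minimal sketch: the ToR VRRP/HSRP virtual IP acts as the default gateway for the
# host TEPs (vmk10/vmk11), and all of them live in the same overlay VLAN subnet.
# Host addresses and the .129 VIP are assumptions, not values from the slide.
import ipaddress

overlay_subnet = ipaddress.ip_network("10.1.5.128/26")        # Geneve VLAN, example rack
hosts = list(overlay_subnet.hosts())

tep_gateway = hosts[0]                                        # VRRP/HSRP VIP, e.g. 10.1.5.129
host_teps = {"esxi-01": {"vmk10": hosts[1], "vmk11": hosts[2]},
             "esxi-02": {"vmk10": hosts[3], "vmk11": hosts[4]}}

# Every TEP must share the overlay subnet with its default gateway.
for host, vmks in host_teps.items():
    for vmk, ip in vmks.items():
        assert ip in overlay_subnet and tep_gateway in overlay_subnet
        print(f"{host} {vmk}: {ip}/26, gateway {tep_gateway}")
```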



Connectivity to Fabric: Active-Active (VARP)
ESXi supports the active-active (VARP) connectivity option.



Connectivity to Fabric Options: Advantages
The different connectivity options have several advantages.

Active-Standby:
• Simple to implement
• Deterministic (easy to operate or troubleshoot)
• Correct bandwidth within the rack (no interswitch link used)

Active-Active (HSRP/VRRP):
• 2 pNICs of the hypervisor used
• Optimal bandwidth within the rack (both ToRs used)

Active-Active (VARP):
• 2 pNICs of the hypervisor used
• ECMP up to the hypervisor
• Limited ToR interswitch link usage
• Optimal bandwidth in and out of the rack (both ToRs used)



Connectivity to Fabric Options: Disadvantages
The different connectivity options have several disadvantages.

Active-Standby:
• Bandwidth in and out of the rack not optimal (all out-of-rack traffic goes through ToR1)

Active-Active (HSRP/VRRP):
• Bandwidth in and out of the rack not optimal (all out-of-rack traffic goes through ToR1)
• Bandwidth within the rack not optimal (some traffic uses the interswitch link)
• Requires more TEP IP addresses
• Not deterministic (harder to operate or troubleshoot)
• No KVM support

Active-Active (VARP):
• More complex to implement (requires MLAG and VARP configuration on the fabric)
• Not deterministic (harder to operate or troubleshoot)



ESXi Compute Transport Node: 2 x 10/25 pNIC
Teaming policy is based on the type of traffic.



ESXi Compute Transport Node: 4 x 10 Design
In this design, the NSX Virtual Distributed Switch (N-VDS) and the vSphere Distributed Switch coexist on the host, and traffic is split between them to separate infrastructure and overlay traffic.



KVM Compute Rack: Teaming Policy



Summary of Physical Design Decisions (1)

Design Decision ID Design Decision Description


HY1303-PHYS-NET-001 Use an L3 transport network to offer a neutral data center network.
HY1303-PHYS-NET-002 Use redundant physical switches.
HY1303-PHYS-NET-003 Use minimum 2 x 10 GigE (or faster) ports for host uplinks. Use 4 x 10 GigE
(or faster) uplinks if you want infrastructure and overlay traffic isolation.
HY1303-PHYS-NET-004 Use VLAN to segment physical network functions.
HY1303-PHYS-NET-005 Configure switch ports that connect to transport nodes as trunk ports and
configure all necessary VLANs.
HY1303-PHYS-NET-006 Configure the switch ports that connect to transport nodes as Spanning Tree Protocol (STP) edge ports (PortFast or equivalent) to reduce the time taken to transition the ports to the forwarding state.



Summary of Physical Design Decisions (2)

Design Decision ID Design Decision Description


HY1303-PHYS-NET-007 Configure the MTU size to at least 9,000 on the physical switch ports, vSphere
Distributed Switches, vSphere Distributed Switch port groups, and N-VDS that
support Geneve traffic.
HY1303-PHYS-NET-008 Use a physical network that is configured for BGP routing adjacency.
HY1303-PHYS-NET-009 Use an NTP time source to maintain accurate and synchronized time in the
infrastructure.
HY1303-PHYS-NET-010 Create DNS records (forward and reverse) for all management nodes and
VMs.
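Decision HY1303-PHYS-NET-007 is easy to get wrong on a single switch port, so an end-to-end check is worthwhile. The sketch below assumes a Linux host with iputils ping and reachable TEP addresses (the targets shown are examples); on ESXi hosts, vmkping with a large payload and the don't-fragment option serves the same purpose.

```python
# Hedged validation sketch for HY1303-PHYS-NET-007: confirm that jumbo frames pass
# end to end without fragmentation. Assumes a Linux host with iputils ping; payload
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header). Target TEP addresses are examples.
import subprocess

JUMBO_PAYLOAD = 9000 - 20 - 8  # 8972 bytes


def jumbo_path_ok(target_ip: str, count: int = 3) -> bool:
    """Ping with the don't-fragment bit set and a jumbo-sized payload."""
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(JUMBO_PAYLOAD), "-c", str(count), target_ip],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    for tep in ("10.1.5.130", "10.1.6.130"):
        print(tep, "jumbo OK" if jumbo_path_ok(tep) else "fragmentation/MTU problem")
```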



Review of Learner Objectives
• Identify the components of a physical design
• Describe the requirements of the underlay network
• Recognize policies and configurations to peer with the physical infrastructure
• Describe the Spine and Leaf design
• Describe the connectivity and design for the host uplinks
• Provide examples of decisions in a physical design



Lesson 2: Compute Host Cluster Design



Learner Objectives
• Understand the network virtualization conceptual design
• Describe deployment considerations
• Understand the compute host cluster design with ESXi and KVM



Network Virtualization Conceptual Design (1)
The conceptual design typically includes a high-level view
of the network, focusing on functions that support business
requirements:
• External networks: Connectivity to and from external
networks is through the perimeter firewall.
• Perimeter firewall: The firewall exists at the perimeter
of the data center to filter external data center traffic.
• Upstream L3 devices: These devices are behind the
perimeter firewall and handle North-South traffic that
is entering and leaving the NSX-T Data Center
environment. Usually, this layer includes a pair of ToR
switches or redundant upstream L3 devices, such as
core routers.



Network Virtualization Conceptual Design (2)
• NSX-T Data Center logical service router (SR): The SR
component of the NSX-T Data Center Tier-0 logical
router is responsible for establishing eBGP peering
with the upstream Layer 3 device and enabling North-
South routing.
• NSX-T Data Center logical distributed router (DR):
The DR component of the NSX-T Data Center logical
router is responsible for East-West routing.
• Management network: This network is VLAN-backed
and supports all management components, such as
NSX Manager instances.
• Internal workload networks: The NSX-T Data Center
logical switches provide connectivity for the tenant
workloads. Workloads are directly connected to these
networks. Internal workload networks are then
connected to a DR.
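The eBGP peering between the Tier-0 SR and the upstream Layer 3 device described above is typically declared through the NSX-T Policy API. The sketch below is illustrative only: the URL path, object IDs (t0-gw, default, tor-a), addresses, and field names are assumptions that should be checked against the NSX-T API guide for your release.

```python
# Hedged sketch: declare an eBGP neighbor on the Tier-0 SR toward an upstream L3
# device through the NSX-T Policy API. The URL path, IDs ("t0-gw", "default") and
# field names are assumptions -- verify them against the API guide for your release.
import requests

NSX = "https://nsx-mgr.example.local"          # NSX Manager cluster VIP (example)
AUTH = ("admin", "********")                    # use a service account in practice

neighbor = {
    "neighbor_address": "172.16.10.1",          # upstream ToR/core router (example)
    "remote_as_num": "65001",                   # upstream eBGP AS (example)
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/t0-gw/locale-services/default/bgp/neighbors/tor-a",
    json=neighbor,
    auth=AUTH,
    verify=False,                               # lab only; use the signed certificate in production
)
resp.raise_for_status()
print("BGP neighbor request accepted:", resp.status_code)
```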



Deployment Considerations
NSX-T Data Center management components require only VLANs and IP connectivity. These components can coexist
with any supported hypervisor in each release.
For predictable operational consistency, resources of all management and edge node VM elements must be reserved,
including vCenter Server, NSX Manager, and NSX Edge node VMs.
An NSX Virtual Distributed Switch can have only one teaming policy.
An NSX Edge node VM has an embedded NSX Virtual Distributed Switch that encapsulates overlay traffic for the
guest VMs:
• The NSX Edge node VM does not require a hypervisor to be prepared for the NSX-T Data Center overlay network.
• A VLAN and proper MTU are the only requirements.
• These minimal requirements allow the NSX Edge node VM to be deployed in either a dedicated or a shared cluster.



Small Deployments: Single-Rack ESXi with Dedicated Compute



Small Deployments: Single-Rack ESXi with No Dedicated Compute



Small Deployments: Single-Rack KVM with Dedicated Compute



Transport Node Design: ESXi and KVM



Review of Learner Objectives
• Understand the network virtualization conceptual design
• Describe deployment considerations
• Understand the compute host cluster design with ESXi and KVM



Lesson 3: Collapsed Management and Edge
Resources Design



Learner Objectives
• Describe cluster design
• Explain cluster design with collapsed management and edge resources
• Describe the multiple vCenter Server domains in collapsed management and edge resources design
• Describe the multiple vCenter Server domains with shared edge and compute resources



Collapsed Cluster Design
The diagram shows a collapsed management and edge resource design.



North-South Routing and Edge VM on Compute Node



Multiple vCenter Server Domains: Shared Management and Edge Resources



2 x 10 Design: Single Cluster with Collapsed Management and Edge VM Resources



Multiple vCenter Server Domains: Shared Edge and Compute Resources



Dedicated Edge Cluster, Edge VM on N-VDS: Not Workable
The diagram shows an edge node VM on an NSX Virtual Distributed Switch.



Collapsed Compute and Edge Cluster: Edge Node VMs on VDS
The compute TEPs and the edge TEPs must be in different VLANs. VLAN 75 is the transport VLAN used for the compute hosts, and VLAN 73 is the transport VLAN used for the edge nodes.



Review of Learner Objectives
• Describe cluster design
• Explain cluster design with collapsed management and edge resources
• Describe the multiple vCenter Server domains in collapsed management and edge resources design
• Describe the multiple vCenter Server domains with shared edge and compute resources



Lesson 4: Dedicated Management and Edge
Resources Design



Learner Objectives
• Describe dedicated management and edge cluster design
• Describe the North-South routing and edge node physical placement
• Explain the recommended design for high-performance clusters



Dedicated Cluster Design
The diagram shows a dedicated management and edge resources design.



North-South Routing and Edge Node Physical Placement



Heterogeneous Compute Domain



Enterprise ESXi: High Performance for Services



Summary of Design Decisions: Virtual Infrastructure

Design Decision ID Design Decision Description


HY1303-VI-VC-001 Place the management and edge functions in the same rack.
HY1303-VI-VC-002 Create a shared management and edge cluster with a minimum of four ESXi hosts.
HY1303-VI-VC-003 NSX Manager cluster and NSX Edge VMs reside in a shared management and edge
cluster.
HY1303-VI-VC-004 Create a resource pool for the three NSX Manager nodes and the two large-size edge VMs with a CPU share level of high, a memory share level of normal, and a 104 GB memory reservation.
HY1303-VI-VC-005 Create a resource pool for all other management VMs on the shared management and
edge cluster with a CPU share value of normal and a memory share value of normal.
HY1303-VI-VC-006 Use vSphere HA to protect the shared management and edge cluster against failures.
HY1303-VI-VC-007 Create a host profile for the shared management and edge cluster.
HY1303-VI-VC-008 Create a host profile for the compute clusters.
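Decision HY1303-VI-VC-004 can be implemented with pyVmomi. The sketch below assumes the cluster object has already been retrieved and that the CPU reservation is left at zero (the decision specifies only the share levels and the 104 GB memory reservation); verify the property names against your pyVmomi version.

```python
# Hedged pyVmomi sketch for HY1303-VI-VC-004: a resource pool for the three NSX
# Manager nodes and two large-size edge VMs. Assumes `cluster` is an already
# retrieved vim.ClusterComputeResource; the pool name and CPU figures are assumptions.
from pyVmomi import vim


def create_mgmt_edge_pool(cluster, name="nsx-management-and-edge"):
    spec = vim.ResourceConfigSpec(
        cpuAllocation=vim.ResourceAllocationInfo(
            shares=vim.SharesInfo(level="high"),     # CPU share level: high
            reservation=0, limit=-1, expandableReservation=True,
        ),
        memoryAllocation=vim.ResourceAllocationInfo(
            shares=vim.SharesInfo(level="normal"),   # memory share level: normal
            reservation=104 * 1024,                  # 104 GB, expressed in MB
            limit=-1, expandableReservation=True,
        ),
    )
    # Create the pool directly under the cluster's root resource pool.
    return cluster.resourcePool.CreateResourcePool(name=name, spec=spec)
```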



Summary of Design Decisions: Virtual Infrastructure Transport Node

Design Decision ID Design Decision Description


HY1303-VI-SDN-020 Define a transport node profile to capture the configuration required to create the
ESXi host transport nodes in the vSphere compute clusters. Using a transport node
profile provides configuration consistency across all ESXi host transport nodes.
HY1303-VI-SDN-021 Add all hypervisors in the compute clusters to the NSX-T Data Center fabric.
HY1303-VI-SDN-022 Add all ESXi hosts as transport nodes in the compute cluster by applying the transport
node profile to the vSphere cluster objects.
HY1303-VI-SDN-023 Add all KVM fabric nodes as transport nodes.
HY1303-VI-SDN-024 Add all edge VMs as transport nodes.
HY1303-VI-SDN-025 Edge functions and services are provided by the NSX Edge VM.
HY1303-VI-SDN-026 Use large-size NSX Edge VMs.



Summary of Design Decisions: Virtual Infrastructure Cluster

Design Decision ID Design Decision Description


HY1303-VI-SDN-001 Deploy a three-node NSX Manager cluster to provide high availability and scale. This
NSX Manager cluster is used to configure and manage all compute clusters based on
NSX-T Data Center in a single region.
HY1303-VI-SDN-002 Deploy a three-node NSX Manager cluster by using the medium size virtual
appliance.
HY1303-VI-SDN-003 Create a virtual IP for the NSX Manager cluster to provide high availability for the
NSX Manager UI and API.
HY1303-VI-SDN-004 Replace the NSX Manager certificate with a certificate signed by a third-party public
key infrastructure.
HY1303-VI-SDN-005 Use the internal configuration backup of NSX-T Data Center and schedule an
automatic backup with a frequency interval of one hour.
HY1303-VI-SDN-006 Configure the vCenter Server systems as compute managers in NSX-T Data
Center.
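Decision HY1303-VI-SDN-003 (the cluster virtual IP) can also be set through the API. The endpoint and query parameters in the sketch below are assumptions to verify against the NSX-T API guide; the addresses and credentials are placeholders.

```python
# Hedged sketch for HY1303-VI-SDN-003: set the cluster virtual IP that fronts the
# NSX Manager UI/API. Endpoint and query parameters are assumptions to verify
# against the NSX-T API guide; addresses and credentials are placeholders.
import requests

NSX_NODE = "https://nsx-mgr-01.example.local"   # any one manager node
AUTH = ("admin", "********")
CLUSTER_VIP = "10.1.1.100"

resp = requests.post(
    f"{NSX_NODE}/api/v1/cluster/api-virtual-ip",
    params={"action": "set_virtual_ip", "ip_address": CLUSTER_VIP},
    auth=AUTH,
    verify=False,                               # lab only; use CA-signed certificates in production
)
resp.raise_for_status()
print("Cluster VIP request accepted:", resp.status_code)
```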



Lab 2: Underlay Design
Review and document the customer’s existing physical network:
1. Read the Customer Underlay Design
2. Document the Customer Underlay Design



Lab 3: Virtual Infrastructure Design
Document the customer’s virtual infrastructure design:
1. Document the Virtual Infrastructure Design



Review of Learner Objectives
• Describe dedicated management and edge cluster design
• Describe the North-South routing and edge node physical placement
• Explain the recommended design for high-performance clusters



Key Points
• Though the minimum MTU for basic functionality is 1,600 bytes, configure an MTU of 9,000 bytes to accommodate future NSX-T Data Center encapsulation changes and larger tenant VM MTUs.
• A Spine and Leaf design is easily scalable and has deterministic latency.
• Active-standby, active-active (HSRP/VRRP), and active-active (VARP) are the options available to connect to the
fabric.
• Because NSX-T Data Center management components require only VLANs and IP connectivity, they can co-exist
with any supported hypervisor.
• Bridging services do not affect the active-standby service mode.
• Bridging services are enabled per logical switch.
Questions?

