Importance The information gathered for data center design is comprehensive, including the physical and the virtual infrastructure. You must consider the host and cluster design as well as the overlay and underlay of a data center when recommending solutions for your customers.
Learner Objectives
• Identify the components of a physical design
• Describe the requirements of the underlay network
• Recognize policies and configurations to peer with the physical infrastructure
• Describe the Spine and Leaf design
• Describe the connectivity and design for the host uplinks
• Provide examples of decisions in a physical design
Physical Infrastructure Requirements: MTU
The minimum Maximum Transmission Unit (MTU) to support NSX is 1,600 bytes; 1,700 bytes is recommended:
• This accommodates the default 1,500-byte Ethernet payload plus the Geneve encapsulation header.
An MTU of 9,000 bytes is preferred for the following reasons:
• Future NSX-T Data Center releases might add new options to the encapsulation headers.
• Tenants might raise the MTU for their VMs from 1,500 to as much as 8,800 bytes.
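The arithmetic behind these numbers can be sketched as follows. The header sizes are standard; the Geneve options length is an assumption, because it varies by release and feature set.

```python
# Per-frame Geneve encapsulation overhead (sizes in bytes).
INNER_ETHERNET = 14   # the inner frame's own Ethernet header
GENEVE_BASE    = 8    # fixed Geneve header
GENEVE_OPTIONS = 0    # variable; future releases might add options
OUTER_UDP      = 8
OUTER_IPV4     = 20

def required_underlay_mtu(guest_mtu: int) -> int:
    """Smallest underlay MTU that carries a guest frame without fragmentation."""
    return (guest_mtu + INNER_ETHERNET + GENEVE_BASE
            + GENEVE_OPTIONS + OUTER_UDP + OUTER_IPV4)

print(required_underlay_mtu(1500))  # 1550 -> 1,600/1,700 leaves room for options
print(required_underlay_mtu(8800))  # 8850 -> fits a 9,000-byte underlay MTU
```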
IP Connectivity Per Plane
• Management and control plane: NSX Manager and transport nodes (hypervisors, edge nodes, and bridge nodes). Management IP addresses can be in different subnets (MTU 1,500).
• Data plane: Transport nodes (hypervisors, edge nodes, and bridge nodes). Tunnel Endpoint (TEP) IP addresses can be in different subnets (MTU 1,700).
Transport Nodes: pNICs
Management and transport traffic can use dedicated physical NICs (pNICs). They can also share pNICs on the hypervisors: ESXi or Kernel-based Virtual Machine (KVM). When pNICs are shared, you must ensure that the data plane does not saturate the uplinks and affect the management traffic.
Suggested Addressing for Spine and Leaf Design
Create an IP addressing scheme that aids troubleshooting. The table provides the IP address allocations and VLANs for the compute rack.
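One common pattern is to encode the VLAN ID into the subnet so that any address seen in a capture immediately reveals its function. A minimal sketch of such a scheme; the VLAN numbers are placeholders except 75 (compute transport) and 73 (edge transport), which appear later in this design, and the 10.x address space is an assumption:

```python
# Hypothetical per-rack scheme: 10.<rack>.<vlan>.0/24, so the third octet of
# every address matches its VLAN ID.
RACK_VLANS = {
    "management": 71,   # placeholder VLAN
    "vmotion":    72,   # placeholder VLAN
    "edge-tep":   73,   # edge transport VLAN (from this design)
    "host-tep":   75,   # compute transport VLAN (from this design)
}

def rack_subnet(rack_id: int, vlan: int) -> str:
    """Return the /24 for a given rack and VLAN."""
    return f"10.{rack_id}.{vlan}.0/24"

for function, vlan in RACK_VLANS.items():
    print(f"Rack 1 {function:<10} VLAN {vlan}: {rack_subnet(1, vlan)}")
```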
Other Infrastructure Designs
The Spine and Leaf design is a popular infrastructure because it enables the following features:
• Easy scale-out:
– Compute (add a leaf)
– Throughput (add a spine)
• Uniform access and consistent latency
Both designs work with NSX-T Data Center.
Connectivity of NSX Manager Instances (2)
The placement of NSX Manager cluster nodes must meet the following requirements:
• NSX Manager cluster nodes must run on different hypervisors.
• If vCenter Server is present, DRS anti-affinity rules can enforce this separation.
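A minimal pyVmomi sketch of such an anti-affinity rule, assuming the connection and the lookup of the cluster and VM objects are handled elsewhere; the rule name is a placeholder:

```python
from pyVmomi import vim

def add_nsx_manager_anti_affinity(cluster, manager_vms):
    """Keep the NSX Manager node VMs on different ESXi hosts via DRS."""
    rule = vim.cluster.AntiAffinityRuleSpec(
        name="nsx-manager-anti-affinity",  # placeholder name
        enabled=True,
        mandatory=True,        # hard rule: never co-locate the nodes
        vm=manager_vms,        # the three NSX Manager VM objects
    )
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")]
    )
    # Reconfigure the cluster; returns a vCenter task to wait on.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```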
Physical NICs
Management, storage, and vSphere vMotion traffic can use the same pNIC. Use Quality of Service (QoS) if overlay traffic might saturate the pNIC and starve management traffic.
Transport Node: Compute ESXi
Design considerations:
• In the typical leaf-spine infrastructure, the overlay VLAN and VRRP/HSRP are configured on both ToRs.
• The hypervisor TEPs (vmk10/vmk11) are in that same overlay VLAN.
• The ToR VRRP/HSRP address is the TEP default gateway.
• In the diagram, the ToR on the left is the active ToR for that overlay subnet.
The three host uplink connectivity options compare as follows:

Active-standby (failover order):
• Pros: Simple to implement; deterministic (easy to operate or troubleshoot); correct bandwidth within the rack (no interswitch link used).
• Cons: Bandwidth in/out of the rack is not optimal (all out-of-rack traffic goes through ToR1).

Active-active (HSRP/VRRP) with two TEPs:
• Pros: Both pNICs of the hypervisor are used; ECMP up to the hypervisor.
• Cons: Bandwidth in/out of the rack is not optimal (all out-of-rack traffic goes through ToR1); bandwidth within the rack is not optimal (some traffic uses the interswitch link); requires more TEP IP addresses; not deterministic (harder to operate or troubleshoot).

Active-active (VARP) with MLAG:
• Pros: Both pNICs of the hypervisor are used; limited ToR interswitch link usage; optimal bandwidth in/out of the rack (both ToRs used); optimal bandwidth within the rack (both ToRs used).
• Cons: More complex to implement (requires MLAG and VARP configuration on the fabric); no KVM support; not deterministic (harder to operate or troubleshoot).
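On the NSX-T side, the first two options map to the teaming policy of an uplink profile. A minimal sketch of the payload shapes, with profile and uplink names as placeholders (FAILOVER_ORDER and LOADBALANCE_SRCID are NSX-T teaming policy values; transport VLAN 75 is taken from this design):

```python
# Illustrative uplink-profile payloads for the connectivity options above.
def uplink_profile(name, policy, active, standby=(), vlan=75):
    return {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,                       # placeholder name
        "transport_vlan": vlan,
        "teaming": {
            "policy": policy,
            "active_list": [{"uplink_name": u, "uplink_type": "PNIC"}
                            for u in active],
            "standby_list": [{"uplink_name": u, "uplink_type": "PNIC"}
                             for u in standby],
        },
    }

active_standby = uplink_profile("up-failover", "FAILOVER_ORDER",
                                active=["uplink-1"], standby=["uplink-2"])
active_active  = uplink_profile("up-srcid", "LOADBALANCE_SRCID",
                                active=["uplink-1", "uplink-2"])
# The VARP/MLAG option instead defines a LAG in the profile and uses it as the
# single active uplink; MLAG and VARP themselves are configured on the ToRs.
```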
HY1303-PHYS-NET-001 Use an L3 transport network to offer a neutral data center network.
HY1303-PHYS-NET-002 Use redundant physical switches.
HY1303-PHYS-NET-003 Use a minimum of 2 x 10 GigE (or faster) ports for host uplinks. Use 4 x 10 GigE (or faster) uplinks if you want to isolate infrastructure and overlay traffic.
HY1303-PHYS-NET-004 Use VLANs to segment physical network functions.
HY1303-PHYS-NET-005 Configure switch ports that connect to transport nodes as trunk ports and configure all necessary VLANs.
HY1303-PHYS-NET-006 Configure STP edge mode (PortFast) on any port that is connected to a transport node to reduce the time taken to transition ports to the forwarding state.
HY1303-PHYS-NET-007 Configure the MTU size to at least 9,000 bytes on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS instances that support Geneve traffic.
HY1303-PHYS-NET-008 Use a physical network that is configured for BGP routing adjacency.
HY1303-PHYS-NET-009 Use an NTP time source to maintain accurate and synchronized time in the infrastructure.
HY1303-PHYS-NET-010 Create DNS records (forward and reverse) for all management nodes and VMs.
Review of Learner Objectives
• Identify the components of a physical design
• Describe the requirements of the underlay network
• Recognize policies and configurations to peer with the physical infrastructure
• Describe the Spine and Leaf design
• Describe the connectivity and design for the host uplinks
• Provide examples of decisions in a physical design
Network Virtualization Conceptual Design (1)
The conceptual design typically includes a high-level view of the network, focusing on functions that support business requirements:
• External networks: Connectivity to and from external networks is through the perimeter firewall.
• Perimeter firewall: The firewall exists at the perimeter of the data center to filter external data center traffic.
• Upstream L3 devices: These devices are behind the perimeter firewall and handle North-South traffic that is entering and leaving the NSX-T Data Center environment. Usually, this layer includes a pair of ToR switches or redundant upstream L3 devices, such as core routers.
Network Virtualization Conceptual Design (2)
• NSX-T Data Center logical service router (SR): The SR component of the NSX-T Data Center Tier-0 logical router is responsible for establishing eBGP peering with the upstream Layer 3 device and enabling North-South routing.
• NSX-T Data Center logical distributed router (DR): The DR component of the NSX-T Data Center logical router is responsible for East-West routing.
• Management network: This network is VLAN-backed and supports all management components, such as NSX Manager instances.
• Internal workload networks: The NSX-T Data Center logical switches provide connectivity for the tenant workloads. Workloads are directly connected to these networks. Internal workload networks are then connected to a DR.
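Because the SR establishes the eBGP peering, that peering is defined on the Tier-0 gateway. A minimal sketch using the NSX-T Policy API paths as documented for 3.x; the manager address, credentials, gateway ID, neighbor addresses, and AS numbers are all placeholders:

```python
import requests

NSX = "https://nsx-manager.example.com"   # placeholder manager address
AUTH = ("admin", "password")              # placeholder credentials

# Enable BGP on the Tier-0 gateway's locale services (example local AS).
requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/t0-gw/locale-services/default/bgp",
    auth=AUTH, verify=False,
    json={"enabled": True, "local_as_num": "65001"},
)

# Define one eBGP neighbor per upstream L3 device (example address and AS).
requests.put(
    f"{NSX}/policy/api/v1/infra/tier-0s/t0-gw/locale-services/default"
    "/bgp/neighbors/tor-a",
    auth=AUTH, verify=False,
    json={"neighbor_address": "192.168.254.1", "remote_as_num": "65000"},
)
```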
Deployment Considerations
NSX-T Data Center management components require only VLANs and IP connectivity, so they can coexist with any supported hypervisor in each release.
For predictable operational consistency, reserve resources for all management and edge node VM elements, including vCenter Server, NSX Manager, and NSX Edge node VMs.
An NSX Virtual Distributed Switch can have only one teaming policy.
An NSX Edge node VM has an embedded NSX Virtual Distributed Switch that encapsulates overlay traffic for the guest VMs:
• The NSX Edge node VM does not require its hypervisor to be prepared for the NSX-T Data Center overlay network.
• A VLAN and a proper MTU are the only requirements.
• This flexibility allows the NSX Edge node VM to be deployed in either a dedicated or a shared cluster.
Learner Objectives
• Describe cluster design
• Explain cluster design with collapsed management and edge resources
• Describe the multiple vCenter Server domains in collapsed management and edge resources design
• Describe the multiple vCenter Server domains with shared edge and compute resources
Collapsed Compute and Edge Cluster: Edge Node VMs on VDS
The compute TEP and the edge TEP must be in different VLANs:
• VLAN 75 is the transport VLAN used for the compute hosts.
• VLAN 73 is the transport VLAN used for the edge nodes.
Review of Learner Objectives
• Describe cluster design
• Explain cluster design with collapsed management and edge resources
• Describe the multiple vCenter Server domains in collapsed management and edge resources design
• Describe the multiple vCenter Server domains with shared edge and compute resources
Summary of Design Decisions: Virtual Infrastructure
Design Decision ID Design Decision Description
HY1303-VI-VC-001 Place the management and edge functions in the same rack.
HY1303-VI-VC-002 Create a shared management and edge cluster with a minimum of four ESXi hosts.
HY1303-VI-VC-003 NSX Manager cluster and NSX Edge VMs reside in the shared management and edge cluster.
HY1303-VI-VC-004 Create a resource pool for the three NSX Manager nodes and two large-sized edge VMs with a CPU share level of high, a memory share level of normal, and a 104 GB memory reservation.
HY1303-VI-VC-005 Create a resource pool for all other management VMs on the shared management and edge cluster with a CPU share value of normal and a memory share value of normal.
HY1303-VI-VC-006 Use vSphere HA to protect the shared management and edge cluster against failures.
HY1303-VI-VC-007 Create a host profile for the shared management and edge cluster.
HY1303-VI-VC-008 Create a host profile for the compute clusters.
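Decision HY1303-VI-VC-004 translates directly into resource pool settings. A minimal pyVmomi sketch, assuming the cluster object is already looked up; the pool name is a placeholder:

```python
from pyVmomi import vim

def create_nsx_resource_pool(cluster, name="nsx-mgmt-edge"):
    """Resource pool with CPU shares high, memory shares normal, 104 GB reserved."""
    spec = vim.ResourceConfigSpec(
        cpuAllocation=vim.ResourceAllocationInfo(
            shares=vim.SharesInfo(level=vim.SharesInfo.Level.high),
            reservation=0, limit=-1, expandableReservation=True,
        ),
        memoryAllocation=vim.ResourceAllocationInfo(
            shares=vim.SharesInfo(level=vim.SharesInfo.Level.normal),
            reservation=104 * 1024,   # 104 GB, expressed in MB
            limit=-1, expandableReservation=True,
        ),
    )
    # Create the pool under the cluster's root resource pool.
    return cluster.resourcePool.CreateResourcePool(name=name, spec=spec)
```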
Summary of Design Decisions: Virtual Infrastructure Transport Node
Design Decision ID Design Decision Description
HY1303-VI-SDN-020 Define a transport node profile to capture the configuration required to create the ESXi host transport nodes in the vSphere compute clusters. Using a transport node profile provides configuration consistency across all ESXi host transport nodes.
HY1303-VI-SDN-021 Add all hypervisors in the compute clusters to the NSX-T Data Center fabric.
HY1303-VI-SDN-022 Add all ESXi hosts in the compute cluster as transport nodes by applying the transport node profile to the vSphere cluster objects.
HY1303-VI-SDN-023 Add all KVM fabric nodes as transport nodes.
HY1303-VI-SDN-024 Add all edge VMs as transport nodes.
HY1303-VI-SDN-025 Edge functions and services are provided by the NSX Edge VM.
HY1303-VI-SDN-026 Use large-size NSX Edge VMs.
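Applying a transport node profile to a vSphere cluster object (decisions HY1303-VI-SDN-020 and -022) is done by creating a transport node collection. A minimal sketch against the NSX-T Manager API as documented for 2.4 and later; the manager address, credentials, and IDs are placeholders:

```python
import requests

NSX = "https://nsx-manager.example.com"   # placeholder manager address
AUTH = ("admin", "password")              # placeholder credentials

# Attach the profile to the vCenter cluster's compute collection; NSX-T then
# prepares every host in the cluster as a transport node.
requests.post(
    f"{NSX}/api/v1/transport-node-collections",
    auth=AUTH, verify=False,
    json={
        "resource_type": "TransportNodeCollection",
        "display_name": "compute-cluster-01-tnc",            # placeholder
        "compute_collection_id": "<compute-collection-id>",  # from the compute manager
        "transport_node_profile_id": "<transport-node-profile-id>",
    },
)
```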
Summary of Design Decisions: Virtual Infrastructure Cluster
Design Decision ID Design Decision Description
HY1303-VI-SDN-001 Deploy a three-node NSX Manager cluster to provide high availability and scale. This NSX Manager cluster is used to configure and manage all compute clusters based on NSX-T Data Center in a single region.
HY1303-VI-SDN-002 Deploy the three-node NSX Manager cluster by using the medium-size virtual appliance.
HY1303-VI-SDN-003 Create a virtual IP for the NSX Manager cluster to provide high availability for the NSX Manager UI and API.
HY1303-VI-SDN-004 Replace the NSX Manager certificate with a certificate signed by a third-party public key infrastructure.
HY1303-VI-SDN-005 Use the internal configuration backup of NSX-T Data Center and schedule an automatic backup with a frequency interval of one hour.
HY1303-VI-SDN-006 Configure the vCenter Server systems as compute managers in NSX-T Data Center.
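Decision HY1303-VI-SDN-005 (hourly automatic backups) corresponds to the cluster backup configuration. A minimal sketch of the schedule portion, assuming the NSX-T API endpoint PUT /api/v1/cluster/backups/config; a complete request also needs the remote file server and passphrase settings, which are omitted here, and the manager address and credentials are placeholders:

```python
import requests

NSX = "https://nsx-manager.example.com"   # placeholder manager address
AUTH = ("admin", "password")              # placeholder credentials

backup_config = {
    "backup_enabled": True,
    "backup_schedule": {
        "resource_type": "IntervalBackupSchedule",
        "seconds_between_backups": 3600,  # one-hour interval
    },
    # "remote_file_server": {...},  # SFTP target, required in a real request
    # "passphrase": "...",          # encrypts the backup, required
}
requests.put(f"{NSX}/api/v1/cluster/backups/config",
             auth=AUTH, verify=False, json=backup_config)
```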
Review of Learner Objectives
• Describe dedicated management and edge cluster design
• Describe the North-South routing and edge node physical placement
• Explain the recommended design for high-performance clusters
Key Points
• Although the minimum requirement for functionality is 1,600 bytes, an MTU size of 9,000 bytes is required to support future NSX-T Data Center releases.
• A Spine and Leaf design is easily scalable and has deterministic latency.
• Active-standby, active-active (HSRP/VRRP), and active-active (VARP) are the options available to connect to the fabric.
• Because NSX-T Data Center management components require only VLANs and IP connectivity, they can coexist with any supported hypervisor.
• Bridging services do not affect the active-standby service mode.
• Bridging services are enabled per logical switch.
Questions?