Design Guide
01 JUN 2023
VMware Cloud Foundation 5.0
VMware Cloud Foundation Design Guide
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© Copyright 2021-2023 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
High Availability Design for vCenter Server for VMware Cloud Foundation
vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation
vCenter Single Sign-On Design Requirements for VMware Cloud Foundation
vSphere Cluster Design for VMware Cloud Foundation
Logical vSphere Cluster Design for VMware Cloud Foundation
vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation
vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation
vSphere Networking Design for VMware Cloud Foundation
Logical vSphere Networking Design for VMware Cloud Foundation
vSphere Networking Design Recommendations for VMware Cloud Foundation
SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation
10 vRealize Suite Lifecycle Manager Design for VMware Cloud Foundation
Logical Design for vRealize Suite Lifecycle Manager for VMware Cloud Foundation
Network Design for vRealize Suite Lifecycle Manager
Data Center and Environment Design for vRealize Suite Lifecycle Manager
Locker Design for vRealize Suite Lifecycle Manager
vRealize Suite Lifecycle Manager Design Requirements and Recommendations for VMware Cloud Foundation
Workspace ONE Access Design Elements for VMware Cloud Foundation
Life Cycle Management Design Elements for VMware Cloud Foundation
Information Security Design Elements for VMware Cloud Foundation
About VMware Cloud Foundation Design Guide
The VMware Cloud Foundation Design Guide contains a design model for VMware Cloud
Foundation (also called VCF) that is based on industry best practices for SDDC implementation.
The VMware Cloud Foundation Design Guide provides the supported design options for VMware
Cloud Foundation, and a set of decision points, justifications, implications, and considerations for
building each component.
Intended Audience
This VMware Cloud Foundation Design Guide is intended for cloud architects who are familiar
with and want to use VMware Cloud Foundation to deploy and manage an SDDC that meets the
requirements for capacity, scalability, backup and restore, and extensibility for disaster recovery
support.
To apply this VMware Cloud Foundation Design Guide, you must be acquainted with the Getting
Started with VMware Cloud Foundation documentation and with the VMware Cloud Foundation
Release Notes. See VMware Cloud Foundation documentation.
For performance best practices for vSphere, see Performance Best Practices for VMware
vSphere 8.0 Update 1.
Design Elements
This VMware Cloud Foundation Design Guide contains requirements and recommendations for the design of each component of the SDDC. Where a configuration choice exists, requirements and recommendations are provided for each choice. Implement only the ones that are relevant to your target configuration.
n Single VMware Cloud Foundation instance with multiple availability zones (also known as
stretched deployment). The default vSphere cluster of the workload domain is stretched
between two availability zones by using Chapter 6 vSAN Design for VMware Cloud
Foundation and configuring vSphere Cluster Design Requirements and Recommendations
for VMware Cloud Foundation and BGP Routing Design for VMware Cloud Foundation
accordingly.
n Multiple VMware Cloud Foundation instances. You deploy several instances of VMware Cloud
Foundation to address requirements for scale and co-location of users and resources.
n Multiple VMware Cloud Foundation instances with multiple availability zones. You apply the
configuration for stretched clusters for a single VMware Cloud Foundation instance to one or
more additional VMware Cloud Foundation instances in your environment.
1 VMware Cloud Foundation Concepts
To design a VMware Cloud Foundation deployment, you need to understand certain VMware
Cloud Foundation concepts.
Read the following topics next:
Architecture Models
Decide on a model according to your organization's requirements and your environment's
resource capabilities. Implement a standard architecture for workload provisioning and mobility
across VMware Cloud Foundation instances according to production best practices. If you plan to
deploy a small-scale environment, or if you are working on an SDDC proof-of-concept, implement
a consolidated architecture.
(Flowchart: Do you need to minimize hardware requirements? If yes, use the consolidated architecture; if no, use the standard architecture.)
Management Domain
- Characteristics:
  - First domain deployed.
  - Contains the following management appliances for all workload domains:
    - vCenter Server
    - NSX Manager
    - SDDC Manager
    - Optional. vRealize Suite components
    - Optional. Management domain NSX Edge nodes
  - Has dedicated ESXi hosts.
  - First domain to upgrade.
- Benefits: Guaranteed sufficient resources for management components.
- Drawbacks:
  - You must carefully size the domain to accommodate planned deployment of VI workload domain and additional management components.
  - Hardware might not be fully utilized until full-scale deployment has been reached.
Figure 1-2. Choosing a VMware Cloud Foundation Workload Domain Type for Customer Workloads

(Flowchart; the decision points include whether a VI workload domain with a dedicated vCenter Single Sign-On domain is required.)
You create multiple availability zones for the purpose of creating vSAN stretched clusters. Using multiple availability zones can improve the availability of management components and workloads running within the SDDC, minimize downtime of services, and improve SLAs. Availability zones are typically located within the same data center but in different racks, chassis, or rooms, or in different data centers with low-latency, high-speed links connecting them. One availability zone can contain several fault domains.

Note VMware Cloud Foundation considers and treats as stretched clusters only clusters created by using the Stretch Cluster API, because such clusters use vSAN storage.
Single Instance - Single Availability Zone: Workload domains are deployed in a single availability zone.
Single Instance - Multiple Availability Zones: Workload domains might be stretched between two availability zones.
Multiple Instances - Single Availability Zone per VMware Cloud Foundation instance: Workload domains in each instance are deployed in a single availability zone.
Multiple Instances - Multiple Availability Zones per VMware Cloud Foundation instance: Workload domains in each instance might be stretched between two availability zones.
(Flowchart: Do you need disaster recovery or more than 24 VI workload domains? Depending on the answers, use the Single Instance - Single Availability Zone, Single Instance - Multiple Availability Zones, Multiple Instance - Single Availability Zone, or Multiple Instance - Multiple Availability Zones topology.)
The Single Instance - Single Availability Zone topology relies on vSphere HA to protect against
host failures.
Figure 1-4. Single VMware Cloud Foundation Instance with a Single Availability Zone
Incorporating multiple availability zones in your design can help reduce the blast radius of a
failure and can increase application availability. You usually deploy multiple availability zones
across two independent data centers.
(Figure: a VMware Cloud Foundation instance with the management domain and multiple VI workload domains deployed across availability zones.)
Incorporating multiple VMware Cloud Foundation instances in your design can help reduce the blast radius of a failure and can increase application availability across larger geographical distances than can be achieved by using multiple availability zones. You usually deploy this topology in the same data center for scale or across independent data centers for resilience.
Figure 1-6. Multiple Instance - Single Availability Zone Topology for VMware Cloud Foundation
Incorporating multiple VMware Cloud Foundation instances into your design can help reduce
the blast radius of a failure and can increase application availability across larger geographical
distances that cannot be achieved using multiple availability zones.
Figure 1-7. Multiple Instance - Multiple Availability Zones Topology for VMware Cloud Foundation
Multiple Instances - Single Availability Zone per Instance / Multiple Instances - Multiple Availability Zones per Instance

Chapter 11 Workspace ONE Access Design for VMware Cloud Foundation: Standard Workspace ONE Access
External Services Design Elements for VMware Cloud Foundation: External Services Design Requirements
Physical Network Design Elements for VMware Cloud Foundation: Leaf-Spine Physical Network Design Requirements; Leaf-Spine Physical Network Design Requirements for NSX Federation
vSAN Design Elements for VMware Cloud Foundation: vSAN Design Requirements
ESXi Design Elements for VMware Cloud Foundation: ESXi Server Design Requirements
vCenter Single Sign-On Design Elements: vCenter Single Sign-On Design Requirements for Multiple vCenter - Single vCenter Single Sign-On Domain Topology
vSphere Cluster Design Elements for VMware Cloud Foundation: vSphere Cluster Design Requirements; vSphere Cluster Design Requirements for Stretched Clusters
vSphere Networking Design Elements for VMware Cloud Foundation: vSphere Networking Design Recommendations
NSX Global Manager Design Elements: NSX Global Manager Design Requirements for NSX Federation
BGP Routing Design Elements for VMware Cloud Foundation: BGP Routing Design Requirements; BGP Routing Design Requirements for Stretched Clusters
Overlay Design Elements for VMware Cloud Foundation: Overlay Design Requirements
Application Virtual Network Design Elements for VMware Cloud Foundation: Application Virtual Network Design Requirements; Application Virtual Network Design Requirements for NSX Federation
Load Balancing Design Elements for VMware Cloud Foundation: Load Balancing Design Requirements; Load Balancing Design Requirements for NSX Federation
SDDC Manager Design Elements for VMware Cloud Foundation: SDDC Manager Design Requirements; SDDC Manager Design Recommendations
vSAN Design Elements for VMware Cloud Foundation: vSAN Design Requirements
ESXi Design Elements for VMware Cloud Foundation: ESXi Server Design Requirements
vCenter Single Sign-On Design Elements: vCenter Single Sign-On Design Requirements for Multiple vCenter - Single SSO Domain Topology
vSphere Cluster Design Elements for VMware Cloud Foundation: vSphere Cluster Design Requirements
vSphere Networking Design Elements for VMware Cloud Foundation: vSphere Networking Design Recommendations
NSX Global Manager Design Elements: NSX Global Manager Design Requirements for NSX Federation
BGP Routing Design Elements for VMware Cloud Foundation: BGP Routing Design Requirements; BGP Routing Design Requirements for Stretched Clusters
Overlay Design Elements for VMware Cloud Foundation: Overlay Design Requirements
Table 1-14. vRealize Suite Lifecycle Manager and Workspace ONE Access Design Elements

vRealize Suite Lifecycle Manager Design Elements for VMware Cloud Foundation: vRealize Suite Lifecycle Manager Design Requirements; vRealize Suite Lifecycle Manager Design Requirements for Stretched Clusters
Workspace ONE Access Design Elements for VMware Cloud Foundation: Workspace ONE Access Design Requirements; Workspace ONE Access Design Requirements for Stretched Clusters
Life Cycle Management Design Elements for VMware Cloud Foundation: Life Cycle Management Design Requirements
Information Security Design Elements for VMware Cloud Foundation: Account and Password Management Design Recommendations; Certificate Management Design Recommendations
2 Workload Domain to Rack Mapping in VMware Cloud Foundation
VMware Cloud Foundation distributes the functionality of the software-defined data center
(SDDC) across multiple workload domains and vSphere clusters. A workload domain, whether
it is the management workload domain or a VI workload domain, is a logical abstraction of
compute, storage, and network capacity, and consists of one or more clusters. Each cluster can
exist vertically in a single rack or be spanned horizontally across multiple racks.
The relationship between workload domains and data center racks in VMware Cloud Foundation
is not one-to-one. While a workload domain is an atomic unit of repeatable building blocks, a rack
is a unit of size. Because workload domains can have different sizes, you map workload domains
to data center racks according to your requirements and physical infrastructure constraints. You
determine the total number of racks for each cluster type according to your scalability needs.
Workload domain in a single rack: The workload domain occupies a single rack, whether it consists of a single or multiple clusters.

Workload domain spanning multiple racks:
- The management domain can span multiple racks if the data center fabric can provide Layer 2 adjacency, such as BGP EVPN, between racks. If the Layer 3 fabric does not support this requirement, then the management cluster should be mapped to a single rack.
- A VI workload domain can span multiple racks. If you are using a Layer 3 network fabric, NSX Edge clusters cannot be hosted on clusters that span racks.

Workload domain with multiple availability zones, each zone in a single rack: To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between the availability zones.

Workload domain with multiple availability zones, each zone spanning multiple racks: To span multiple racks, the network fabric must support stretched Layer 2 networks between these racks. To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between the availability zones.
(Figure: a workload domain in a single rack — the management domain cluster and a VI workload domain cluster share one rack, connected to the data center fabric through two ToR switches.)
(Figure: a workload domain spanning multiple racks — the management domain cluster and a VI workload domain cluster span two racks connected to the data center fabric.)
Figure 2-3. Workload Domains with Multiple Availability Zones, Each Zone in One Rack

(The management domain stretched cluster and a VI workload domain stretched cluster each span two racks, one rack per availability zone, connected to the data center fabric.)
3 Supported Storage Types for VMware Cloud Foundation
Storage design for VMware Cloud Foundation includes the design for principal and supplemental
storage.
Principal storage is used during the creation of a workload domain and is capable of running
workloads. Supplemental storage can be added after the creation of a workload domain and can
be capable of running workloads or be used for data at rest storage such as virtual machine
templates, backup data, and ISO images. VMware Cloud Foundation supports the following
principal and supplemental storage combinations:
HCI Mesh (Remote vSAN Datastores)
- Management domain: Supplemental
- VI workload domains: Principal (additional clusters only); Supplemental
Note For a consolidated VMware Cloud Foundation architecture model, the storage types that
are supported for the management domain apply.
4 External Services Design for VMware Cloud Foundation
IP addressing scheme, name resolution, and time synchronization must support the requirements
for VMware Cloud Foundation deployments.
Table 4-1. External Services Design Requirements for VMware Cloud Foundation

VCF-EXT-REQD-NET-001
Design Requirement: Allocate statically assigned IP addresses and host names for all workload domain components.
Justification: Ensures stability across the VMware Cloud Foundation instance, and makes it simpler to maintain, track, and implement a DNS configuration.
Implication: You must provide precise IP address management.

VCF-EXT-REQD-NET-002
Design Requirement: Configure forward and reverse DNS records for all workload domain components.
Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance.
Implication: You must provide DNS records for each component.
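Requirement VCF-EXT-REQD-NET-002 can be spot-checked before bring-up. The sketch below is illustrative only: the host names and addresses are hypothetical, and the resolver lookups are injected as callables so the logic runs without live DNS (in production you might wire them to socket.gethostbyname and socket.gethostbyaddr).

```python
def check_dns(components, lookup_a, lookup_ptr):
    """Return components whose forward (A) and reverse (PTR) records disagree.

    components: {fqdn: expected_ip}. lookup_a and lookup_ptr are resolver
    callables, injected so the check can be exercised without live DNS.
    """
    mismatched = []
    for fqdn, ip in components.items():
        if lookup_a(fqdn) != ip or lookup_ptr(ip) != fqdn:
            mismatched.append(fqdn)
    return mismatched

# Stub records standing in for live DNS (hypothetical names and addresses):
a_records = {"sfo-m01-vc01.sfo.rainpole.io": "172.16.11.62",
             "sfo-m01-nsx01.sfo.rainpole.io": "172.16.11.65"}
ptr_records = {"172.16.11.62": "sfo-m01-vc01.sfo.rainpole.io"}  # NSX PTR missing

bad = check_dns(a_records, a_records.get, ptr_records.get)
print(bad)  # the NSX Manager FQDN is reported because its reverse record is absent
```

Running the same check against each workload domain component before deployment catches the most common bring-up failure: a forward record without a matching reverse record.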
5 Physical Network Infrastructure Design for VMware Cloud Foundation
Design of the physical data center network includes defining the network topology for
connecting physical switches and ESXi hosts, determining switch port settings for VLANs and
link aggregation, and designing routing.
A software-defined network (SDN) both integrates with and uses components of the physical
data center. SDN integrates with your physical network to support east-west transit in the data
center and north-south transit to and from the SDDC networks.
n Core-Aggregation-Access
n Leaf-Spine
n Hardware SDN
Note Leaf-Spine is the default data center network deployment topology used for VMware Cloud Foundation.
n Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud
Foundation
When designing the VLAN and subnet configuration for your VMware Cloud Foundation
deployment, consider the following guidelines:
Table 5-1. VLAN and Subnet Guidelines for VMware Cloud Foundation

All Deployment Topologies:
- Ensure your subnets are scaled appropriately to allow for expansion, as expanding at a later time can be disruptive.
- Use the IP address of the floating interface for Virtual Router Redundancy Protocol (VRRP) or Hot Standby Routing Protocol (HSRP) as the gateway.
- Use the RFC 1918 IPv4 address space for these subnets and allocate one octet by VMware Cloud Foundation instance and another octet by function.

Multiple Availability Zones:
- For network segments which are stretched between availability zones, the VLAN ID must meet the following requirements:
  - Be the same in both availability zones with the same Layer 3 network segments.
  - Have a Layer 3 gateway at the first hop that is highly available such that it tolerates the failure of an entire availability zone.
- For network segments of the same type which are not stretched between availability zones, the VLAN ID can be the same or different between the zones.

NSX Federation Between Multiple VMware Cloud Foundation Instances:
- An RTEP network segment should have a VLAN ID and Layer 3 range that are specific to the VMware Cloud Foundation instance.
- In a VMware Cloud Foundation instance with multiple availability zones, the RTEP network segment must be stretched between the zones and assigned the same VLAN ID and IP range.
- All Edge RTEP networks must be reachable from each other.
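The octet-per-instance, octet-per-function guideline above can be expressed as a simple derivation. The sketch below uses a hypothetical numbering convention; the specific instance and function octet values are not prescribed by this guide:

```python
import ipaddress

# Hypothetical function-to-octet assignments; adjust to your own addressing plan.
FUNCTION_OCTET = {"management": 11, "vmotion": 12, "vsan": 13, "host-overlay": 14}

def subnet_for(instance_octet, function):
    """Derive a /24 from RFC 1918 space: one octet identifies the VMware Cloud
    Foundation instance, another identifies the network function."""
    net = ipaddress.ip_network(f"10.{instance_octet}.{FUNCTION_OCTET[function]}.0/24")
    assert net.is_private  # stays within the RFC 1918 address space
    return net

print(subnet_for(1, "vsan"))  # 10.1.13.0/24
print(subnet_for(2, "vsan"))  # 10.2.13.0/24 - same function, second instance
```

Encoding the convention this way makes subnets predictable across instances, which simplifies firewall rules and route summarization later.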
When deploying VLANs and subnets for VMware Cloud Foundation, they must conform to the
following requirements according to the VMware Cloud Foundation topology:
Table 5-2. VLANs and Subnets for VMware Cloud Foundation (continued)

Edge RTEP
- VMware Cloud Foundation instances with a single availability zone: Required for NSX Federation only; highly available gateway within the instance.
- VMware Cloud Foundation instances with multiple availability zones: Required for NSX Federation only; must be stretched within the instance; highly available gateway across availability zones within the instance.
(Figure: ESXi hosts connected to a leaf-spine data center fabric.)
Table 5-3. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

VCF-NET-REQD-CFG-004
Design Requirement: Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:
- Overlay (Geneve)
- vSAN
- vSphere vMotion
Justification:
- Improves traffic throughput.
- Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.
- Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
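Because the implication above requires the same MTU along the entire path, the effective MTU is the smallest value configured on any hop. A minimal sketch of that check; the device names and values are illustrative, not from this guide:

```python
GENEVE_MIN_MTU = 1600   # minimum MTU for Geneve overlay traffic
REQUIRED_MTU = 1700     # per VCF-NET-REQD-CFG-004; 9,000 recommended for jumbo frames

def effective_path_mtu(device_mtus):
    """The usable MTU on a path is capped by the smallest MTU of any hop."""
    return min(device_mtus.values())

# Hypothetical inventory of one traffic path:
path = {"vmkernel-adapter": 9000, "distributed-switch": 9000,
        "tor-switch": 9000, "spine-switch": 1700}

mtu = effective_path_mtu(path)
assert mtu >= GENEVE_MIN_MTU and mtu >= REQUIRED_MTU
print(mtu)  # 1700 - the spine switch caps the path despite 9,000 elsewhere
```

The same principle explains why a single under-configured switch silently drops jumbo frames even when every other hop allows them.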
Table 5-4. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NET-REQD-CFG-005
Design Requirement: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types:
- NSX Edge RTEP
Justification:
- Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput.
- Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU packet size.

VCF-NET-REQD-CFG-006
Design Requirement: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms.
Justification: A latency lower than 150 ms is required for the following features:
- Cross vCenter Server vMotion
- NSX Federation
Implication: None.
Table 5-5. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

VCF-NET-RCMD-CFG-001
Design Recommendation: Use two ToR switches for each rack.
Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
Implication: Requires two ToR switches per rack, which might increase costs.
VCF-NET-RCMD-CFG-005
Design Recommendation: Set the lease duration for the DHCP scope for the host overlay network to at least 7 days.
Justification: The IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server.
- Host overlay VMkernel ports do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.
- DHCP simplifies the configuration of a default gateway for host overlay VMkernel ports if hosts within the same cluster are on separate Layer 2 domains.
Implication: Requires configuration and management of a DHCP server.
VCF-NET-RCMD-CFG-006
Design Recommendation: Designate the trunk ports connected to ESXi NICs as trunk PortFast.
Justification: Reduces the time to transition ports over to the forwarding state.
Implication: Although this design does not use STP, switches usually have STP configured by default.
VCF-NET-RCMD-CFG-007
Design Recommendation: Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks:
- Management
- Edge overlay
Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway will cause disruption in the traffic in the SDN setup.
Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.
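Recommendation VCF-NET-RCMD-CFG-005 above (a lease of at least 7 days for the host overlay scope) might look as follows on an ISC DHCP server; the subnet, range, and gateway values are placeholders, not values from this guide:

```shell
# /etc/dhcp/dhcpd.conf fragment (assumes ISC DHCP Server; values are examples)
subnet 172.16.14.0 netmask 255.255.255.0 {
  range 172.16.14.10 172.16.14.200;   # host overlay VMkernel port addresses
  option routers 172.16.14.253;       # default gateway for the overlay VLAN
  default-lease-time 604800;          # 7 days, per VCF-NET-RCMD-CFG-005
  max-lease-time 1209600;             # 14 days
}
```

Equivalent settings exist in other DHCP servers; the key point is the lease duration, which keeps host overlay addresses stable across short DHCP server outages.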
Table 5-6. Leaf-Spine Physical Network Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-NET-RCMD-CFG-008
Design Recommendation: Provide high availability for the DHCP server or DHCP Helper used for VLANs requiring DHCP services that are stretched between availability zones.
Justification: Ensures that a failover operation of a single availability zone will not impact DHCP availability.
Implication: Requires configuration of a highly available DHCP server.
Table 5-7. Leaf-Spine Physical Network Design Recommendations for NSX Federation in VMware Cloud Foundation
6 vSAN Design for VMware Cloud Foundation
VMware Cloud Foundation uses VMware vSAN as the principal storage type for the management domain and recommends it as principal storage for VI workload domains. You must determine the size of the compute and storage resources for the vSAN storage, and the configuration of the network carrying vSAN traffic. For multiple availability zones, you extend the resource size and determine the configuration of the vSAN witness host.
(Figure: logical vSAN design — virtual machines run on ESXi hosts in a vSphere cluster, and vSAN aggregates the hosts' physical disks into software-defined storage.)
Management domain (default cluster)
- Single availability zone: Four nodes minimum.
- Multiple availability zones: Must be stretched first. 8 node minimum, equally distributed across availability zones. vSAN witness appliance in a third fault domain.

Management domain (additional clusters)
- Single availability zone: Three nodes minimum. Four nodes minimum is recommended for higher availability.
- Multiple availability zones: Six nodes minimum, equally distributed across availability zones. Eight nodes minimum is recommended for higher availability. vSAN witness appliance in a third fault domain.

VI workload domain (all clusters)
- Single availability zone: Three nodes minimum. Four nodes minimum is recommended for higher availability.
- Multiple availability zones: Six nodes minimum, equally distributed across availability zones. Eight nodes minimum is recommended for higher availability. vSAN witness appliance in a third fault domain.
- A vSAN hybrid storage configuration requires both magnetic devices and flash caching devices. The cache tier must be at least 10% of the size of the capacity tier.
- An all-flash vSAN configuration requires flash devices for both the caching and capacity tiers.
- Use VMware vSAN ReadyNodes or hardware from the VMware Compatibility Guide to build your own.
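The 10% hybrid cache-to-capacity rule above translates directly into a sizing calculation. A minimal sketch; the disk counts and sizes below are examples, not recommendations:

```python
def min_cache_tier_gb(capacity_tier_gb):
    """Minimum flash cache per vSAN hybrid host: 10% of the capacity tier size."""
    return capacity_tier_gb / 10

# Example host contributing 8 x 2,400 GB magnetic disks to the capacity tier:
capacity_gb = 8 * 2400
print(min_cache_tier_gb(capacity_gb))  # 1920.0 GB of flash cache at minimum
```

Size the cache tier against the capacity the host will actually contribute, not the raw disk total, if some devices are reserved for other uses.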
- All storage devices claimed by vSAN contribute to capacity and performance. Each host's storage devices claimed by vSAN form a storage pool. The storage pool represents the amount of caching and capacity provided by the host to the vSAN datastore.
- ESXi hosts must be on the vSAN ESA Ready Node HCL with a minimum of 512 GB RAM per host.
Note vSAN Express Storage Architecture is not supported with VMware Cloud Foundation.
For best practices, capacity considerations, and general recommendations about designing and
sizing a vSAN cluster, see the VMware vSAN Design and Sizing Guide.
Consider the overall traffic bandwidth and decide how to isolate storage traffic.
n Consider how much vSAN data traffic is running between ESXi hosts.
n The amount of storage traffic depends on the number of VMs that are running in the cluster,
and on how write-intensive the I/O process is for the applications running in the VMs.
For information on the physical network setup for vSAN traffic, and other system traffic, see
Chapter 5 Physical Network Infrastructure Design for VMware Cloud Foundation.
For information on the virtual network setup for vSAN traffic, and other system traffic, see
Logical vSphere Networking Design for VMware Cloud Foundation.
Physical NIC speed: For best and most predictable performance (IOPS) for the environment, this design uses a minimum of a 10-GbE connection, with 25-GbE recommended, for use with vSAN all-flash configurations.

VMkernel network adapters for vSAN: The vSAN VMkernel network adapter on each ESXi host is created when you enable vSAN on the cluster. Connect the vSAN VMkernel network adapters on all ESXi hosts in a cluster to a dedicated distributed port group, including ESXi hosts that are not contributing storage resources to the cluster.
Tiny: 10 VMs, 750 witness components
VMware Cloud Foundation uses vSAN witness traffic separation where you can use a VMkernel
adapter for vSAN witness traffic that is different from the adapter for vSAN data traffic. In this
design, you configure vSAN witness traffic in the following way:
n On each ESXi host in both availability zones, place the vSAN witness traffic on the
management VMkernel adapter.
n On the vSAN witness appliance, use the same VMkernel adapter for both management and
witness traffic.
For information about vSAN witness traffic separation, see vSAN Stretched Cluster Guide on
VMware Cloud Platform Tech Zone.
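On each ESXi host, the witness traffic separation described above is typically tagged through esxcli. A sketch, assuming vmk0 is the management VMkernel adapter; verify the adapter name and the exact syntax against your ESXi release:

```shell
# Tag the management VMkernel adapter to also carry vSAN witness traffic
esxcli vsan network ip add -i vmk0 -T=witness

# List vSAN-enabled adapters and their traffic types to confirm the change
esxcli vsan network list
```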
Management network
Routed to the management networks in both availability zones. Connect the first VMkernel
adapter of the vSAN witness appliance to this network. The second VMkernel adapter on the
vSAN witness appliance is not used.
(Figure: vSAN witness appliance networking — the witness management network connects through a physical upstream router to the management domain management network in availability zone 1 and to the VI workload domain management networks in availability zones 1 and 2; each zone also has its own VI workload domain vSAN network.)
For related vSphere cluster requirements and recommendations, see vSphere Cluster Design
Requirements and Recommendations for VMware Cloud Foundation.
Table 6-5. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-REQD-CFG-003
Design Requirement: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-REQD-CFG-004
Design Requirement: Configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.
Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure a copy of vSAN data is always available even when an availability zone goes offline.
Implication: You must provide additional raw storage when the site mirroring - stretched cluster option is selected and fault domains are enabled.
Table 6-5. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation (continued)

VCF-VSAN-WTN-REQD-CFG-001
Design Requirement: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
Implication: You must provide a third physically separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.

VCF-VSAN-WTN-REQD-CFG-002
Design Requirement: Deploy a witness appliance of a size that corresponds to the required capacity of the cluster.
Justification: Ensures the witness appliance is sized to support the projected workload storage consumption.
Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

VCF-VSAN-WTN-REQD-CFG-003
Design Requirement: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-REQD-CFG-004
Design Requirement: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
Implication: Requires precise IP address management.

VCF-VSAN-WTN-REQD-CFG-005
Design Requirement: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
Implication: You must provide DNS records for the vSAN witness appliance.

VCF-VSAN-WTN-REQD-CFG-006
Design Requirement: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
Justification: Prevents failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance, the ESXi hosts in both availability zones, and the workload domain vCenter Server.
Implication: An operational NTP service must be available in the environment. All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.

VCF-VSAN-WTN-REQD-CFG-007
Design Requirement: Configure an individual vSAN storage policy for each stretched cluster.
Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
Implication: You must configure additional vSAN storage policies.
Table 6-6. vSAN Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-001
Design Recommendation: Ensure that the storage I/O controller that is running the vSAN disk groups is capable and has a minimum queue depth of 256 set.
Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the right queue depths for vSAN.
Implication: Limits the number of compatible I/O controllers that can be used for storage.

VCF-VSAN-RCMD-CFG-002
Design Recommendation: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
Justification: Running non-vSAN disks, for example, VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.

VCF-VSAN-RCMD-CFG-003
Design Recommendation: Configure vSAN in all-flash configuration in the default workload domain cluster.
Justification: Meets the performance needs of the default workload domain cluster.
Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.

VCF-VSAN-RCMD-CFG-004
Design Recommendation: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
Implication: None.

VCF-VSAN-RCMD-CFG-005
Design Recommendation: On the vSAN datastore, ensure that at least 30% of free space is always available.
Justification: This free space, called slack space, is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
Implication: Increases the amount of available storage needed.

VCF-VSAN-RCMD-CFG-006
Design Recommendation: Configure vSAN with a minimum of two disk groups per ESXi host.
Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
Implication: Using multiple disk groups requires more disks in each ESXi host.
Table 6-6. vSAN Design Recommendations for VMware Cloud Foundation (continued)

VCF-VSAN-RCMD-CFG-007
Design Recommendation: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
Justification: Provides enough cache for both hybrid and all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
Implication: Using larger flash disks can increase the initial host cost.

VCF-VSAN-RCMD-CFG-008
Design Recommendation: Use the default VMware vSAN storage policy.
Justification:
- Provides the level of redundancy that is needed in the workload domain cluster.
- Provides the level of performance that is enough for the individual workloads.
Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-RCMD-CFG-009
Design Recommendation: Leave the default virtual machine swap file as a sparse object on vSAN.
Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
Implication: None.

VCF-VSAN-RCMD-CFG-010
Design Recommendation: Use the existing vSphere Distributed Switch instance in the workload domain cluster.
Justification: Provides guaranteed performance for vSAN traffic in a contention-free network by using existing networking components.
Implication: All traffic paths are shared over common uplinks.

VCF-VSAN-RCMD-CFG-011
Design Recommendation: Configure jumbo frames on the VLAN for vSAN traffic.
Justification:
- Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic.
- Reduces the CPU overhead resulting from high network usage.
Implication: Every device in the network must support jumbo frames.
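The raw-capacity recommendations above (VCF-VSAN-RCMD-CFG-004 and the 30% slack space in VCF-VSAN-RCMD-CFG-005) can be sketched as a simple sizing calculation. This is an illustrative estimate only; the function name and the simple mirroring model are assumptions, not part of this guide — use the VMware Cloud Foundation Planning and Preparation Workbook for authoritative sizing.

```python
def required_raw_capacity_gb(vm_data_gb: float, mirror_copies: int = 2,
                             slack_fraction: float = 0.30) -> float:
    """Estimate the raw vSAN capacity needed for a given amount of VM data.

    mirror_copies: 2 models RAID-1 mirroring with Failures to tolerate = 1,
    where each object is stored twice; site mirroring in a stretched
    cluster doubles the copies again across the availability zones.
    slack_fraction: keep at least 30% of the datastore free for host
    maintenance mode evacuations, rebuilds, rebalancing, and snapshots.
    """
    consumed = vm_data_gb * mirror_copies
    # Consumed data may occupy at most (1 - slack_fraction) of the datastore.
    return consumed / (1.0 - slack_fraction)

# Example: 10 TB of VM data with mirroring and 30% slack space.
raw = required_raw_capacity_gb(10_000)
```

For a stretched cluster with site mirroring, passing `mirror_copies=4` models one mirrored copy pair per availability zone.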
Table 6-7. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-WTN-RCMD-CFG-001
Design Recommendation: Configure the vSAN witness appliance to use the first VMkernel adapter, that is, the management interface, for vSAN witness traffic.
Justification: Removes the requirement to have static routes on the witness appliance because witness traffic is routed over the management network.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-RCMD-CFG-002
Design Recommendation: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation provides the following benefits:
- Removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site.
- Removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.
7 vSphere Design for VMware Cloud Foundation
The vSphere design includes determining the configuration of the vCenter Server instances, ESXi hosts, vSphere clusters, and vSphere networking for a VMware Cloud Foundation environment.
To provide the resources required to run the management and workload components of the
VMware Cloud Foundation instance, each ESXi host consists of the following elements:
- Storage devices
- Network interfaces
Figure. ESXi host logical design: compute (CPU and memory), storage, and network.
For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.
The configuration and assembly process for each system should be standardized, with all
components installed in the same manner on each ESXi host. Because standardization of
the physical configuration of the ESXi hosts removes variability, the infrastructure is easily
managed and supported. ESXi hosts are deployed with identical configuration across all cluster
members, including storage and networking configurations. For example, consistent PCIe card
slot placement, especially for network interface controllers, is essential for accurate mapping
of physical network interface controllers to virtual network resources. By using identical
configurations, you have an even balance of virtual machine storage components across storage
and compute resources.
The ESXi host requirements include number of hosts, server configuration, amount of hardware resources, networking, and certificate management. Similar best practices help you design for optimal environment operation.
VCF-ESX-REQD-CFG-002
Design Requirement: Ensure each ESXi host matches the required CPU, memory, and storage specification.
Justification: Ensures workloads will run without contention even during failure and maintenance conditions.
Implication: Assemble the server specification and number according to the sizing in the VMware Cloud Foundation Planning and Preparation Workbook, which is based on projected deployment size.

VCF-ESX-REQD-NET-001
Design Requirement: Place the ESXi hosts in each management domain cluster on the VLAN-backed management network segment for vCenter Server and management components for NSX.
Justification: Reduces the number of VLANs needed because a single VLAN can be allocated to the ESXi hosts, vCenter Server, and management components for NSX.
Implication: Separation of the physical VLAN between ESXi hosts and other management components for security reasons is missing.

VCF-ESX-REQD-NET-002
Design Requirement: Place the ESXi hosts in each VI workload domain cluster on a VLAN-backed management network segment other than that used by the management domain.
Justification: Achieves physical VLAN security separation between VI workload domain ESXi hosts and other management components in the management domain.
Implication: A new VLAN and a new subnet are required for the VI workload domain management network.
VCF-ESX-RCMD-CFG-001
Design Recommendation: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.
Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks, and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List aligned to the ESXi version specified in VMware Cloud Foundation 5.0 Release Notes.

VCF-ESX-RCMD-CFG-002
Design Recommendation: Allocate hosts with uniform configuration across the default management vSphere cluster.
Justification: A balanced cluster has these advantages:
- Predictable performance even during hardware failures
- Minimal impact of resynchronization or rebuild operations on performance
Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.

VCF-ESX-RCMD-CFG-003
Design Recommendation: When sizing CPU, do not consider multithreading technology and associated performance gains.
Justification: Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.
Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

VCF-ESX-RCMD-CFG-004
Design Recommendation: Install and configure all ESXi hosts in the default management cluster to boot using a 128 GB device or greater.
Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
Implication: None.
VCF-ESX-RCMD-CFG-006
Design Recommendation: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
Justification: Simplifies the configuration process.
Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

VCF-ESX-RCMD-NET-001
Design Recommendation: Place ESXi hosts for each VI workload domain on separate VLAN-backed management network segments.
Justification: Achieves physical VLAN security separation between ESXi hosts in different VI workload domains.
Implication: A new workload domain management VLAN and subnet are required for each new VI workload domain.

VCF-ESX-RCMD-SEC-001
Design Recommendation: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy Start and stop manually.
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
Implication: You must activate SSH access manually for troubleshooting or support activities, as VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.

VCF-ESX-RCMD-SEC-002
Design Recommendation: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain.
Justification:
- Ensures compliance with the vSphere Security Configuration Guide and with security best practices.
- Turns off the warning message that appears in the vSphere Client every time SSH access is activated on an ESXi host.
Implication: You must suppress SSH enablement warning messages manually when performing troubleshooting or support activities.
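Recommendation VCF-ESX-RCMD-CFG-002 (uniform host configuration) lends itself to a programmatic check. The sketch below assumes you have already collected a hardware summary per host, for example from an inventory tool; the field names are illustrative assumptions, not part of this guide.

```python
def find_nonuniform_hosts(hosts: dict[str, dict]) -> list[str]:
    """Return the names of hosts whose hardware summary differs from the
    first host in the cluster, which serves as the reference configuration.

    hosts maps a host name to a summary such as
    {"cpu_model": ..., "cores": ..., "memory_gb": ..., "nics": ...}.
    """
    if not hosts:
        return []
    names = list(hosts)
    reference = hosts[names[0]]
    return [name for name in names[1:] if hosts[name] != reference]

cluster = {
    "esx01": {"cpu_model": "X", "cores": 32, "memory_gb": 768, "nics": 2},
    "esx02": {"cpu_model": "X", "cores": 32, "memory_gb": 768, "nics": 2},
    "esx03": {"cpu_model": "X", "cores": 32, "memory_gb": 512, "nics": 2},
}
# esx03 deviates in memory size and would unbalance the cluster.
```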
- High Availability Design for vCenter Server for VMware Cloud Foundation
  Protecting vCenter Server is important because it is the central point of management and monitoring for each workload domain.
- vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation
  Each workload domain in VMware Cloud Foundation is managed by a single vCenter Server instance. You determine the size of this vCenter Server instance and its storage requirements according to the number of ESXi hosts per cluster and the number of virtual machines you plan to run on these clusters.
Figure. Logical vCenter Server design for a VMware Cloud Foundation instance: users access the management domain vCenter Server and the VI workload domain vCenter Server instances through the user interface and the API. All vCenter Server instances rely on supporting infrastructure (shared storage, DNS, NTP) and run on the ESXi virtual infrastructure.
Deployments with a single availability zone and deployments with multiple availability zones both include:
- One vCenter Server instance for the management domain that manages the management components of the SDDC, such as the vCenter Server instances for the VI workload domains, NSX Manager cluster nodes, SDDC Manager, and other solutions.
- Optionally, additional vCenter Server instances for the VI workload domains to support customer workloads.
- vSphere HA protecting all vCenter Server appliances.
Deployments with multiple availability zones additionally include:
- A should-run-on-host-in-group VM-Host affinity rule in vSphere DRS specifying that the vCenter Server appliances should run in the primary availability zone unless an outage in this zone occurs.
When you deploy a workload domain, you select a vCenter Server appliance size that is suitable
for the scale of your environment. The option that you select determines the number of CPUs
and the amount of memory of the appliance. For detailed sizing according to a collective profile
of the VMware Cloud Foundation instance you plan to deploy, refer to the VMware Cloud Foundation Planning and Preparation Workbook.
VMware Cloud Foundation supports only vSphere HA as a high availability method for vCenter
Server.
Table 7-7. vCenter Server Design Requirements for VMware Cloud Foundation

VCF-VCS-REQD-NET-001
Design Requirement: Place all workload domain vCenter Server appliances on the management network segment.
Justification: Reduces the number of required VLANs because a single VLAN can be allocated to the ESXi hosts, vCenter Server, and management components for NSX.
Implication: None.
Table 7-8. vCenter Server Design Recommendations for VMware Cloud Foundation

VCF-VCS-RCMD-CFG-002
Design Recommendation: Deploy a vCenter Server appliance with the appropriate storage size.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small, and for VI workload domains is Medium. To override these values, you must use the API.

VCF-VCS-RCMD-CFG-003
Design Recommendation: Protect workload domain vCenter Server appliances by using vSphere HA.
Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
Implication: vCenter Server becomes unavailable during a vSphere HA failover.

VCF-VCS-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure that the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.
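The appliance-size choice behind VCF-VCS-RCMD-CFG-002 follows the standard vCenter Server appliance sizing tiers. The thresholds below reflect the commonly documented vCenter Server appliance inventory limits and are an assumption here — confirm them against the VMware Cloud Foundation Planning and Preparation Workbook for your release.

```python
# (tier, max_hosts, max_vms) in ascending order of size; assumed values
# from the commonly documented vCenter Server appliance sizing tiers.
VCENTER_SIZES = [
    ("tiny", 10, 100),
    ("small", 100, 1000),
    ("medium", 400, 4000),
    ("large", 1000, 10000),
    ("x-large", 2500, 45000),
]

def pick_vcenter_size(hosts: int, vms: int) -> str:
    """Pick the smallest appliance size whose limits cover the inventory."""
    for name, max_hosts, max_vms in VCENTER_SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("inventory exceeds the largest appliance size")
```

Note that either limit can drive the size up: a domain with few hosts but many virtual machines still requires a larger appliance.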
Table 7-9. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation
You select the vCenter Single Sign-On topology according to the needs and design objectives of
your deployment.
Table 7-10. vCenter Single Sign-On Topologies for VMware Cloud Foundation

Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with the management domain and all VI workload domain vCenter Server instances in enhanced linked mode (ELM) using a ring topology.
Benefits: Enables sharing of vCenter Server roles, tags, and licenses between all workload domain instances.
Drawbacks: Limited to 15 workload domains per VMware Cloud Foundation instance, including the management domain.

Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domains
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with at least the management domain vCenter Server instance, and additional VI workload domains, each with their own isolated vCenter Single Sign-On domains.
Benefits: Enables isolation at the vCenter Single Sign-On domain layer for increased security separation. Supports up to 25 workload domains per VMware Cloud Foundation instance.
Drawbacks: Additional password management overhead per vCenter Single Sign-On domain.
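A quick way to apply the limits in Table 7-10: the single Sign-On domain topology caps at 15 workload domains (including the management domain), while the multiple-domain topology supports up to 25. The helper below is an illustrative decision aid only; its name and the topology labels are assumptions, not a VMware API.

```python
def sso_topology_options(workload_domains: int) -> list[str]:
    """Return the vCenter Single Sign-On topologies that can accommodate
    the planned number of workload domains, management domain included."""
    options = []
    if workload_domains <= 15:
        # ELM ring: shared roles, tags, and licenses across instances.
        options.append("single-sso-domain")
    if workload_domains <= 25:
        # Isolated domains: stronger separation, more password overhead.
        options.append("multiple-sso-domains")
    return options
```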
Figure 7-3. Single vCenter Server Instance - Single vCenter Single Sign-On Domain
The VMware Cloud Foundation instance contains only the management domain vCenter Server.
Because the Single vCenter Server Instance - Single vCenter Single Sign-On Domain topology
contains a single vCenter Server instance by definition, no relevant design requirements or
recommendations for vCenter Single Sign-On are needed.
Figure 7-4. Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
The vCenter Server instances are replication partners in a ring topology that is formed during domain creation.
Table 7-11. Design Requirements for the Multiple vCenter Server Instance - Single vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-STD-001
Design Requirement: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
Implication:
- Only one vCenter Single Sign-On domain exists.
- The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15 instances. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.
Figure 7-5. Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domain
The vCenter Server instances are replication partners in a ring topology that is formed during domain creation.
Table 7-12. Design Requirements for Multiple vCenter Server Instance - Multiple vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-ISO-001
Design Requirement: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
Justification:
- Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
- Supports up to 25 workload domains.
Implication:
- Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials.
- You must manage password rotation for each vCenter Single Sign-On domain separately.
- vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation
  vSphere Lifecycle Manager is used to manage the vSphere clusters in each VI workload domain.
- vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation
  The design of a vSphere cluster is subject to a minimum number of hosts, design requirements, and design recommendations.
When you design the cluster layout in vSphere, consider the following guidelines:
- Compare the capital costs of purchasing fewer, larger ESXi hosts with the costs of purchasing more, smaller ESXi hosts. Costs vary between vendors and models. Evaluate the risk of losing one larger host in a scaled-up cluster against the impact on the business of the higher chance of losing one or more smaller hosts in a scaled-out cluster.
- Compare the operational costs of managing fewer ESXi hosts with the costs of managing more ESXi hosts.
Figure 7-6. Logical vSphere Cluster Layout with a Single Availability Zone for VMware Cloud Foundation
The VCF instance contains a vSphere cluster that is managed by the workload domain vCenter Server.
Figure 7-7. Logical vSphere Cluster Layout for Multiple Availability Zones for VMware Cloud Foundation
The VCF instance contains a vSphere cluster that spans Availability Zone 1 and Availability Zone 2 and is managed by the workload domain vCenter Server.
Cluster types per VI workload domain: A VI workload domain can include either local clusters or a remote cluster.
Latency between the central site and the remote site: 100 ms maximum.
Bandwidth between the central site and the remote site: 10 Mbps minimum.
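The remote-cluster link limits above can be expressed as a simple pre-deployment check. The function name is illustrative, not part of this guide:

```python
def remote_site_link_ok(latency_ms: float, bandwidth_mbps: float) -> bool:
    """Check a central-to-remote site link against the VI workload domain
    remote cluster limits: at most 100 ms latency and at least 10 Mbps."""
    return latency_ms <= 100 and bandwidth_mbps >= 10
```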
When you deploy a workload domain, you choose a vSphere cluster life cycle method according
to your requirements.
vSphere Lifecycle Manager images
Description: vSphere Lifecycle Manager Desired State images contain base images, vendor add-ons, firmware, and drivers.
Benefits:
- Supports VI workload domains with vSphere with Tanzu.
- Supports NVIDIA GPU-enabled clusters.
- Supports 2-node NFS, FC, or vVols clusters.
Drawbacks:
- Not supported for the management domain.
- Supports only environments with a single availability zone.

vSphere Lifecycle Manager baselines
Description: An upgrade baseline contains the ESXi image, and a patch baseline contains the respective patches for ESXi hosts. This is the required method for the management domain.
Benefits:
- Supports vSAN stretched clusters.
- Supports VI workload domains with vSphere with Tanzu.
Drawbacks:
- Not supported for NVIDIA GPU-enabled clusters.
- Not supported for 2-node NFS, FC, or vVols clusters.
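The compatibility rules above can be captured as a selection helper. This sketch encodes only the constraints stated here; the function name and flag names are assumptions:

```python
def lifecycle_methods(management_domain: bool, stretched: bool,
                      gpu_enabled: bool, two_node_storage: bool) -> set[str]:
    """Return the vSphere Lifecycle Manager methods compatible with a domain.

    two_node_storage: the domain uses a 2-node NFS, FC, or vVols cluster.
    """
    methods = set()
    # Images: not supported for the management domain, and only for
    # environments with a single availability zone (so not stretched).
    if not management_domain and not stretched:
        methods.add("images")
    # Baselines: not supported for NVIDIA GPU-enabled clusters or for
    # 2-node NFS, FC, or vVols clusters.
    if not gpu_enabled and not two_node_storage:
        methods.add("baselines")
    return methods
```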
For vSAN design requirements and recommendations, see vSAN Design Requirements and
Recommendations for VMware Cloud Foundation.
The requirements for the ESXi hosts in a workload domain in VMware Cloud Foundation are related to the system requirements of the workloads hosted in the domain. The ESXi requirements include number of hosts, server configuration, amount of hardware resources, networking, and certificate management. Similar best practices help you design for optimal environment operation.
Minimum number of ESXi hosts for vSAN with two availability zones: 8 for the management domain, 6 for a VI workload domain.
Reserved capacity for handling ESXi host failures per cluster (single availability zone): 25% CPU and memory for the management domain and 33% CPU and memory for a VI workload domain; each tolerates one host failure.
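The reserved-capacity figures above follow from tolerating one host failure per cluster: reserve 1/N of CPU and memory, which is 25% for a 4-host cluster and about 33% for a 3-host cluster. A minimal derivation (illustrative helper, not part of this guide):

```python
def ha_reserved_percent(hosts: int, failures: int = 1) -> int:
    """Percentage of cluster CPU and memory to reserve in vSphere HA
    admission control so the surviving hosts can absorb the tolerated
    number of host failures."""
    return round(100 * failures / hosts)

# 4-host management domain cluster -> 25%
# 3-host VI workload domain cluster -> 33%
```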
Table 7-16. vSphere Cluster Design Requirements for VMware Cloud Foundation
VCF-CLS-REQD-LCM-001
Design Requirement: Use baselines as the life cycle management method for the management domain.
Justification: vSphere Lifecycle Manager images are not supported for the management domain.
Implication: You must manage the life cycle of firmware and vendor add-ons manually.
Table 7-17. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-CLS-REQD-CFG-007
Design Requirement: Enable the Override default gateway for this adapter setting on the vSAN VMkernel adapters on all ESXi hosts.
Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather than through the management gateway.
Implication: vSAN networks across availability zones must have a route to each other.
Table 7-17. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware Cloud Foundation (continued)

VCF-CLS-REQD-CFG-008
Design Requirement: Create a host group for each availability zone and add the ESXi hosts in the zone to the respective group.
Justification: Makes it easier to manage which virtual machines run in which availability zone.
Implication: You must create and maintain VM-Host DRS group rules.

VCF-CLS-REQD-LCM-002
Design Requirement: Create workload domains selecting baselines as the life cycle management method.
Justification: vSphere Lifecycle Manager images are not supported for vSAN stretched clusters.
Implication: You must manage the life cycle of firmware and vendor add-ons manually.
Table 7-18. vSphere Cluster Design Recommendations for VMware Cloud Foundation

VCF-CLS-RCMD-CFG-001
Design Recommendation: Use vSphere HA to protect all virtual machines against failures.
Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be restarted on those hosts in the event of a host outage.

VCF-CLS-RCMD-CFG-002
Design Recommendation: Set host isolation response to Power Off and restart VMs in vSphere HA.
Justification: vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.
Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.
Table 7-18. vSphere Cluster Design Recommendations for VMware Cloud Foundation (continued)

VCF-CLS-RCMD-CFG-006
Design Recommendation: Enable vSphere DRS on all clusters, using the default fully automated mode with medium threshold.
Justification: Provides the best trade-off between load balancing and unnecessary migrations with vSphere vMotion.
Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts might be difficult to determine.

VCF-CLS-RCMD-CFG-007
Design Recommendation: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: You can enable EVC only if the clusters contain hosts with CPUs from the same vendor. You must enable EVC on the default management domain cluster during bringup.

VCF-CLS-RCMD-CFG-008
Design Recommendation: Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: None.

VCF-CLS-RCMD-LCM-001
Design Recommendation: Use images as the life cycle management method for VI workload domains that do not require vSAN stretched clusters.
Justification: vSphere Lifecycle Manager images simplify the management of firmware and vendor add-ons.
Implication: You cannot add any stretched cluster to a VI workload domain that uses vSphere Lifecycle Manager images.
Table 7-19. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-CLS-RCMD-CFG-009
Design Recommendation: Increase the admission control percentage to half of the ESXi hosts in the cluster.
Justification: Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs.
Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use. If you add more ESXi hosts to the default management cluster, add them in pairs, one per availability zone.

VCF-CLS-RCMD-CFG-010
Design Recommendation: Create a virtual machine group for each availability zone and add the VMs in the zone to the respective group.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must add virtual machines to the allocated group manually.

VCF-CLS-RCMD-CFG-011
Design Recommendation: Create a should-run-on-hosts-in-group VM-Host affinity rule to run each group of virtual machines on the respective group of hosts in the same availability zone.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must manually create the rules.
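Recommendation VCF-CLS-RCMD-CFG-009 reserves a full availability zone's worth of capacity, which for a symmetric stretched cluster is always 50% regardless of cluster size. A minimal illustration (the function name is an assumption):

```python
def stretched_admission_control_percent(hosts_per_zone: int) -> int:
    """CPU and memory percentage to reserve in vSphere HA admission control
    for a stretched cluster with two symmetric availability zones, so that
    one zone can absorb the full load of the other during a zone outage."""
    total_hosts = 2 * hosts_per_zone
    return round(100 * hosts_per_zone / total_hosts)

# An 8-host stretched cluster (4 + 4) reserves 50%, leaving the
# resources of only 4 hosts available for use.
```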
VMware Cloud Foundation supports NSX traffic over a single vSphere Distributed Switch per
cluster. Additional distributed switches are supported for other traffic types.
When using vSAN ReadyNodes, you must define the number of vSphere Distributed Switches at
workload domain deployment time. You cannot add additional vSphere Distributed Switches post
deployment.
Table 7-20. Configuration Options for vSphere Distributed Switch for VMware Cloud Foundation

Single vSphere Distributed Switch
Management Domain: One vSphere Distributed Switch for all traffic.
VI Workload Domain: One vSphere Distributed Switch for all traffic.
Benefits: Requires the least number of physical NICs and switch ports.
Drawbacks: All traffic shares the same uplinks.

Note To use physical NICs other than vmnic0 and vmnic1 as the uplinks for the first vSphere Distributed Switch, you must use a JSON file for management domain deployment in Cloud Builder or use the SDDC Manager API for VI workload domain deployment.
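Following on from the note above, the JSON specification can map the first vSphere Distributed Switch to non-default uplinks. The fragment below is a simplified illustration of the idea only — every key name here (`dvsSpecs`, `dvsName`, `vmnics`, and so on) is an assumption, since the exact schema varies by VMware Cloud Foundation release; consult the deployment parameter reference for your version.

```python
import json

# Illustrative (not schema-exact) distributed switch specification that
# maps the first vSphere Distributed Switch to vmnic2 and vmnic3
# instead of the default vmnic0 and vmnic1.
dvs_spec = {
    "dvsSpecs": [
        {
            "dvsName": "sfo-m01-vds01",
            "vmnics": ["vmnic2", "vmnic3"],
            "mtu": 9000,
            "networks": ["MANAGEMENT", "VMOTION", "VSAN"],
        }
    ]
}

payload = json.dumps(dvs_spec, indent=2)
```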
In a deployment that has multiple availability zones, you must provide new networks or extend
the existing ones.
Table 7-21. Distributed Port Group Configuration for VMware Cloud Foundation
(Columns: Function, Teaming Policy, Number of Physical NIC Ports, MTU Size (Bytes))

Table 7-22. Default VMkernel Adapters for a Workload Domain per Availability Zone
(Columns: VMkernel Adapter Service, Connected Port Group, Activated Services, Recommended MTU Size (Bytes))
Table 7-23. vSphere Networking Design Recommendations for VMware Cloud Foundation

VCF-VDS-RCMD-CFG-001
  Recommendation: Use a single vSphere Distributed Switch per cluster.
  Justification: Reduces the complexity of the network design. Reduces the size of the fault domain.
  Implication: Increases the number of vSphere Distributed Switches that must be managed.

VCF-VDS-RCMD-CFG-002
  Recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.
  Justification: Supports the MTU size required by system traffic types. Improves traffic throughput.
  Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

VCF-VDS-RCMD-DPG-001
  Recommendation: Use ephemeral port binding for the management port group.
  Justification: Ephemeral port binding provides the option to recover the vCenter Server instance that is managing the distributed switch.
  Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.

VCF-VDS-RCMD-DPG-002
  Recommendation: Use static port binding for all non-management port groups.
  Justification: Static binding ensures that a virtual machine connects to the same port on the vSphere Distributed Switch, which allows for historical data and port-level monitoring.
  Implication: None.

VCF-VDS-RCMD-NIO-001
  Recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.
  Justification: Increases resiliency and performance of the network.
  Implication: If configured incorrectly, Network I/O Control might impact network performance for critical traffic types.
VCF-VDS-RCMD-NIO-002
  Recommendation: Set the share value for management traffic to Normal.
  Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
  Implication: None.

VCF-VDS-RCMD-NIO-003
  Recommendation: Set the share value for vSphere vMotion traffic to Low.
  Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
  Implication: During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004
  Recommendation: Set the share value for virtual machines to High.
  Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
  Implication: None.

VCF-VDS-RCMD-NIO-006
  Recommendation: Set the share value for other traffic types to Low.
  Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority.
  Implication: None.
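After raising the distributed switch MTU to 9000, every hop in the path must carry the same MTU. A common way to verify this from an ESXi host is vmkping with fragmentation disallowed; the ICMP payload must leave room for the 20-byte IP header and 8-byte ICMP header. The sketch below computes that payload; the VMkernel interface and target address are placeholders.

```python
# Largest single-frame ICMP payload for a given MTU: subtract the 20-byte
# IP header and the 8-byte ICMP header from the frame MTU.
IP_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu: int) -> int:
    """Largest ICMP payload that fits in one unfragmented frame."""
    return mtu - IP_HEADER - ICMP_HEADER

payload = max_ping_payload(9000)
# -d disallows fragmentation; vmk1 and the address are placeholders.
print(f"vmkping -I vmk1 -d -s {payload} 192.0.2.10")
```

For an MTU of 9000 this yields a payload of 8972 bytes; a successful ping at that size indicates the whole path supports jumbo frames.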
8 NSX Design for VMware Cloud Foundation
In VMware Cloud Foundation, you use NSX for connecting management and customer virtual
machines by using virtual network segments and routing. You also create constructs for solutions
that are deployed for a single VMware Cloud Foundation instance or are available across multiple
VMware Cloud Foundation instances. These constructs provide routing to the data center and
load balancing.
NSX Manager
- Provides the user interface and the REST API for creating, configuring, and monitoring NSX components, such as segments, and Tier-0 and Tier-1 gateways.
- In a deployment with NSX Federation, NSX Manager is called NSX Local Manager.

NSX Edge nodes
- A special type of transport node that contains service router components.
- Provide north-south traffic connectivity between the physical data center networks and the NSX SDN networks. Each NSX Edge node has multiple interfaces where traffic flows.
- Can provide east-west traffic flow between virtualized workloads. They provide stateful services such as load balancers and DHCP. In a deployment with multiple VMware Cloud Foundation instances, east-west traffic between the VMware Cloud Foundation instances also flows through the NSX Edge nodes.

NSX Federation (optional design extension)
- Propagates configurations that span multiple NSX instances in a single VMware Cloud Foundation instance or across multiple VMware Cloud Foundation instances. You can stretch overlay segments, activate failover of segment ingress and egress traffic between VMware Cloud Foundation instances, and implement a unified firewall configuration.
- In a deployment with multiple VMware Cloud Foundation instances, you use NSX Federation to provide cross-instance services to SDDC management components that do not have native support for availability at several locations, such as vRealize Automation and vRealize Operations.
- Connect only workload domains of matching types (management domain to management domain, or VI workload domain to VI workload domain).
NSX Global Manager (Federation only)
- Part of deployments with multiple VMware Cloud Foundation instances where NSX Federation is required. NSX Global Manager can connect multiple NSX Local Manager instances under a single global management plane.
- Provides the user interface and the REST API for creating, configuring, and monitoring NSX global objects, such as global virtual network segments, and global Tier-0 and Tier-1 gateways.
- Connected NSX Local Manager instances create the global objects that you define from NSX Global Manager on the underlying software-defined network. An NSX Local Manager instance communicates directly with other NSX Local Manager instances to synchronize the configuration and state needed to implement a global policy.
- NSX Global Manager is a deployment-time role that you assign to an NSX Manager appliance.

NSX Manager instance shared between VI workload domains
- An NSX Manager instance can be shared between up to 14 VI workload domains that are part of the same vCenter Single Sign-On domain.
- VI workload domains sharing an NSX Manager instance must use the same vSphere cluster life cycle method.
- Using a shared NSX Manager instance reduces resource requirements for the management domain.
- A single transport zone is shared across all clusters in all VI workload domains that share the NSX Manager instance.
- The management domain NSX instance cannot be shared.
- Isolated workload domain NSX instances cannot be shared.
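The sharing constraints above can be summarized as a small validity check. This is only a sketch of the rules listed in this section, not SDDC Manager logic; the domain names and life cycle method labels are example values.

```python
# Constraints for sharing one NSX Manager instance across VI workload
# domains: same vCenter Single Sign-On domain, same vSphere cluster life
# cycle method, no isolated domains, and at most 14 domains per instance.
from dataclasses import dataclass

MAX_SHARED_DOMAINS = 14

@dataclass
class ViWorkloadDomain:
    name: str
    sso_domain: str
    lifecycle_method: str   # e.g. "baseline" or "vLCM images"
    isolated: bool = False

def can_share_nsx_manager(domains: list) -> bool:
    if not domains or len(domains) > MAX_SHARED_DOMAINS:
        return False
    first = domains[0]
    return all(
        d.sso_domain == first.sso_domain
        and d.lifecycle_method == first.lifecycle_method
        and not d.isolated
        for d in domains
    )

wld01 = ViWorkloadDomain("sfo-w01", "vsphere.local", "vLCM images")
wld02 = ViWorkloadDomain("sfo-w02", "vsphere.local", "vLCM images")
print(can_share_nsx_manager([wld01, wld02]))  # True
```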
NSX Manager Cluster
  Single availability zone: Three appropriately sized nodes with a virtual IP (VIP) address and an anti-affinity rule to separate the nodes on different hosts. vSphere HA protects the cluster nodes, applying high restart priority.
  Multiple availability zones: Three appropriately sized nodes with a VIP address and an anti-affinity rule to separate the nodes on different hosts. vSphere HA protects the cluster nodes, applying high restart priority. A vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Manager VMs in the first availability zone.

NSX Global Manager Cluster (Conditional)
  Single availability zone: Three manually deployed, appropriately sized nodes with a VIP address and an anti-affinity rule to separate them on different hosts. One active and one standby cluster. vSphere HA protects the cluster nodes, applying high restart priority.
  Multiple availability zones: Three manually deployed, appropriately sized nodes with a VIP address and an anti-affinity rule to separate them on different hosts. One active and one standby cluster. vSphere HA protects the cluster nodes, applying high restart priority. A vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Global Manager VMs in the first availability zone.

NSX Edge Cluster
  Single availability zone: Two appropriately sized NSX Edge nodes with an anti-affinity rule to separate them on different hosts. vSphere HA protects the cluster nodes, applying high restart priority.
  Multiple availability zones: Two appropriately sized NSX Edge nodes in the first availability zone with an anti-affinity rule to separate them on different hosts. vSphere HA protects the cluster nodes, applying high restart priority. A vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Edge VMs in the first availability zone.

Transport Nodes
  Single availability zone: Each ESXi host acts as a host transport node. Two edge transport nodes.
  Multiple availability zones: Each ESXi host acts as a host transport node. Two edge transport nodes in the first availability zone.

Transport zones
  Single availability zone: One VLAN transport zone for north-south traffic. Maximum one overlay transport zone for overlay segments per NSX instance.
  Multiple availability zones: One VLAN transport zone for north-south traffic. Maximum one overlay transport zone for overlay segments per NSX instance.

VLANs and IP subnets allocated to NSX
  Single availability zone: See VLANs and Subnets for VMware Cloud Foundation. For information about the networks for virtual infrastructure management, see Distributed Port Group Design.
  Multiple availability zones: See Chapter 5 Physical Network Infrastructure Design for VMware Cloud Foundation.

Routing configuration
  Single availability zone: BGP for a single VMware Cloud Foundation instance. In a VMware Cloud Foundation deployment with NSX Federation, BGP with ingress and egress traffic to the first VMware Cloud Foundation instance during normal operating conditions.
  Multiple availability zones: BGP with path prepend to control ingress traffic and local preference to control egress traffic through the first availability zone during normal operating conditions. In a VMware Cloud Foundation deployment with NSX Federation, BGP with ingress and egress traffic to the first instance during normal operating conditions.
For a description of the NSX logical components in this design, see Table 8-1. NSX Logical
Concepts and Components.
Figure 8-1. NSX Logical Design for a Single Instance - Single Availability Zone Topology
(Diagram: user interface and API access to a three-node NSX Manager cluster, a two-node NSX Edge cluster, the workload domain vCenter Server, NSX transport nodes, and supporting DNS and NTP infrastructure.)

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.
- NSX Edge nodes in the workload domain that provide advanced services such as load balancing, and north-south connectivity.
- ESXi hosts in the workload domain that are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.
Figure 8-2. NSX Logical Design for a Single Instance - Multiple Availability Zone Topology
(Diagram: user interface and API access to a three-node NSX Manager cluster, a two-node NSX Edge cluster, the workload domain vCenter Server, and supporting DNS and NTP infrastructure.)

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.
- NSX Edge nodes that provide advanced services such as load balancing, and north-south connectivity.
- ESXi hosts that are distributed evenly across availability zones in the workload domain and are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.
Figure 8-3. NSX Logical Design for a Multiple Instance - Single Availability Zone Topology
(Diagram: VCF Instance A and VCF Instance B, each with its own NSX instance, NSX Edge node cluster, and transport nodes.)

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.
- NSX Edge nodes that provide advanced services such as load balancing, and north-south connectivity.
- ESXi hosts in the workload domain that are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.
- NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances. You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so that you can use NSX Federation for global management of networking and security services.
Figure 8-4. NSX Logical Design for Multiple Instance - Multiple Availability Zone Topology
(Diagram: VCF Instance A and VCF Instance B, each with ESXi transport nodes distributed across availability zones.)

- Unified appliances that have both the NSX Local Manager and NSX Controller roles. They provide management and control plane capabilities.
- NSX Edge nodes that provide advanced services such as load balancing, and north-south connectivity.
- ESXi hosts that are distributed evenly across availability zones in the workload domain in a VMware Cloud Foundation instance, and are registered as NSX transport nodes to provide distributed routing and firewall services to workloads.
- NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances. You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so that you can use NSX Federation for global management of networking and security services.
When you deploy NSX Manager appliances, either with a local or global scope, you select an
appliance size that is suitable for the scale of your environment. The size that you select
determines the number of CPUs and the amount of memory of the appliance. For detailed sizing
according to the overall profile of the VMware Cloud Foundation instance you plan to deploy, see
VMware Cloud Foundation Planning and Preparation Workbook.
Note To deploy an NSX Manager appliance with a size different from the default one, you must
use the API.
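The fragment below only sketches the shape of such an API request body. The endpoint, field names (for example "nsxManagerSize"), and node names are illustrative placeholders, not the documented SDDC Manager schema; consult the API reference for your release.

```python
import json

# Hypothetical request-body fragment for deploying an NSX Manager cluster
# with a non-default size through the SDDC Manager API. All names below
# are placeholders for illustration only.
workload_domain_spec = {
    "domainName": "sfo-w01",            # hypothetical domain name
    "nsxTSpec": {
        "nsxManagerSize": "large",      # override of the default size
        "nsxManagerSpecs": [
            {"name": f"sfo-w01-nsx0{i}"} for i in (1, 2, 3)
        ],
    },
}

print(json.dumps(workload_domain_spec, indent=2))
```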
Table 8-4. NSX Manager Design Requirements for VMware Cloud Foundation

VCF-NSX-LM-REQD-CFG-002
  Requirement: Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the workload domain.
  Justification: Supports high availability of the NSX Manager cluster.
  Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Manager nodes.
Table 8-5. NSX Manager Design Recommendations for VMware Cloud Foundation

VCF-NSX-LM-RCMD-CFG-001
  Recommendation: Deploy appropriately sized nodes in the NSX Manager cluster for the workload domain.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: The default size for a management domain is Medium, and for VI workload domains is Large.

VCF-NSX-LM-RCMD-CFG-002
  Recommendation: Create a virtual IP (VIP) address for the NSX Manager cluster for the workload domain.
  Justification: Provides high availability of the user interface and API of NSX Manager.
  Implication: The VIP address feature provides high availability only; it does not load-balance requests across the cluster. When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.
VCF-NSX-LM-RCMD-CFG-003
  Recommendation: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.
  Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high availability.
  Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.

VCF-NSX-LM-RCMD-CFG-004
  Recommendation: In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.
  Justification: NSX Manager implements the control plane for virtual network segments. vSphere HA restarts the NSX Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established. Setting the restart priority to high reserves the highest priority for flexibility for adding services that must be started before NSX Manager.
  Implication: If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer.
Table 8-6. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-LM-RCMD-CFG-006
  Recommendation: Add the NSX Manager appliances to the virtual machine group for the first availability zone.
  Justification: Ensures that, by default, the NSX Manager appliances are powered on a host in the primary availability zone.
  Implication: Done automatically by VMware Cloud Foundation when configuring a vSAN stretched cluster.
Table 8-7. NSX Global Manager Design Requirements for VMware Cloud Foundation
Table 8-8. NSX Global Manager Design Recommendations for VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-001
  Recommendation: Deploy three NSX Global Manager nodes for the workload domain to support NSX Federation across VMware Cloud Foundation instances.
  Justification: Provides high availability for the NSX Global Manager cluster.
  Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Global Manager nodes.

VCF-NSX-GM-RCMD-CFG-003
  Recommendation: Create a virtual IP (VIP) address for the NSX Global Manager cluster for the workload domain.
  Justification: Provides high availability of the user interface and API of NSX Global Manager.
  Implication: The VIP address feature provides high availability only; it does not load-balance requests across the cluster. When using the VIP address feature, all NSX Global Manager nodes must be deployed on the same Layer 2 network.

VCF-NSX-GM-RCMD-CFG-004
  Recommendation: Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances.
  Justification: Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability.
  Implication: You must allocate at least four physical hosts so that the three NSX Global Manager appliances continue running if an ESXi host failure occurs.
VCF-NSX-GM-RCMD-CFG-005
  Recommendation: In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium.
  Justification: NSX Global Manager implements the management plane for global segments and firewalls, and is not required for control plane and data plane connectivity. Setting the restart priority to medium reserves the high priority for services that impact the NSX control or data planes.
  Implication: Management of NSX global components will be unavailable until the NSX Global Manager virtual machines restart. The NSX Global Manager cluster is deployed in the management domain, where the total number of virtual machines is limited and where it competes with other management components for restart priority.

VCF-NSX-GM-RCMD-CFG-007
  Recommendation: Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the workload domain.
  Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first instance occurs.
  Implication: Must be done manually.
Table 8-9. NSX Global Manager Design Recommendations for Stretched Clusters in VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-008
  Recommendation: Add the NSX Global Manager appliances to the virtual machine group for the first availability zone.
  Justification: Ensures that, by default, the NSX Global Manager appliances are powered on a host in the primary availability zone.
  Implication: Done automatically by VMware Cloud Foundation when stretching a cluster.
Deployment Model for the NSX Edge Nodes for VMware Cloud
Foundation
For NSX Edge nodes, you determine the form factor, number of nodes and placement according
to the requirements for network services in a VMware Cloud Foundation workload domain.
An NSX Edge node is an appliance that provides centralized networking services which cannot
be distributed to hypervisors, such as load balancing, NAT, VPN, and physical network uplinks.
Some services, such as Tier-0 gateways, are limited to a single instance per NSX Edge node.
However, most services can coexist in these nodes.
NSX Edge nodes are grouped in one or more edge clusters, representing a pool of capacity for
NSX services.
An NSX Edge node can be deployed as a virtual appliance, or installed on bare-metal hardware.
The edge node on bare-metal hardware can have better performance capabilities at the expense
of more difficult deployment and limited deployment topology use cases. For details on the
trade-offs of using virtual or bare-metal NSX Edges, see the NSX documentation.
NSX Edge virtual appliance deployed by using SDDC Manager
  Benefits:
  - Deployment and life cycle management by using SDDC Manager workflows that call NSX Manager
  - Automated password management by using SDDC Manager
  - Benefits from vSphere HA recovery
  - Can be used across availability zones
  - Easy to scale up by modifying the specification of the virtual appliance
  Drawbacks:
  - Might not provide the best performance in individual customer scenarios

NSX Edge virtual appliance deployed by using NSX Manager
  Benefits:
  - Benefits from vSphere HA recovery
  - Can be used across availability zones
  - Easy to scale up by modifying the specification of the virtual appliance
  Drawbacks:
  - Might not provide the best performance in individual customer scenarios
  - Manually deployed by using NSX Manager
  - Manual password management by using NSX Manager
  - Cannot be used to support Application Virtual Networks (AVNs) in the management domain

Bare-metal NSX Edge appliance
  Benefits:
  - Might provide better performance in individual customer scenarios
  Drawbacks:
  - Has hardware compatibility requirements
  - Requires individual hardware life cycle management and monitoring of failures, firmware, and drivers
  - Requires manual password management
  - Must be manually deployed and connected to the environment
  - Requires manual recovery after hardware failure
  - Requires deploying a bare-metal NSX Edge appliance in each availability zone for network failover
  - Deploying a bare-metal edge in each availability zone requires considering asymmetric routing
  - Requires edge fault domains if more than one edge is deployed in each availability zone for Active/Standby Tier-0/Tier-1 gateways
For detailed sizing according to the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.
Network Design for the NSX Edge Nodes for VMware Cloud
Foundation
In each VMware Cloud Foundation instance, you implement an NSX Edge configuration with a
single N-VDS. You connect the uplink network interfaces of the edge appliance to VLAN trunk
port groups that are connected to particular physical NICs on the host.
If you plan to deploy multiple VMware Cloud Foundation instances, apply the same network
design to the NSX Edge cluster in the second and other additional VMware Cloud Foundation
instances.
(Diagram: the uplink interfaces of the edge N-VDS connect through VLAN trunk port groups mapped to vmnic0 and vmnic1 on the ESXi host, which uplink to the ToR switches.)
Uplink Policy Design for the NSX Edge Nodes for VMware Cloud Foundation
A transport node can participate in an overlay and VLAN network. Uplink profiles define policies
for the links from the NSX Edge transport nodes to top of rack switches. Uplink profiles are
containers for the properties or capabilities for the network adapters. Uplink profiles are applied
to the N-VDS of the edge node.
Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
Teaming can be configured by using the default teaming policy or a user-defined named teaming
policy. You can use named teaming policies to pin traffic segments to designated edge uplinks.
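The relationship between a default teaming policy and named teaming policies can be sketched as a data structure. The profile and policy names below are examples, not values mandated by this design; the teaming policy identifiers (FAILOVER_ORDER, LOADBALANCE_SRCID) follow the NSX uplink profile terminology.

```python
# Sketch of an edge uplink profile: overlay traffic uses the default
# teaming policy across both uplinks, while each uplink VLAN segment is
# pinned to one uplink by a named teaming policy so that every BGP
# peering leaves through a deterministic physical path.
from dataclasses import dataclass, field

@dataclass
class TeamingPolicy:
    policy: str                 # "FAILOVER_ORDER" or "LOADBALANCE_SRCID"
    active_uplinks: list

@dataclass
class UplinkProfile:
    name: str
    default_teaming: TeamingPolicy
    named_teamings: dict = field(default_factory=dict)

edge_profile = UplinkProfile(
    name="edge-uplink-profile",                      # example name
    default_teaming=TeamingPolicy("LOADBALANCE_SRCID",
                                  ["uplink-1", "uplink-2"]),
    named_teamings={
        "uplink1-pinned": TeamingPolicy("FAILOVER_ORDER", ["uplink-1"]),
        "uplink2-pinned": TeamingPolicy("FAILOVER_ORDER", ["uplink-2"]),
    },
)

for name, teaming in edge_profile.named_teamings.items():
    print(name, teaming.active_uplinks)
```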
Table 8-12. NSX Edge Design Requirements for VMware Cloud Foundation
VCF-NSX-EDGE-REQD-CFG-003
  Requirement: Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.
  Justification: A dedicated edge overlay network provides support for edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters.
  Implication: You must have routing between the VLANs for edge overlay and host overlay. You must allocate another VLAN in the data center infrastructure for edge overlay.
Table 8-13. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-005
  Requirement: Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.
  Justification: The RTEP network must be on a VLAN that is different from the edge overlay VLAN. This is an NSX requirement that provides support for configuring a different MTU size per network.
  Implication: You must allocate another VLAN in the data center infrastructure.
Table 8-14. NSX Edge Design Recommendations for VMware Cloud Foundation

VCF-NSX-EDGE-RCMD-CFG-001
  Recommendation: Use appropriately sized NSX Edge virtual appliances.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: You must provide sufficient compute resources to support the chosen appliance size.

VCF-NSX-EDGE-RCMD-CFG-002
  Recommendation: Deploy the NSX Edge virtual appliances to the default vSphere cluster of the workload domain, sharing the cluster between the workloads and the edge appliances.
  Justification: Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment. Keeps the management components located in the same domain and cluster, isolated from customer workloads.
  Implication: Workloads and NSX Edges share the same compute resources.

VCF-NSX-EDGE-RCMD-CFG-003
  Recommendation: Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster of the workload domain.
  Justification: Creates the NSX Edge cluster for satisfying the requirements for availability and scale.
  Implication: None.

VCF-NSX-EDGE-RCMD-CFG-004
  Recommendation: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster.
  Justification: Keeps the NSX Edge nodes running on different ESXi hosts for high availability.
  Implication: None.
VCF-NSX-EDGE-RCMD-CFG-005
  Recommendation: In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.
  Justification: The NSX Edge nodes are part of the north-south data path for overlay segments. vSphere HA restarts the NSX Edge appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the edge nodes are offline lose connectivity only for a short time. Setting the restart priority to high reserves the highest priority for future needs.
  Implication: If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer.
Table 8-15. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud
Foundation
North-South Routing
The routing design considers different levels of routing in the environment, such as number and
type of gateways in NSX, dynamic routing protocol, and others.
Table 8-17. Considerations for the Operating Model for North-South Service Routers
(Columns: North-South Service Router Operating Model, Description, Benefits, Drawbacks)
Figure 8-6. BGP North-South Routing for VMware Cloud Foundation Instances with a Single Availability Zone
(Diagram: in VCF Instance A, the Tier-0 gateway service routers on NSX Edge nodes 1 and 2 peer over eBGP, with ECMP and optional BFD, with the ToR switches in data center A across two uplink VLANs, and receive a default route. The Tier-0 and Tier-1 gateway distributed routers span the NSX Edge nodes and ESXi hosts 1-4 within the SDDC BGP ASN.)
Figure 8-7. BGP North-South Routing for VMware Cloud Foundation Instances with Multiple Availability Zones
(Diagram: in VCF Instance A, the Tier-0 gateway service routers on the NSX Edge nodes connect over the uplink VLANs, while the Tier-0 and Tier-1 gateway distributed routers span the NSX Edge nodes and the ESXi hosts in both availability zones within the SDDC BGP ASN.)
Local egress allows traffic to leave from any location that the network spans. Using local egress
would require controlling local ingress to prevent asymmetrical routing. This design does not
use local egress. Instead, it uses preferred and failover VMware Cloud Foundation instances for
all networks.
Figure 8-8. BGP North-South Routing for VMware Cloud Foundation Instances with NSX Federation
(Diagram: global and local gateway objects spanning the VMware Cloud Foundation instances.)
Each VMware Cloud Foundation instance that is in the scope of a Tier-0 gateway can be
configured as primary or secondary. A primary instance passes traffic for any other SDN service
such as Tier-0 logical segments or Tier-1 gateways. A secondary instance routes traffic locally but
does not egress traffic outside the SDN or advertise networks in the data center.
When deploying an additional VMware Cloud Foundation instance, the Tier-0 gateway in the first
instance is extended to the new instance.
In this design, the Tier-0 gateway in each VMware Cloud Foundation instance is configured as
primary. Although the Tier-0 gateway technically supports local-egress, the design does not
recommend the use of local-egress. Ingress and egress traffic is controlled at the Tier-1 gateway
level.
Each VMware Cloud Foundation instance has its own NSX Edge cluster with associated uplink
VLANs for north-south traffic flow for that instance. The Tier-0 gateway in each instance peers
with the top of rack switches over eBGP.
Figure 8-9. BGP Peering to Top of Rack Switches for VMware Cloud Foundation Instances with NSX Federation
(Diagram: in each VMware Cloud Foundation instance, the NSX Edge cluster peers with the ToR switches of the data center network over the management VLAN. The Tier-0 gateway service routers run active/active with iBGP inter-SR routing, and the global Tier-0 gateway is primary at both locations. Outbound, no segments attach to the Tier-0 gateway; global and local segments attach to Tier-1 gateways that are primary at their location. All NSX SDN components share the SDDC BGP ASN.)
Any logical segments connected to the Tier-1 gateway follow the span of the Tier-1 gateway. If
the Tier-1 gateway spans several VMware Cloud Foundation instances, any segments connected
to that gateway become available in both instances.
Using Tier-1 gateways enables more granular control of logical segments in the first and second
VMware Cloud Foundation instances. You use three Tier-1 gateways - one in each VMware
Cloud Foundation instance for segments that are local to the instance, and one for segments
that span the two instances.
Table 8-18. Location Configuration of the Tier-1 Gateways for Multiple VMware Cloud Foundation Instances
(Columns: Tier-1 Gateway, First VMware Cloud Foundation Instance, Second VMware Cloud Foundation Instance, Ingress and Egress Traffic)
The Tier-1 gateway advertises its networks to the connected local-instance unit of the Tier-0
gateway. In the case of primary-secondary location configuration, the Tier-1 gateway advertises
its networks only to the Tier-0 gateway unit in the location where the Tier-1 gateway is primary.
The Tier-0 gateway unit then re-advertises those networks to the data center in the sites
where that Tier-1 gateway is primary. During failover of the components in the first VMware
Cloud Foundation instance, an administrator must manually set the Tier-1 gateway in the second
VMware Cloud Foundation instance as primary. Then, networks become advertised through the
Tier-1 gateway unit in the second instance.
In a Multiple Instance-Multiple Availability Zone topology, the same Tier-0 and Tier-1 gateway
architecture applies. The ESXi transport nodes from the second availability zone are also
attached to the Tier-1 gateway, as shown in Figure 8-7, BGP North-South Routing for VMware
Cloud Foundation Instances with Multiple Availability Zones.
BGP Routing
The BGP routing design has the following characteristics:
n Is a proven protocol that is designed for peering between networks under independent
administrative control - data center networks and the NSX SDN.
Note These design recommendations do not include BFD. However, if faster convergence than
BGP timers is required, you must enable BFD on the physical network and also on the NSX Tier-0
gateway.
Table 8-19. BGP Routing Design Requirements for VMware Cloud Foundation
VCF-NSX-BGP-REQD-CFG-001
  Design Requirement: To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs and each edge node in the cluster has an interface on each VLAN.
  Justification: Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.
  Implication: Additional VLANs are required.
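The two-VLAN ECMP layout in VCF-NSX-BGP-REQD-CFG-001 can be sketched as a simple addressing plan. This is an illustrative sketch only - the VLAN IDs, subnets, SVI placement, and edge node names are assumptions, not values mandated by this design.

```python
import ipaddress

# Hypothetical uplink VLANs for ECMP between the Tier-0 gateway and the ToR
# switches. VLAN IDs and subnets are illustrative assumptions.
UPLINK_VLANS = {
    2711: ipaddress.ip_network("172.27.11.0/24"),
    2712: ipaddress.ip_network("172.27.12.0/24"),
}

def build_uplink_plan(edge_nodes):
    """Give every edge node one interface on each uplink VLAN.

    Per VCF-NSX-BGP-REQD-CFG-001, the ToR switch holds an SVI (.1 here) on
    each VLAN, and every edge node gets its own host address on both VLANs.
    """
    plan = []
    for vlan, subnet in UPLINK_VLANS.items():
        hosts = list(subnet.hosts())
        svi = hosts[0]  # ToR SVI, one per VLAN
        for i, node in enumerate(edge_nodes, start=1):
            plan.append({"node": node, "vlan": vlan,
                         "ip": str(hosts[i]), "svi": str(svi)})
    return plan

plan = build_uplink_plan(["edge1", "edge2"])
```

With two edge nodes, the plan yields four interfaces - each node on each VLAN - which is what gives the Tier-0 gateway two equal-cost paths per neighbor.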
VCF-NSX-BGP-REQD-CFG-003
  Design Requirement: Create a VLAN transport zone for edge uplink traffic.
  Justification: Enables the configuration of VLAN segments on the N-VDS in the edge nodes.
  Implication: Additional VLAN transport zones are required if the edge nodes are not connected to the same top of rack switch pair.
VCF-NSX-BGP-REQD-CFG-004
  Design Requirement: Deploy a Tier-1 gateway and connect it to the Tier-0 gateway.
  Justification: Creates a two-tier routing architecture. Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services.
  Implication: A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.
Table 8-20. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-006
  Design Requirement: Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.
  Justification: Because the NSX Edge nodes will fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in.
  Implication: You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.
VCF-NSX-BGP-REQD-CFG-007
  Design Requirement: Provide this SVI configuration on the top of rack switches:
  n In the second availability zone, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.
  n Make the top of rack switch SVI in both availability zones part of a common stretched Layer 2 network between the availability zones.
  Justification: Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs.
  Implication: You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.
VCF-NSX-BGP-REQD-CFG-008
  Design Requirement: Provide this VLAN configuration:
  n Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack switches or leaf switches).
  n The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each NSX Edge node has an interface on each VLAN.
  Justification: Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.
  Implication:
  n Extra VLANs are required.
  n Requires stretching uplink VLANs between availability zones.
VCF-NSX-BGP-REQD-CFG-009
  Design Requirement: Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.
  Justification: Used in a route map to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors in the second availability zone.
  Implication: You must manually create an IP prefix list that is identical to the default one.
VCF-NSX-BGP-REQD-CFG-010
  Design Requirement: Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice.
  Justification:
  n Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
  n Ensures that all ingress traffic passes through the first availability zone.
  Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
VCF-NSX-BGP-REQD-CFG-011
  Design Requirement: Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.
  Justification: Used in a route map to configure local preference on the learned default route for BGP neighbors in the second availability zone.
  Implication: You must manually create an IP prefix list that is identical to the default one.
VCF-NSX-BGP-REQD-CFG-012
  Design Requirement: Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0, and assign a lower local preference, for example, 80, to the learned default route and a lower local preference, for example, 90, to any other learned routes.
  Justification:
  n Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
  n Ensures that all egress traffic passes through the first availability zone.
  Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
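The prefix-list and route-map requirements above (VCF-NSX-BGP-REQD-CFG-009 through CFG-012) can be sketched as NSX Policy API request bodies. This is a hedged sketch: the field names follow the NSX Policy API route-map schema as understood here, and the ASN 65001, policy paths, and IDs are placeholder assumptions, not values from this design.

```python
def prefix_list_payload(networks):
    """NSX Policy API PrefixList body permitting the given networks.
    Passing "ANY" mirrors the default any-network prefix list."""
    return {"prefixes": [{"network": n, "action": "PERMIT"} for n in networks]}

def route_map_payload(prefix_list_path, local_as=None, prepend_count=0,
                      local_preference=None):
    """Route map entry matching a prefix list; optionally prepends the
    Tier-0 local AS (outbound map) or lowers local preference (inbound map).
    Field names are assumed from the NSX Policy API route-map schema."""
    entry = {"action": "PERMIT", "prefix_list_matches": [prefix_list_path]}
    set_clause = {}
    if local_as and prepend_count:
        # AS-path prepend is expressed as a space-separated ASN string.
        set_clause["as_path_prepend"] = " ".join([str(local_as)] * prepend_count)
    if local_preference is not None:
        set_clause["local_preference"] = local_preference
    if set_clause:
        entry["set"] = set_clause
    return {"entries": [entry]}

# Outbound: prepend the Tier-0 local AS twice (VCF-NSX-BGP-REQD-CFG-010).
out_map = route_map_payload("/infra/tier-0s/t0/prefix-lists/pl-any",
                            local_as=65001, prepend_count=2)
# Inbound: local preference 80 on the learned default route (...-CFG-012).
in_map = route_map_payload("/infra/tier-0s/t0/prefix-lists/pl-default",
                           local_preference=80)
default_pl = prefix_list_payload(["0.0.0.0/0"])
```

The second-availability-zone neighbors would then reference `in_map` and `out_map` as their In and Out filters (VCF-NSX-BGP-REQD-CFG-013).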
VCF-NSX-BGP-REQD-CFG-013
  Design Requirement: Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively.
  Justification: Makes the path in and out of the second availability zone less preferred because the AS path is longer and the local preference is lower. As a result, all traffic passes through the first zone.
  Implication: The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
Table 8-21. BGP Routing Design Requirements for NSX Federation in VMware Cloud Foundation
VCF-NSX-BGP-REQD-CFG-014
  Design Requirement: Extend the Tier-0 gateway to the second VMware Cloud Foundation instance.
  Justification:
  n Supports ECMP north-south routing on all nodes in the NSX Edge cluster.
  n Enables support for cross-instance Tier-1 gateways and cross-instance network segments.
  Implication: The Tier-0 gateway deployed in the second instance is removed.
VCF-NSX-BGP-REQD-CFG-018
  Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the stretched Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary.
  Justification:
  n Enables cross-instance network span between the first and second VMware Cloud Foundation instances.
  n Enables deterministic ingress and egress traffic for the cross-instance network.
  n If a VMware Cloud Foundation instance failure occurs, enables deterministic failover of the Tier-1 traffic flow.
  n During the recovery of the inaccessible VMware Cloud Foundation instance, enables deterministic failback of the Tier-1 traffic flow, preventing unintended asymmetrical routing.
  n Eliminates the need to use BGP attributes in the first and second VMware Cloud Foundation instances to influence location preference and failover.
  Implication: You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager.
VCF-NSX-BGP-REQD-CFG-019
  Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the local Tier-1 gateway for that VMware Cloud Foundation instance.
  Justification:
  n Enables instance-specific networks to be isolated to their specific instances.
  n Enables deterministic flow of ingress and egress traffic for the instance-specific networks.
  Implication: You can use the service router that is created for the Tier-1 gateway for networking services. However, such configuration is not required for network connectivity.
VCF-NSX-BGP-REQD-CFG-020
  Design Requirement: Set each local Tier-1 gateway only as primary in that instance. Avoid setting the gateway as secondary in the other instances.
  Justification: Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference.
  Implication: None.
Table 8-22. BGP Routing Design Recommendations for VMware Cloud Foundation
(columns: Recommendation ID | Design Recommendation | Justification | Implication)
VCF-NSX-BGP-RCMD-CFG-002
  Design Recommendation: Configure the BGP Keep Alive Timer to 4 and Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway.
  Justification: Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic.
  Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. These timers must be aligned with the data center fabric design of your organization.
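The keepalive/hold relationship behind VCF-NSX-BGP-RCMD-CFG-002 can be checked with a small sketch. The 3x ratio is the common BGP convention implied by the 4/12-second values, not a rule stated elsewhere in this guide.

```python
def validate_bgp_timers(keepalive, hold):
    """Check the conventional relationship between BGP timers: the hold
    timer should be at least three times the keepalive, matching the
    4/12-second values in VCF-NSX-BGP-RCMD-CFG-002."""
    if keepalive <= 0 or hold <= 0:
        raise ValueError("timers must be positive")
    return hold >= 3 * keepalive

assert validate_bgp_timers(4, 12)       # the recommended values
assert not validate_bgp_timers(10, 20)  # hold too short for this keepalive
```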
Table 8-23. BGP Routing Design Recommendations for NSX Federation in VMware Cloud
Foundation
Virtual Segments
Geneve provides the overlay capability to create isolated, multi-tenant broadcast domains in NSX
across data center fabrics, and enables customers to create elastic, logical networks that span
physical network boundaries, and physical locations.
Transport Zones
A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed
Switch name. You can configure one or more VLAN transport zones and a single overlay
transport zone per virtual switch. A transport zone does not represent a security boundary.
VMware Cloud Foundation supports a single overlay transport zone per NSX Instance. All
vSphere clusters, within and across workload domains that share the same NSX instance
subsequently share the same overlay transport zone.
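The transport-zone rule above - a single overlay transport zone and one or more VLAN transport zones per virtual switch - can be expressed as a small validation sketch. The dict shape is an illustrative model, not the NSX API schema.

```python
def validate_transport_zones(zones):
    """Enforce the rule that at most one overlay transport zone is attached
    per virtual switch; any number of VLAN transport zones is allowed.
    `zones` is a list of {"name": ..., "type": "OVERLAY" | "VLAN"} dicts."""
    overlay = [z for z in zones if z["type"] == "OVERLAY"]
    if len(overlay) > 1:
        raise ValueError("only one overlay transport zone per virtual switch")
    return True

# A typical attachment: one overlay zone plus VLAN zones for uplinks.
assert validate_transport_zones(
    [{"name": "tz-overlay", "type": "OVERLAY"},
     {"name": "tz-vlan-uplink", "type": "VLAN"},
     {"name": "tz-vlan-host", "type": "VLAN"}])
```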
[Figure: NSX virtual switch types - an N-VDS on NSX Edge Node 1 and Edge Node 2, and a vSphere Distributed Switch with NSX on the ESXi hosts]
Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
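The two teaming options can be modeled as a short sketch. The policy names follow the NSX uplink-profile naming convention (LOADBALANCE_SRCID, FAILOVER_ORDER) and should be treated as assumptions here.

```python
def active_uplinks(policy, uplinks):
    """Model of the two uplink-profile teaming options described above:
    load balance source keeps all uplinks active, while failover order
    keeps only the first uplink active and the rest on standby."""
    if policy == "LOADBALANCE_SRCID":
        return list(uplinks)
    if policy == "FAILOVER_ORDER":
        return uplinks[:1]
    raise ValueError("unknown teaming policy")

assert active_uplinks("LOADBALANCE_SRCID", ["uplink-1", "uplink-2"]) == [
    "uplink-1", "uplink-2"]
assert active_uplinks("FAILOVER_ORDER", ["uplink-1", "uplink-2"]) == ["uplink-1"]
```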
Hierarchical Two-Tier
  The ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet. The receiving ESXi host replicates the traffic to the ESXi hosts in its local subnet. The source ESXi host transport node knows about the groups based on information it has received from the control plane. The system can select an arbitrary ESXi host transport node as the mediator for the source subnet if the remote mediator ESXi host node is available.
Head-End
  The ESXi host transport node at the origin of the frame to be flooded on a segment sends a copy to every other ESXi host transport node that is connected to this segment.
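The hierarchical two-tier grouping described above can be sketched: group transport nodes by TEP subnet and pick one host per subnet as the replication mediator. The /24 grouping, host names, and first-host mediator selection are illustrative assumptions, not the actual NSX selection algorithm.

```python
import ipaddress

def pick_mediators(tep_ips, prefix=24):
    """Group transport-node TEPs by IP subnet and pick one host per subnet
    as the replication mediator, mirroring the hierarchical two-tier mode.
    The prefix length and first-host selection are illustrative only."""
    groups = {}
    for host, ip in sorted(tep_ips.items()):
        subnet = ipaddress.ip_interface(f"{ip}/{prefix}").network
        groups.setdefault(subnet, []).append(host)
    return {str(subnet): hosts[0] for subnet, hosts in groups.items()}

mediators = pick_mediators({
    "esxi-01": "172.16.10.11", "esxi-02": "172.16.10.12",
    "esxi-03": "172.16.20.11", "esxi-04": "172.16.20.12"})
```

With two TEP subnets, the source host replicates once to each remote mediator, and each mediator fans the frame out within its local subnet.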
VCF-NSX-OVERLAY-REQD-CFG-002
  Design Requirement: Configure each ESXi host as a transport node without using transport node profiles.
  Justification:
  n Enables the participation of ESXi hosts and the virtual machines on them in NSX overlay and VLAN networks.
  n Transport node profiles can only be applied at the cluster level. Because in an environment with multiple availability zones each availability zone is connected to a different set of VLANs, you cannot use a transport node profile.
  Implication: You must configure each transport node with an uplink profile individually.
VCF-NSX-OVERLAY-REQD-CFG-004
  Design Requirement: Create a single overlay transport zone for all overlay traffic across the workload domain and NSX Edge nodes.
  Justification:
  n Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing.
  n Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.
  Implication: All clusters in all workload domains that share the same NSX Manager share the same transport zone.
Table 8-25. Overlay Design Requirements for VMware Cloud Foundation (continued)
VCF-NSX-OVERLAY-REQD-CFG-005
  Design Requirement: Create a VLAN transport zone for uplink VLAN traffic that is applied only to NSX Edge nodes.
  Justification: Ensures that uplink VLAN segments are configured on the NSX Edge transport nodes.
  Implication: If VLAN segments are required on hosts, you must create another VLAN transport zone for the host transport nodes only.
VCF-NSX-OVERLAY-RCMD-CFG-001
  Design Recommendation: Use DHCP to assign IP addresses to the host TEP interfaces.
  Justification: Required for deployments where a cluster spans Layer 3 network domains, such as multiple availability zones and management clusters that span Layer 3 domains.
  Implication: A DHCP server is required for the host overlay VLANs.
Benefits:
n Supports IP mobility with dynamic routing.
n Limits the number of VLANs needed in the data center fabric.
n In an environment with multiple availability zones, limits the number of VLANs needed to expand from an architecture with one availability zone to an architecture with two availability zones.
Requirement: Requires routing between the data center fabric and the NSX Edge nodes.
Uses the data center fabric for the network segment and the next-hop gateway.
[Figure: Application virtual networks - the ToR switches connect over ECMP to the cross-instance and local-instance Tier-1 gateways; the cross-instance NSX segment, with its own IP subnet, hosts vRealize Suite Lifecycle Manager, the clustered Workspace ONE Access, and vRealize Suite components with cross-instance mobility; the local-instance NSX segment, with its own IP subnet, hosts the local-instance vRealize Suite components]
For the design for specific vRealize Suite components, see this design and VMware Validated
Solutions. For identity and access management design for NSX, see Identity and Access
Management for VMware Cloud Foundation.
Important If you plan to use NSX Federation in the management domain, create the AVNs
before you enable the federation. Creating AVNs in an environment where NSX Federation is
already active is not supported.
With NSX Federation, an NSX segment can span multiple instances of NSX and VMware Cloud
Foundation. A single network segment can be available in different physical locations over
the NSX SDN. In an environment with multiple VMware Cloud Foundation instances, the cross-
instance NSX network in the management domain is extended between the first two instances.
This configuration provides IP mobility for management components which fail over from the first
to the second instance.
Table 8-28. Application Virtual Network Design Requirements for VMware Cloud Foundation
VCF-NSX-AVN-REQD-CFG-001
  Design Requirement: Create one cross-instance NSX segment for the components of a vRealize Suite application or another solution that requires mobility between VMware Cloud Foundation instances.
  Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as vRealize Suite, without a complex physical network configuration. The components of the vRealize Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
  Implication: Each NSX segment requires a unique IP address space.
VCF-NSX-AVN-REQD-CFG-002
  Design Requirement: Create one or more local-instance NSX segments for the components of a vRealize Suite application or another solution that are assigned to a specific VMware Cloud Foundation instance.
  Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as vRealize Suite, without a complex physical network configuration.
  Implication: Each NSX segment requires a unique IP address space.
Table 8-29. Application Virtual Network Design Requirements for NSX Federation in VMware
Cloud Foundation
VCF-NSX-AVN-REQD-CFG-003
  Design Requirement: Extend the cross-instance NSX segment to the second VMware Cloud Foundation instance.
  Justification: Enables workload mobility without a complex physical network configuration. The components of a vRealize Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
  Implication: Each NSX segment requires a unique IP address space.
VCF-NSX-AVN-REQD-CFG-004
  Design Requirement: In each VMware Cloud Foundation instance, create additional local-instance NSX segments.
  Justification: Enables workload mobility within a VMware Cloud Foundation instance without complex physical network configuration. Each VMware Cloud Foundation instance should have network segments to support workloads which are isolated to that VMware Cloud Foundation instance.
  Implication: Each NSX segment requires a unique IP address space.
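The recurring implication above - each NSX segment requires a unique IP address space - can be checked mechanically before creating segments. The segment names and subnets below are illustrative assumptions.

```python
import ipaddress

def check_unique_segments(segments):
    """Verify that each NSX segment uses a unique, non-overlapping IP
    address space, as required by the AVN design. `segments` maps a
    segment name to its CIDR."""
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in segments.items()}
    items = list(nets.items())
    for i, (name_a, net_a) in enumerate(items):
        for name_b, net_b in items[i + 1:]:
            if net_a.overlaps(net_b):
                raise ValueError(f"{name_a} overlaps {name_b}")
    return True

# Illustrative cross-instance and local-instance segment subnets.
assert check_unique_segments({
    "xreg-seg": "192.168.11.0/24",
    "local-seg-a": "192.168.31.0/24",
    "local-seg-b": "192.168.32.0/24"})
```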
Table 8-30. Application Virtual Network Design Recommendations for VMware Cloud
Foundation
A standalone Tier-1 gateway is created to provide load balancing services with a service interface
on the cross-instance application virtual network.
Figure 8-12. NSX Logical Load Balancing Design for VMware Cloud Foundation
[Figure: The NSX load balancer service runs on an NSX Tier-1 gateway in the NSX Edge cluster and provides load balancing to the supporting infrastructure; the NSX Manager cluster manages the NSX transport nodes in the management cluster]
Table 8-31. Load Balancing Design Requirements for VMware Cloud Foundation
VCF-NSX-LB-REQD-CFG-002
  Design Requirement: When creating load balancing services for Application Virtual Networks, connect the standalone Tier-1 gateway to the cross-instance NSX segments.
  Justification: Provides load balancing to applications connected to the cross-instance network.
  Implication: You must connect the gateway to each network that requires load balancing.
Table 8-32. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-NSX-LB-REQD-CFG-005
  Design Requirement: Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment.
  Justification: Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance.
  Implication: You must connect the gateway to each network that requires load balancing.
n SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation
[Figure: SDDC Manager logical design - SDDC Manager communicates over its API with the VMware Depot, the vCenter Server instances, vRealize Suite Lifecycle Manager, and solution and user authentication services]
n Life cycle management of the virtual infrastructure components in all workload domains and
of vRealize Suite Lifecycle Manager
n Certificate management
n Backup configuration
Single availability zone:
n A single SDDC Manager appliance is deployed on the management network.
n vSphere HA protects the SDDC Manager appliance.
Multiple availability zones:
n A single SDDC Manager appliance is deployed on the management network.
n vSphere HA protects the SDDC Manager appliance.
n A vSphere DRS rule specifies that the SDDC Manager appliance should run on an ESXi host in the first availability zone.
Table 9-2. SDDC Manager Design Requirements for VMware Cloud Foundation
Table 9-3. SDDC Manager Design Recommendations for VMware Cloud Foundation
VCF-SDDCMGR-RCMD-CFG-001
  Design Recommendation: Connect SDDC Manager to the Internet for downloading software bundles.
  Justification: SDDC Manager must be able to download install and upgrade software bundles for deployment of VI workload domains and solutions, and for upgrade from a repository.
  Implication: The rules of your organization might not permit direct access to the Internet. In this case, you must download software bundles for SDDC Manager manually.
VCF-SDDCMGR-RCMD-CFG-002
  Design Recommendation: Configure a network proxy to connect SDDC Manager to the Internet.
  Justification: Protects SDDC Manager against external attacks from the Internet.
  Implication: The proxy must not use authentication because SDDC Manager does not support proxy with authentication.
VCF-SDDCMGR-RCMD-CFG-003
  Design Recommendation: To check for and download software bundles, configure SDDC Manager with a VMware Customer Connect account with VMware Cloud Foundation entitlement.
  Justification: Software bundles for VMware Cloud Foundation are stored in a repository that is secured with access controls.
  Implication: Requires the use of a VMware Customer Connect user account with access to VMware Cloud Foundation licensing. Sites without an Internet connection can use the local upload option instead.
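The proxy constraint in VCF-SDDCMGR-RCMD-CFG-002 - SDDC Manager does not support proxy authentication - can be validated before configuring the proxy. The hostname below is a placeholder.

```python
from urllib.parse import urlsplit

def validate_sddc_manager_proxy(proxy_url):
    """Reject proxy URLs that embed credentials, since SDDC Manager does
    not support proxy with authentication (VCF-SDDCMGR-RCMD-CFG-002)."""
    parts = urlsplit(proxy_url)
    if parts.username or parts.password:
        raise ValueError("SDDC Manager does not support proxy authentication")
    return True

# An unauthenticated proxy passes the check; the host is a placeholder.
assert validate_sddc_manager_proxy("http://proxy.example.com:3128")
```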
You deploy vRealize Suite Lifecycle Manager by using SDDC Manager. SDDC Manager deploys
vRealize Suite Lifecycle Manager in VMware Cloud Foundation mode. In this mode, vRealize Suite
Lifecycle Manager is integrated with SDDC Manager, providing the following benefits:
n Integration with the SDDC Manager inventory to retrieve infrastructure details when creating
environments for Workspace ONE Access and vRealize Suite components, such as NSX
segments and vCenter Server details.
n Automation of the load balancer configuration when deploying Workspace ONE Access,
vRealize Operations, and vRealize Automation.
n Deployment details for vRealize Suite Lifecycle Manager environments are populated in the
SDDC Manager inventory and can be queried using the SDDC Manager API.
n Day-two workflows in SDDC Manager to connect vRealize Log Insight and vRealize
Operations to workload domains.
n The ability to manage password life cycle for Workspace ONE Access and vRealize Suite
components.
For information about deploying vRealize Suite components, see VMware Validated Solutions.
n Logical Design for vRealize Suite Lifecycle Manager for VMware Cloud Foundation
n Data Center and Environment Design for vRealize Suite Lifecycle Manager
n vRealize Suite Lifecycle Manager Design Requirements and Recommendations for VMware
Cloud Foundation
Logical Design
In a VMware Cloud Foundation environment, you use vRealize Suite Lifecycle Manager in VMware
Cloud Foundation mode. In this mode, vRealize Suite Lifecycle Manager is integrated with
VMware Cloud Foundation in the following way:
n SDDC Manager deploys the vRealize Suite Lifecycle Manager appliance. Then, you deploy the
vRealize Suite products that are supported by VMware Cloud Foundation by using vRealize
Suite Lifecycle Manager.
n Supported versions are controlled by the vRealize Suite Lifecycle Manager appliance and
Product Support Packs. See the VMware Interoperability Matrix.
n To orchestrate the deployment, patching, and upgrade of Workspace ONE Access and the
vRealize Suite products, vRealize Suite Lifecycle Manager communicates with SDDC Manager
and the management domain vCenter Server in the environment.
n SDDC Manager configures the load balancer for Workspace ONE Access, vRealize
Operations, and vRealize Automation.
[Figure: Logical design of vRealize Suite Lifecycle Manager - in VCF Instance A, vRealize Suite Lifecycle Manager runs in VMware Cloud Foundation mode, exposing a user interface and REST API to SDDC Manager, integrating with Workspace ONE Access for identity management, and performing life cycle management against its endpoints (vRealize Suite components, vCenter Server, and Workspace ONE Access) on shared storage; VCF Instance B provides its own Workspace ONE Access integration and shared storage]
Depending on the VMware Cloud Foundation topology deployed, vRealize Suite Lifecycle
Manager is deployed in one or more locations and is responsible for the life cycle of the vRealize
Suite components in one or more VMware Cloud Foundation instances.
VMware Cloud Foundation instances might be connected for the following reasons:
n Over-arching management of those instances from the same vRealize Suite deployments.
Single availability zone:
n A single vRealize Suite Lifecycle Manager appliance deployed on the cross-instance NSX segment.
n vSphere HA protects the vRealize Suite Lifecycle Manager appliance.
n Life cycle management for: Workspace ONE Access and vRealize Suite.
Multiple availability zones:
n A single vRealize Suite Lifecycle Manager appliance deployed on the cross-instance NSX segment.
n vSphere HA protects the vRealize Suite Lifecycle Manager appliance.
n A should-run vSphere DRS rule specifies that the vRealize Suite Lifecycle Manager appliance should run on an ESXi host in the first availability zone.
n Life cycle management for: Workspace ONE Access and vRealize Suite.
Multiple VMware Cloud Foundation instances:
n The vRealize Suite Lifecycle Manager instance in the first VMware Cloud Foundation instance provides life cycle management for Workspace ONE Access and vRealize Suite.
n vRealize Suite Lifecycle Manager in each additional VMware Cloud Foundation instance provides life cycle management for vRealize Log Insight.
vRealize Suite Lifecycle Manager must have routed access to the management VLAN through the
Tier-0 gateway in the NSX instance for the management domain.
[Figure: Network design - the vRealize Suite Lifecycle Manager appliance connects to a segment behind the cross-instance NSX Tier-1 gateway, alongside the local-instance NSX Tier-1 gateways]
Product Support
vRealize Suite Lifecycle Manager provides several methods to obtain and store product binaries
for the install, patch, and upgrade of the vRealize Suite products.
Method and Description:
Product Upload
  You can upload and discover product binaries on the vRealize Suite Lifecycle Manager appliance.
VMware Customer Connect
  You can integrate vRealize Suite Lifecycle Manager with VMware Customer Connect to access and download vRealize Suite product entitlements from an online depot over the Internet. This method simplifies, automates, and organizes the repository.
You create data centers and environments in vRealize Suite Lifecycle Manager to manage the life
cycle operations on the vRealize Suite products and to support the growth of the SDDC.
Construct Definition
Table 10-4. Logical Datacenter to vCenter Server Mappings in vRealize Suite Lifecycle Manager
Cross-instance
  vCenter Server mapping:
  n Management domain vCenter Server for the local VMware Cloud Foundation instance.
  n Management domain vCenter Server for an additional VMware Cloud Foundation instance.
  Purpose: Supports the deployment of cross-instance components, such as Workspace ONE Access, vRealize Operations, and vRealize Automation, including any per-instance collector components.
Local-instance
  vCenter Server mapping: Management domain vCenter Server for the local VMware Cloud Foundation instance.
  Purpose: Supports the deployment of vRealize Log Insight.
VMware Cloud Foundation Mode
n Infrastructure details for the deployed products, including vCenter Server, networking, DNS, and NTP information, are retrieved from the SDDC Manager inventory.
n Successful deployment details are synced back to the SDDC Manager inventory.
n Limited to one instance of each vRealize Suite product.
Note You can deploy new vRealize Suite products to the SDDC environment or import existing
product deployments.
Passwords
vRealize Suite Lifecycle Manager stores passwords in the locker repository which are referenced
during life cycle operations on data centers, environments, products, and integrations.
Table 10-7. Life Cycle Operations Use of Locker Passwords in vRealize Suite Lifecycle Manager
Products
n Product administrator password, for example, the admin password for an individual product.
n Product appliance password, for example, the root password for an individual product.
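Locker passwords are referenced during life cycle operations rather than passed inline. The sketch below assumes the "locker:password:&lt;id&gt;:&lt;alias&gt;" reference format used in vRealize Suite Lifecycle Manager payloads; the ID and alias values are placeholders.

```python
def locker_password_ref(vmid, alias):
    """Build a Locker password reference of the form assumed to be used in
    vRealize Suite Lifecycle Manager API payloads. Both arguments below
    are placeholder values for illustration."""
    return f"locker:password:{vmid}:{alias}"

ref = locker_password_ref("8a8b9c0d-1111-2222-3333-444455556666", "vrops-admin")
```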
Certificates
vRealize Suite Lifecycle Manager stores certificates in the Locker repository which can be
referenced during product life cycle operations. Externally provided certificates, such as
Certificate Authority-signed certificates, can be imported or certificates can be generated by
the vRealize Suite Lifecycle Manager appliance.
Licenses
vRealize Suite Lifecycle Manager stores licenses in the Locker repository which can be
referenced during product life cycle operations. Licenses can be validated and added to the
repository directly or imported through an integration with VMware Customer Connect.
Table 10-8. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud
Foundation
VCF-vRSLCM-REQD-CFG-001
  Design Requirement: Deploy a vRealize Suite Lifecycle Manager instance in the management domain of each VMware Cloud Foundation instance to provide life cycle management for vRealize Suite and Workspace ONE Access.
  Justification: Provides life cycle management operations for vRealize Suite applications and Workspace ONE Access.
  Implication: You must ensure that the required resources are available.
VCF-vRSLCM-REQD-CFG-004
  Design Requirement: Place the vRealize Suite Lifecycle Manager appliance on an overlay-backed (recommended) or VLAN-backed NSX network segment.
  Justification: Provides a consistent deployment model for management applications.
  Implication: You must use an implementation in NSX to support this networking configuration.
VCF-vRSLCM-REQD-CFG-005
  Design Requirement: Import vRealize Suite product licenses to the Locker repository for product life cycle operations.
  Justification:
  n You can review the validity, details, and deployment usage for the license across the vRealize Suite products.
  n You can reference and use licenses during product life cycle operations, such as deployment and license replacement.
  Implication: When using the API, you must specify the Locker ID for the license to be used in the JSON payload.
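Per the implication of VCF-vRSLCM-REQD-CFG-005, an API request must carry the Locker ID of the license in its JSON payload. The sketch below is illustrative only: the "locker:license:&lt;id&gt;:&lt;alias&gt;" reference shape and the productSpec/licenseRef keys are hypothetical names, not documented fields.

```python
import json

def license_ref(locker_id, alias):
    """Build a Locker license reference assumed to follow the
    "locker:license:<id>:<alias>" pattern; both values are placeholders."""
    return f"locker:license:{locker_id}:{alias}"

# Hypothetical payload fragment; the key names are illustrative assumptions.
payload = {"productSpec": {"licenseRef": license_ref(
    "0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9", "vrealize-suite-2019")}}
body = json.dumps(payload)
```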
VCF-vRSLCM-REQD-ENV-001
  Design Requirement: Configure datacenter objects in vRealize Suite Lifecycle Manager for local and cross-instance vRealize Suite deployments, and assign the management domain vCenter Server instance to each data center.
  Justification: You can deploy and manage the integrated vRealize Suite components across the SDDC as a group.
  Implication: You must manage a separate datacenter object for the products that are specific to each instance.
VCF-vRSLCM-REQD-ENV-003
  Design Requirement: If deploying vRealize Operations or vRealize Automation, create a cross-instance environment in vRealize Suite Lifecycle Manager.
  Justification:
  n Supports deployment and management of the integrated vRealize Suite products across VMware Cloud Foundation instances as a group.
  n Enables the deployment of instance-specific components, such as vRealize Operations remote collectors. In vRealize Suite Lifecycle Manager, you can deploy and manage vRealize Operations remote collector objects only in an environment that contains the associated cross-instance components.
  Implication: You can manage instance-specific components, such as remote collectors, only in an environment that is cross-instance.
VCF-vRSLCM-REQD-SEC-001
  Design Requirement: Use the custom vCenter Server role for vRealize Suite Lifecycle Manager that has the minimum privileges required to support the deployment and upgrade of vRealize Suite products.
  Justification: vRealize Suite Lifecycle Manager accesses vSphere with the minimum set of permissions that are required to support the deployment and upgrade of vRealize Suite products. SDDC Manager automates the creation of the custom role.
  Implication: You must maintain the permissions required by the custom role.
VCF-vRSLCM-REQD-SEC-002
  Design Requirement: Use the service account in vCenter Server for application-to-application communication from vRealize Suite Lifecycle Manager to vSphere. Assign global permissions using the custom role.
  Justification:
  n Provides the following access control features:
    n vRealize Suite Lifecycle Manager accesses vSphere with the minimum set of required permissions.
    n You can introduce improved accountability in tracking request-response interactions between the components of the SDDC.
  n SDDC Manager automates the creation of the service account.
  Implication: You must maintain the life cycle and availability of the service account outside of SDDC Manager password rotation.
Table 10-9. vRealize Suite Lifecycle Manager Design Requirements for Stretched Clusters in
VMware Cloud Foundation
VCF-vRSLCM-REQD-CFG-006
  Design Requirement: For multiple availability zones, add the vRealize Suite Lifecycle Manager appliance to the VM group for the first availability zone.
  Justification: Ensures that, by default, the vRealize Suite Lifecycle Manager appliance is powered on a host in the first availability zone.
  Implication: If vRealize Suite Lifecycle Manager is deployed after the creation of the stretched management cluster, you must add the vRealize Suite Lifecycle Manager appliance to the VM group manually.
Table 10-10. vRealize Suite Lifecycle Manager Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID: VCF-vRSLCM-REQD-CFG-007
Design Requirement: Configure the DNS settings for the vRealize Suite Lifecycle Manager appliance to use DNS servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency in the event of an outage of external services for a VMware Cloud Foundation instance.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the DNS settings of the vRealize Suite Lifecycle Manager appliance must be updated.

Requirement ID: VCF-vRSLCM-REQD-CFG-008
Design Requirement: Configure the NTP settings for the vRealize Suite Lifecycle Manager appliance to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on the vRealize Suite Lifecycle Manager appliance must be updated.
Table 10-11. vRealize Suite Lifecycle Manager Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-vRSLCM-RCMD-LCM-001
Design Recommendation: Obtain product binaries for install, patch, and upgrade in vRealize Suite Lifecycle Manager from VMware Customer Connect.
Justification:
- You can upgrade vRealize Suite products based on their general availability and endpoint interoperability rather than being listed as part of the VMware Cloud Foundation bill of materials (BOM).
- You can deploy and manage binaries in an environment that does not allow access to the Internet or that is a dark site.
Implication: The site must have an Internet connection to use VMware Customer Connect. Sites without an Internet connection should use the local upload option instead.

Recommendation ID: VCF-vRSLCM-RCMD-SEC-001
Design Recommendation: Enable integration between vRealize Suite Lifecycle Manager and your corporate identity source by using the Workspace ONE Access instance.
Justification:
- Enables authentication to vRealize Suite Lifecycle Manager by using your corporate identity source.
- Enables authorization through the assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.
Implication: You must deploy and configure Workspace ONE Access to establish the integration between vRealize Suite Lifecycle Manager and your corporate identity sources.

Recommendation ID: VCF-vRSLCM-RCMD-SEC-002
Design Recommendation: Create corresponding security groups in your corporate directory services for vRealize Suite Lifecycle Manager roles:
- VCF
- Content Release Manager
- Content Developer
Justification: Streamlines the management of vRealize Suite Lifecycle Manager roles for users.
Implication:
- You must create the security groups outside of the SDDC stack.
- You must set the desired directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.
- Directory integration to authenticate users against an identity provider (IdP), such as Active Directory or LDAP.
- Access policies that consist of rules to specify criteria that users must meet to authenticate.
The Workspace ONE Access instance that is integrated with vRealize Suite Lifecycle Manager provides identity and access management services to vRealize Suite solutions that either run in a VMware Cloud Foundation instance or must be available across VMware Cloud Foundation instances.
For identity management design for a vRealize Suite product, see VMware Cloud Foundation Validated Solutions.
For identity and access management for components other than vRealize Suite, such as NSX, you can deploy a standalone Workspace ONE Access instance. See Identity and Access Management for VMware Cloud Foundation.
- Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation
- Integration Design for Workspace ONE Access with VMware Cloud Foundation
- Workspace ONE Access Design Requirements and Recommendations for VMware Cloud Foundation
Figure: Standard (single-node) Workspace ONE Access deployment in VCF Instance A. A primary Workspace ONE Access node, exposing a user interface and REST API, connects to an identity provider backed by directory services (for example, AD or LDAP) and serves vRealize Suite Lifecycle Manager and vRealize Suite components. Supporting components: Postgres. Supporting infrastructure: shared storage, DNS, NTP, SMTP.
Single availability zone:
- A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.

Multiple availability zones:
- A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.
- A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access node runs on a management ESXi host in the first availability zone.
Figure: Clustered Workspace ONE Access deployment in VCF Instance A. A three-node Workspace ONE Access cluster, exposing a user interface and REST API behind an NSX load balancer, connects to an identity provider backed by directory services (e.g. AD, LDAP) and serves cross-instance and local-instance solutions, including vRealize Suite Lifecycle Manager and vRealize Suite components. Supporting components: Postgres. Supporting infrastructure: shared storage, DNS, NTP, SMTP.
Single availability zone:
- A three-node Workspace ONE Access cluster behind an NSX load balancer and deployed on an overlay-backed (recommended) or VLAN-backed NSX segment is deployed in the first VMware Cloud Foundation instance.
- All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
- Each node of the three-node cluster is configured as a connector to any relevant identity providers.
- vSphere HA protects the Workspace ONE Access nodes.
- vSphere DRS anti-affinity rules ensure that the Workspace ONE Access nodes run on different ESXi hosts.
- An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.

Multiple availability zones:
- A three-node Workspace ONE Access cluster behind an NSX load balancer and deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
- Each node of the three-node cluster is configured as a connector to any relevant identity providers.
- vSphere HA protects the Workspace ONE Access nodes.
- A vSphere DRS anti-affinity rule ensures that the Workspace ONE Access nodes run on different ESXi hosts.
- A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access nodes run on management ESXi hosts in the first availability zone.
- An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.
For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.
Network Segment
This network design has the following features:
n All Workspace ONE Access components have routed access to the management VLAN
through the Tier-0 gateway in the NSX instance for the management domain.
n Routing to the management network and other external networks is dynamic and is based on
the Border Gateway Protocol (BGP).
Figure: Workspace ONE Access network segment in VCF Instance A, showing the Workspace ONE Access appliance (WSA Node 1) connected to the segment.
Load Balancing
A Workspace ONE Access cluster deployment requires a load balancer to manage connections
to the Workspace ONE Access services.
Load-balancing services are provided by NSX. During the deployment of the Workspace ONE
Access cluster or scale-out of a standard deployment, vRealize Suite Lifecycle Manager and
SDDC Manager coordinate to automate the configuration of the NSX load balancer. The load
balancer is configured with the following settings:
Table 11-4. Clustered Workspace ONE Access Load Balancer Configuration (continued)
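As an illustration of what such a load-balancer configuration involves, the following sketch assembles NSX Policy API-style bodies for a server pool and a virtual server. All display names, addresses, paths, and ports here are assumptions for illustration; SDDC Manager and vRealize Suite Lifecycle Manager own the real configuration, which must not be created by hand.

```python
import json

# Sketch of the kind of NSX Policy API payloads involved when fronting a
# clustered Workspace ONE Access deployment with a load balancer. Names,
# addresses, and ports are illustrative placeholders only.

def wsa_lb_pool(node_ips, port=443):
    """Build a server-pool body for the three Workspace ONE Access nodes."""
    return {
        "display_name": "wsa-server-pool",
        "algorithm": "ROUND_ROBIN",
        "members": [
            {"ip_address": ip, "port": str(port)} for ip in node_ips
        ],
    }

def wsa_virtual_server(vip, pool_path):
    """Build a virtual-server body that fronts the cluster VIP."""
    return {
        "display_name": "wsa-virtual-server",
        "ip_address": vip,
        "ports": ["443"],
        "pool_path": pool_path,
    }

pool = wsa_lb_pool(["172.16.11.51", "172.16.11.52", "172.16.11.53"])
print(json.dumps(pool, indent=2))
```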
After the integration, information security and access control configurations for the integrated
SDDC products can be configured.
See VMware Cloud Foundation Validated Solutions for the design for specific vRealize Suite
components including identity management.
Deployment Type
You determine the deployment type, standard or clustered, according to the design objectives for availability and for the number of users that the system and integrated SDDC solutions must support.
You deploy Workspace ONE Access on the default management vSphere cluster.
Deployment Type: Standard (Recommended)
Description:
- Single node.
- NSX load balancer automatically deployed.
Benefits:
- Can be scaled out to a 3-node cluster behind an NSX load balancer.
- Can leverage vSphere HA for recovery after a failure occurs.
- Consumes fewer resources.
Drawbacks:
- Does not provide high availability for Identity Provider connectors.
Table 11-7. Workspace ONE Access Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-WSA-REQD-SEC-001
Design Requirement: Import certificate authority-signed certificates to the Locker repository for Workspace ONE Access product life cycle operations.
Justification: You can reference and use certificate authority-signed certificates during product life cycle operations, such as deployment and certificate replacement.
Implication: When using the API, you must specify the Locker ID for the certificate to be used in the JSON payload.
Table 11-7. Workspace ONE Access Design Requirements for VMware Cloud Foundation (continued)
Requirement ID: VCF-WSA-REQD-CFG-007
Design Requirement: If using clustered Workspace ONE Access, use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway.
Justification: During the deployment of Workspace ONE Access by using vRealize Suite Lifecycle Manager, SDDC Manager automates the configuration of an NSX load balancer for Workspace ONE Access to facilitate scale-out.
Implication: You must use the load balancer that is configured by SDDC Manager and the integration with vRealize Suite Lifecycle Manager.
Table 11-8. Workspace ONE Access Design Requirements for Stretched Clusters in VMware Cloud Foundation

Requirement ID: VCF-WSA-REQD-CFG-008
Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
Implication:
- If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually.
- Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.
Table 11-9. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID: VCF-WSA-REQD-CFG-010
Design Requirement: Configure the NTP settings on Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.
Table 11-10. Workspace ONE Access Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-WSA-RCMD-CFG-001
Design Recommendation: Protect all Workspace ONE Access nodes using vSphere HA.
Justification: Supports high availability for Workspace ONE Access.
Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.
Recommendation ID: VCF-WSA-RCMD-CFG-003
Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with the minimum of read-only access to Base DNs for users and groups as the service account for the Active Directory bind.
Justification: Provides the following access control features:
- Workspace ONE Access connects to the Active Directory with the minimum set of required permissions to bind and query the directory.
- You can introduce improved accountability in tracking request-response interactions between Workspace ONE Access and Active Directory.
Implication:
- You must manage the password life cycle of this account.
- If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector bind to each Active Directory domain over LDAP.
Recommendation ID: VCF-WSA-RCMD-CFG-004
Design Recommendation: Configure the directory synchronization to synchronize only groups required for the integrated SDDC solutions.
Justification:
- Limits the number of replicated groups required for each product.
- Reduces the replication interval for group information.
Implication: You must manage the groups from your enterprise directory selected for synchronization to Workspace ONE Access.
Recommendation ID: VCF-WSA-RCMD-CFG-007
Design Recommendation: Add a filter to the Workspace ONE Access directory settings to exclude users from the directory replication.
Justification: Limits the number of replicated users for Workspace ONE Access within the maximum scale.
Implication: To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.
Recommendation ID: VCF-WSA-RCMD-CFG-008
Design Recommendation: Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.
Justification: You can configure the minimum required and extended user attributes to synchronize directory user accounts for the Workspace ONE Access to be used as an authentication source for cross-instance vRealize Suite solutions.
Implication: User accounts in your organization's enterprise directory must have the following required attributes mapped:
- firstname, for example, givenname for Active Directory
- lastName, for example, sn for Active Directory
Recommendation ID: VCF-WSA-RCMD-CFG-009
Design Recommendation: Configure the Workspace ONE Access directory synchronization frequency to a recurring schedule, for example, 15 minutes.
Justification: Ensures that any changes to group memberships in the corporate directory are available for integrated solutions in a timely manner.
Implication: Schedule the synchronization interval to be longer than the time to synchronize from the enterprise directory. If users and groups are being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.
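The scheduling rule in the implication above can be expressed as a small helper: the recurring interval must exceed the observed synchronization time, or runs queue back-to-back and the process becomes continuous. The 15-minute default mirrors the example in the recommendation; the safety margin is an assumption.

```python
# Sketch of the directory-synchronization scheduling rule: pick a recurring
# interval longer than the time a full synchronization takes. The 2x margin
# is an assumed safety factor, not a documented value.

def choose_sync_interval(last_sync_minutes, default_minutes=15, margin=2.0):
    """Return a recurring interval (minutes) longer than the observed sync time."""
    return max(default_minutes, int(last_sync_minutes * margin))

print(choose_sync_interval(4))   # short syncs keep the 15-minute default -> 15
print(choose_sync_interval(20))  # long syncs stretch the interval -> 40
```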
Recommendation ID: VCF-WSA-RCMD-SEC-002
Design Recommendation: Configure a password policy for Workspace ONE Access local directory users, admin and configadmin.
Justification: You can set a policy for Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards. The password policy is applicable only to the local directory users and does not impact your organization directory.
Implication: You must set the policy in accordance with your organization policies and regulatory standards, as applicable. You must apply the password policy on the Workspace ONE Access cluster nodes.
Life cycle management of a VMware Cloud Foundation instance is the process of performing
patch updates or upgrades to the underlying management components.
- SDDC Manager: SDDC Manager performs its own life cycle management.
- NSX Local Manager: SDDC Manager uses the NSX upgrade coordinator service in the NSX Local Manager.
- NSX Edges: SDDC Manager uses the NSX upgrade coordinator service in NSX Manager.
- NSX Global Manager: You manually use the NSX upgrade coordinator service in the NSX Global Manager.
- vCenter Server: You use SDDC Manager for life cycle management of all vCenter Server instances.
- vRealize Suite Lifecycle Manager: vRealize Suite Lifecycle Manager performs its own life cycle management.
Table 12-2. Life Cycle Management Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-LCM-REQD-001
Design Requirement: Use SDDC Manager to perform the life cycle management of the following components:
- SDDC Manager
- NSX Manager
- NSX Edges
- vCenter Server
- ESXi
Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

Requirement ID: VCF-LCM-REQD-002
Design Requirement: Use vRealize Suite Lifecycle Manager to manage the lifecycle of the following components:
- vRealize Suite Lifecycle Manager
- Workspace ONE Access
Justification: vRealize Suite Lifecycle Manager automates the life cycle of vRealize Suite Lifecycle Manager and Workspace ONE Access.
Implication:
- You must deploy vRealize Suite Lifecycle Manager by using SDDC Manager.
- You must manually apply patches, updates, and hotfixes for Workspace ONE Access. Patches, updates, and hotfixes for Workspace ONE Access are not generally managed by vRealize Suite Lifecycle Manager.
Table 12-3. Lifecycle Management Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID: VCF-LCM-REQD-003
Design Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
Implication:
- You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure prior to the upgrade of the NSX Global Manager nodes.
- You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.

Requirement ID: VCF-LCM-REQD-004
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated in relation to the need to upgrade NSX Global Manager.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.

Requirement ID: VCF-LCM-REQD-005
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
After you deploy vRealize Log Insight by using vRealize Suite Lifecycle Manager in VMware Cloud Foundation mode, SDDC Manager configures vRealize Suite Lifecycle Manager logging to vRealize Log Insight over the log ingestion API. For information about on-premises vRealize Log Insight logging in VMware Cloud Foundation, see Intelligent Logging and Analytics for VMware Cloud Foundation.
n SSH
n SSH
ESXi:
- Direct Console User Interface (DCUI)
- ESXi Shell
- SSH
- VMware Host Client
SSH and the ESXi Shell are deactivated by default.
Method Description
For more information on password complexity, account lockout, or integration with additional identity providers, see Identity and Access Management for VMware Cloud Foundation.
Table 14-2. Account and Password Management in VMware Cloud Foundation (continued)

Account: NSX Global Manager admin
Password rotation: Manual, by using the NSX Global Manager UI or API.
Description:
- Local appliance account
- OS level, API, and application access
- Default expiry: 90 days
Table 14-3. Design Requirements for Account and Password Management for VMware Cloud Foundation

Requirement ID: VCF-ACTMGT-REQD-SEC-001
Design Requirement: Enable scheduled password rotation in SDDC Manager for all accounts supporting scheduled rotation.
Justification:
- Increases the security posture of your SDDC.
- Simplifies password management across your SDDC management components.
Implication: You must retrieve new passwords by using the API if you must use accounts interactively.
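Retrieving a rotated password through the API, as the implication above requires for interactive use, can be sketched as follows. The endpoint and field names follow the public SDDC Manager API (`/v1/credentials`), but treat them as assumptions and verify against the API reference for your VMware Cloud Foundation version; the FQDN and token are placeholders.

```python
# Sketch: build the SDDC Manager API request that lists credentials for a
# resource, so a rotated password can be read for interactive use. The
# endpoint name follows the public SDDC Manager API but should be verified
# against the API reference for your VCF version.

SDDC_MANAGER = "https://sddc-manager.example.com"  # placeholder FQDN

def credentials_request(resource_name, token):
    """Return the (URL, headers) pair for listing credentials of a resource."""
    url = f"{SDDC_MANAGER}/v1/credentials?resourceName={resource_name}"
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}
    return url, headers

url, headers = credentials_request("vcenter-1.example.com", "<api-token>")
print(url)
```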
Access to all management component interfaces must be over a Secure Socket Layer (SSL)
connection. During deployment, each component is assigned a certificate from a default signing
CA. To provide secure access to each component, replace the default certificate with a trusted
enterprise CA-signed certificate.
vRealize Suite Lifecycle Manager: default certificate signed by the management domain VMCA; replace by using SDDC Manager.
Note * To use enterprise CA-signed certificates with ESXi, the initial deployment of VMware Cloud Foundation must be done by using the API, providing the trusted root certificate.
Table 14-5. Certificate Management Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-SDDC-RCMD-SEC-001
Design Recommendation: Replace the default VMCA-signed certificate on all management virtual appliances with a certificate that is signed by an internal certificate authority.
Justification: Ensures that the communication to all management components is secure.
Implication: Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

Recommendation ID: VCF-SDDC-RCMD-SEC-002
Design Recommendation: Use a SHA-2 algorithm or higher for signed certificates.
Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
Implication: Not all certificate authorities support SHA-2 or higher.

Recommendation ID: VCF-SDDC-RCMD-SEC-003
Design Recommendation: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
Justification: SDDC Manager supports automated SSL certificate lifecycle management rather than requiring a series of manual steps.
Implication: Certificate management for NSX Global Manager instances must be done manually.
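A pre-issuance check for the SHA-2 recommendation can be sketched as a simple classification of the signature algorithm name. The algorithm-name list below is illustrative, not exhaustive; extend it to match the algorithms your certificate authority actually offers.

```python
# Sketch: decide whether a certificate signature algorithm satisfies
# recommendation VCF-SDDC-RCMD-SEC-002 (SHA-2 or higher). The name lists
# are illustrative, not exhaustive.

DEPRECATED = {"md5", "sha1"}
ACCEPTED = {"sha256", "sha384", "sha512"}  # SHA-2 family and stronger

def signature_algorithm_ok(name):
    """Return True when the hash portion of the algorithm is SHA-2 or higher."""
    digest = name.lower().replace("withrsaencryption", "").replace("-", "")
    return digest in ACCEPTED and digest not in DEPRECATED

print(signature_algorithm_ok("sha256WithRSAEncryption"))  # -> True
print(signature_algorithm_ok("sha1WithRSAEncryption"))    # -> False
```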
n vRealize Suite Lifecycle Manager Design Elements for VMware Cloud Foundation
For full design details, see Architecture Models and Workload Domain Types in VMware Cloud
Foundation.
For full design details, see Chapter 4 External Services Design for VMware Cloud Foundation.
Table 15-3. External Services Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-EXT-REQD-NET-001
Design Requirement: Allocate statically assigned IP addresses and host names for all workload domain components.
Justification: Ensures stability across the VMware Cloud Foundation instance, and makes it simpler to maintain, track, and implement a DNS configuration.
Implication: You must provide precise IP address management.

Requirement ID: VCF-EXT-REQD-NET-002
Design Requirement: Configure forward and reverse DNS records for all workload domain components.
Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance.
Implication: You must provide DNS records for each component.
For full design details, see Chapter 5 Physical Network Infrastructure Design for VMware Cloud
Foundation.
Table 15-4. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-NET-REQD-CFG-004
Design Requirement: Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:
- Overlay (Geneve)
- vSAN
- vSphere vMotion
Justification:
- Improves traffic throughput.
- Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.
- Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
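The arithmetic behind this requirement can be made explicit. The ICMP/IP overhead figures below are the standard IPv4 values; the Geneve encapsulation cost is an assumed upper bound for outer headers plus options, so confirm the exact overhead for your deployment before relying on these numbers.

```python
# Sketch of the MTU arithmetic behind requirement VCF-NET-REQD-CFG-004.
# GENEVE_OVERHEAD is an assumed upper bound for the outer headers and
# Geneve options; ICMP_IP_OVERHEAD is the standard IPv4 value.

GENEVE_OVERHEAD = 100   # assumed encapsulation cost in bytes
ICMP_IP_OVERHEAD = 28   # IPv4 header (20) + ICMP header (8)

def max_inner_mtu(physical_mtu):
    """Largest workload-frame MTU that fits after Geneve encapsulation."""
    return physical_mtu - GENEVE_OVERHEAD

def ping_payload(path_mtu):
    """Payload size for a don't-fragment ping that exactly fills the path MTU."""
    return path_mtu - ICMP_IP_OVERHEAD

print(max_inner_mtu(1700))  # -> 1600, the Geneve minimum the design calls out
print(ping_payload(9000))   # -> 8972, for validating a jumbo-frame path
```

A don't-fragment ping with the computed payload size is a common way to validate that every hop on the path honors the configured MTU.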
Table 15-5. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID: VCF-NET-REQD-CFG-005
Design Requirement: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types:
- NSX Edge RTEP
Justification:
- Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput.
- Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers to support the same MTU packet size.

Requirement ID: VCF-NET-REQD-CFG-006
Design Requirement: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms.
Justification: A latency lower than 150 ms is required for the following features:
- Cross vCenter Server vMotion
- NSX Federation
Implication: None.
Table 15-6. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-NET-RCMD-CFG-001
Design Recommendation: Use two ToR switches for each rack.
Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
Implication: Requires two ToR switches per rack, which might increase costs.
Recommendation ID: VCF-NET-RCMD-CFG-005
Design Recommendation: Set the lease duration for the DHCP scope for the host overlay network to at least 7 days.
Justification: The IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server.
- Host overlay VMkernel ports do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.
- DHCP simplifies the configuration of a default gateway for host overlay VMkernel ports if hosts within the same cluster are on separate Layer 2 domains.
Implication: Requires configuration and management of a DHCP server.
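The 7-day minimum lease can be rendered as a concrete scope definition. The sketch below emits an ISC-dhcpd-style scope; the subnet, range, and gateway values are placeholders, and only the 604,800-second (7-day) lease comes from the recommendation itself.

```python
# Sketch of recommendation VCF-NET-RCMD-CFG-005 expressed as an ISC-dhcpd
# style scope. Subnet, range, and gateway values are placeholders; only the
# 7-day (604800-second) minimum lease comes from the recommendation.

LEASE_DAYS = 7
LEASE_SECONDS = LEASE_DAYS * 24 * 60 * 60  # 604800

def host_overlay_scope(subnet, netmask, range_start, range_end, gateway):
    """Render a DHCP scope for the host overlay network with a 7-day lease."""
    return (
        f"subnet {subnet} netmask {netmask} {{\n"
        f"  range {range_start} {range_end};\n"
        f"  option routers {gateway};\n"
        f"  default-lease-time {LEASE_SECONDS};\n"
        f"  max-lease-time {LEASE_SECONDS};\n"
        f"}}\n"
    )

print(host_overlay_scope("172.16.14.0", "255.255.255.0",
                         "172.16.14.10", "172.16.14.200", "172.16.14.1"))
```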
Recommendation ID: VCF-NET-RCMD-CFG-006
Design Recommendation: Designate the trunk ports connected to ESXi NICs as trunk PortFast.
Justification: Reduces the time to transition ports over to the forwarding state.
Implication: Although this design does not use the STP, switches usually have STP configured by default.
Recommendation ID: VCF-NET-RCMD-CFG-007
Design Recommendation: Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks:
- Management
- Edge overlay
Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly-available gateway. Otherwise, a failure in the Layer 3 gateway will cause disruption in the traffic in the SDN setup.
Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.
Table 15-7. Leaf-Spine Physical Network Design Recommendations for Stretched Clusters with VMware Cloud Foundation

Recommendation ID: VCF-NET-RCMD-CFG-008
Design Recommendation: Provide high availability for the DHCP server or DHCP Helper used for VLANs requiring DHCP services that are stretched between availability zones.
Justification: Ensures that a failover operation of a single availability zone will not impact DHCP availability.
Implication: Requires configuration of a highly available DHCP server.
Table 15-8. Leaf-Spine Physical Network Design Recommendations for NSX Federation in
VMware Cloud Foundation
After you set up the physical storage infrastructure, the configuration tasks for most design
decisions are automated in VMware Cloud Foundation. You must perform the configuration
manually only for a limited number of design elements as noted in the design implication.
For full design details, see Chapter 6 vSAN Design for VMware Cloud Foundation.
Table 15-10. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation

Requirement ID: VCF-VSAN-REQD-CFG-003
Design Requirement: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

Requirement ID: VCF-VSAN-REQD-CFG-004
Design Requirement: Configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.
Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure a copy of vSAN data is always available even when an availability zone goes offline.
Implication: You must provide additional raw storage when the site mirroring - stretched cluster option is selected and fault domains are enabled.

Requirement ID: VCF-VSAN-WTN-REQD-CFG-001
Design Requirement: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
Implication: You must provide a third physically-separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.

Requirement ID: VCF-VSAN-WTN-REQD-CFG-002
Design Requirement: Deploy a witness appliance of a size that corresponds to the required capacity of the cluster.
Justification: Ensures the witness appliance is sized to support the projected workload storage consumption.
Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

Requirement ID: VCF-VSAN-WTN-REQD-CFG-003
Design Requirement: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

Requirement ID: VCF-VSAN-WTN-REQD-CFG-004
Design Requirement: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
Implication: Requires precise IP address management.

Requirement ID: VCF-VSAN-WTN-REQD-CFG-005
Design Requirement: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
Implication: You must provide DNS records for the vSAN witness appliance.
VCF-VSAN-WTN-REQD-CFG-006
Design Requirement: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
Justification: Prevents any failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance and the ESXi hosts in both availability zones and workload domain vCenter Server.
Implication:
- An operational NTP service must be available in the environment.
- All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.
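As an illustration of this requirement, the witness appliance runs ESXi, so its NTP client can be configured from the ESXi Shell. This is a sketch only, not the procedure the guide prescribes; the server name is a placeholder, and the `esxcli system ntp` namespace assumes ESXi 7.0 or later:

```shell
# Hedged sketch: configure and enable NTP on the vSAN witness appliance.
# "ntp.example.com" is a placeholder for your internal NTP source.
esxcli system ntp set --server=ntp.example.com --enabled=true

# Verify the applied configuration.
esxcli system ntp get
```

The internal time source must be reachable from the witness site through any intervening firewalls, per the implications above.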
VCF-VSAN-WTN-REQD-CFG-007
Design Requirement: Configure an individual vSAN storage policy for each stretched cluster.
Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
Implication: You must configure additional vSAN storage policies.
VCF-VSAN-RCMD-CFG-001
Design Recommendation: Ensure that the storage I/O controller that is running the vSAN disk groups is capable and has a minimum queue depth of 256 set.
Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the right queue depths for vSAN.
Implication: Limits the number of compatible I/O controllers that can be used for storage.
VCF-VSAN-RCMD-CFG-002
Design Recommendation: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
Justification: Running non-vSAN disks, for example, VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.
VCF-VSAN-RCMD-CFG-003
Design Recommendation: Configure vSAN in all-flash configuration in the default workload domain cluster.
Justification: Meets the performance needs of the default workload domain cluster.
Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.
Table 15-11. vSAN Design Recommendations for VMware Cloud Foundation (continued)
VCF-VSAN-RCMD-CFG-004
Design Recommendation: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
Implication: None.
VCF-VSAN-RCMD-CFG-005
Design Recommendation: On the vSAN datastore, ensure that at least 30% of free space is always available.
Justification: This free space, called slack space, is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
Implication: Increases the amount of available storage needed.
VCF-VSAN-RCMD-CFG-006
Design Recommendation: Configure vSAN with a minimum of two disk groups per ESXi host.
Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
Implication: Using multiple disk groups requires more disks in each ESXi host.
VCF-VSAN-RCMD-CFG-007
Design Recommendation: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
Justification: Provides enough cache for both hybrid and all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
Implication: Using larger flash disks can increase the initial host cost.
VCF-VSAN-RCMD-CFG-008
Design Recommendation: Use the default VMware vSAN storage policy.
Justification:
- Provides the level of redundancy that is needed in the workload domain cluster.
- Provides the level of performance that is enough for the individual workloads.
Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.
VCF-VSAN-RCMD-CFG-009
Design Recommendation: Leave the default virtual machine swap file as a sparse object on vSAN.
Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
Implication: None.
VCF-VSAN-RCMD-CFG-010
Design Recommendation: Use the existing vSphere Distributed Switch instance in the workload domain cluster.
Justification: Provides guaranteed performance for vSAN traffic in a contention-free network by using existing networking components.
Implication: All traffic paths are shared over common uplinks.
VCF-VSAN-RCMD-CFG-011
Design Recommendation: Configure jumbo frames on the VLAN for vSAN traffic.
Justification:
- Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic.
- Reduces the CPU overhead resulting from high network usage.
Implication: Every device in the network must support jumbo frames.
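After enabling jumbo frames, the end-to-end path can be verified from any ESXi host. A minimal sketch, assuming vmk3 is the vSAN VMkernel adapter and the target address is a peer host's vSAN IP (both placeholders):

```shell
# Send a 9000-byte frame (8972-byte ICMP payload + 28 bytes of headers)
# with the "do not fragment" bit set, so an undersized hop on the path
# fails loudly instead of silently fragmenting.
vmkping -I vmk3 -d -s 8972 192.168.13.12
```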
Table 15-12. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation
VCF-VSAN-WTN-RCMD-CFG-001
Design Recommendation: Configure the vSAN witness appliance to use the first VMkernel adapter, that is the management interface, for vSAN witness traffic.
Justification: Removes the requirement to have static routes on the witness appliance as witness traffic is routed over the management network.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.
VCF-VSAN-WTN-RCMD-CFG-002
Design Recommendation: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation provides the following benefits:
- Removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site.
- Removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.
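Witness traffic separation is tagged per VMkernel adapter on the data-site hosts. A sketch, assuming vmk0 is the management adapter:

```shell
# Tag the management VMkernel adapter for vSAN witness traffic.
esxcli vsan network ip add -i vmk0 -T=witness

# Confirm the traffic type assigned to each vSAN-enabled interface.
esxcli vsan network list
```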
For full design details, see ESXi Design for VMware Cloud Foundation.
VCF-ESX-REQD-CFG-002
Design Requirement: Ensure each ESXi host matches the required CPU, memory, and storage specification.
Justification: Ensures workloads will run without contention even during failure and maintenance conditions.
Implication: Assemble the server specification and number according to the sizing in VMware Cloud Foundation Planning and Preparation Workbook, which is based on projected deployment size.
VCF-ESX-REQD-NET-001
Design Requirement: Place the ESXi hosts in each management domain cluster on the VLAN-backed management network segment for vCenter Server and management components for NSX.
Justification: Reduces the number of VLANs needed because a single VLAN can be allocated to both the ESXi hosts, vCenter Server, and management components for NSX.
Implication: Separation of the physical VLAN between ESXi hosts and other management components for security reasons is missing.
VCF-ESX-REQD-NET-002
Design Requirement: Place the ESXi hosts in each VI workload domain cluster on a VLAN-backed management network segment other than that used by the management domain.
Justification: Physical VLAN security separation between VI workload domain ESXi hosts and other management components in the management domain is achieved.
Implication: A new VLAN and a new subnet are required for the VI workload domain management network.
VCF-ESX-RCMD-CFG-001
Design Recommendation: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.
Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks, and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List aligned to the ESXi version specified in VMware Cloud Foundation 5.0 Release Notes.
VCF-ESX-RCMD-CFG-002
Design Recommendation: Allocate hosts with uniform configuration across the default management vSphere cluster.
Justification: A balanced cluster has these advantages:
- Predictable performance even during hardware failures
- Minimal impact of resynchronization or rebuild operations on performance
Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per cluster basis.
VCF-ESX-RCMD-CFG-003
Design Recommendation: When sizing CPU, do not consider multithreading technology and associated performance gains.
Justification: Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.
Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.
VCF-ESX-RCMD-CFG-004
Design Recommendation: Install and configure all ESXi hosts in the default management cluster to boot using a 128 GB device or greater.
Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
Implication: None.
VCF-ESX-RCMD-CFG-006
Design Recommendation: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
Justification: Simplifies the configuration process.
Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.
VCF-ESX-RCMD-NET-001
Design Recommendation: Place ESXi hosts for each VI workload domain on separate VLAN-backed management network segments.
Justification: Physical VLAN security separation between ESXi hosts in different VI workload domains is achieved.
Implication: A new workload domain management VLAN and subnet are required for each new VI workload domain.
VCF-ESX-RCMD-SEC-001
Design Recommendation: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy "Start and stop manually".
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
Implication: You must activate SSH access manually for troubleshooting or support activities as VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.
VCF-ESX-RCMD-SEC-002
Design Recommendation: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain.
Justification:
- Ensures compliance with the vSphere Security Configuration Guide and with security best practices.
- Turns on the warning message that appears in the vSphere Client every time SSH access is activated on an ESXi host.
Implication: You must suppress SSH enablement warning messages manually when performing troubleshooting or support activities.
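On an individual host, both SSH recommendations can be applied from the ESXi Shell. A hedged sketch using standard ESXi commands (run per host, or scripted across the domain):

```shell
# VCF-ESX-RCMD-SEC-001: stop the SSH service; the default
# "Start and stop manually" startup policy is left in place.
vim-cmd hostsvc/stop_ssh

# VCF-ESX-RCMD-SEC-002: keep warning suppression off (0) so the vSphere
# Client warns whenever SSH is temporarily activated.
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 0
```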
The configuration tasks for most design requirements and recommendations are automated
in VMware Cloud Foundation. You must perform the configuration manually only for a limited
number of decisions as noted in the design implication.
For full design details, see vCenter Server Design for VMware Cloud Foundation.
Table 15-15. vCenter Server Design Requirements for VMware Cloud Foundation (continued)
VCF-VCS-REQD-NET-001
Design Requirement: Place all workload domain vCenter Server appliances on the management VLAN network segment of the management domain.
Justification: Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX management components.
Implication: None.
Table 15-16. vCenter Server Design Recommendations for VMware Cloud Foundation
VCF-VCS-RCMD-CFG-002
Design Recommendation: Deploy a vCenter Server appliance with the appropriate storage size.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the API.
VCF-VCS-RCMD-CFG-003
Design Recommendation: Protect workload domain vCenter Server appliances by using vSphere HA.
Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
Implication: vCenter Server becomes unavailable during a vSphere HA failover.
VCF-VCS-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.
Table 15-17. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware
Cloud Foundation
VCF-VCS-REQD-SSO-STD-001
Design Requirement: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
Implication:
- Only one vCenter Single Sign-On domain exists.
- The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15 instances. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.
Table 15-19. Design Requirements for Multiple vCenter Server Instance - Multiple vCenter Single
Sign-On Domain Topology for VMware Cloud Foundation
VCF-VCS-REQD-SSO-ISO-001
Design Requirement: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
Justification:
- Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
- Supports up to 25 workload domains.
Implication:
- Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials.
- You must manage password rotation for each vCenter Single Sign-On domain separately.
For full design details, see Logical vSphere Cluster Design for VMware Cloud Foundation.
Table 15-20. vSphere Cluster Design Requirements for VMware Cloud Foundation
VCF-CLS-REQD-LCM-001
Design Requirement: Use baselines as the life cycle management method for the management domain.
Justification: vSphere Lifecycle Manager images are not supported for the management domain.
Implication: You must manage the life cycle of firmware and vendor add-ons manually.
Table 15-21. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware
Cloud Foundation
VCF-CLS-REQD-CFG-007
Design Requirement: Enable the "Override default gateway for this adapter" setting on the vSAN VMkernel adapters on all ESXi hosts.
Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather than through the management gateway.
Implication: vSAN networks across availability zones must have a route to each other.
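The equivalent per-adapter gateway override can be sketched with esxcli; vmk2 and all addresses are placeholders, and the -g/--gateway option assumes ESXi 6.5 or later:

```shell
# Static IPv4 configuration on the vSAN VMkernel adapter with an
# overridden default gateway on the vSAN network, instead of the
# host's management default gateway.
esxcli network ip interface ipv4 set -i vmk2 -t static \
  -I 172.16.13.101 -N 255.255.255.0 -g 172.16.13.1
```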
VCF-CLS-REQD-CFG-008
Design Requirement: Create a host group for each availability zone and add the ESXi hosts in the zone to the respective group.
Justification: Makes it easier to manage which virtual machines run in which availability zone.
Implication: You must create and maintain VM-Host DRS group rules.
VCF-CLS-REQD-LCM-002
Design Requirement: Create workload domains selecting baselines as the life cycle management method.
Justification: vSphere Lifecycle Manager images are not supported for vSAN stretched clusters.
Implication: You must manage the life cycle of firmware and vendor add-ons manually.
Table 15-22. vSphere Cluster Design Recommendations for VMware Cloud Foundation
VCF-CLS-RCMD-CFG-001
Design Recommendation: Use vSphere HA to protect all virtual machines against failures.
Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be restarted on those hosts in the event of a host outage.
VCF-CLS-RCMD-CFG-002
Design Recommendation: Set host isolation response to Power Off and restart VMs in vSphere HA.
Justification: vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.
Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.
VCF-CLS-RCMD-CFG-006
Design Recommendation: Enable vSphere DRS on all clusters, using the default fully automated mode with medium threshold.
Justification: Provides the best trade-off between load balancing and unnecessary migrations with vSphere vMotion.
Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts might be difficult to determine.
VCF-CLS-RCMD-CFG-007
Design Recommendation: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: You must enable EVC only if the clusters contain hosts with CPUs from the same vendor. You must enable EVC on the default management domain cluster during bringup.
VCF-CLS-RCMD-CFG-008
Design Recommendation: Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.
Justification: Supports cluster upgrades without virtual machine downtime.
Implication: None.
VCF-CLS-RCMD-LCM-001
Design Recommendation: Use images as the life cycle management method for VI workload domains that do not require vSAN stretched clusters.
Justification: vSphere Lifecycle Manager images simplify the management of firmware and vendor add-ons.
Implication: You cannot add any stretched cluster to a VI workload domain that uses vSphere Lifecycle Manager images.
Table 15-23. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with
VMware Cloud Foundation
VCF-CLS-RCMD-CFG-009
Design Recommendation: Increase the admission control percentage to half of the ESXi hosts in the cluster.
Justification: Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs.
Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use. If you add more ESXi hosts to the default management cluster, add them in pairs, one per availability zone.
VCF-CLS-RCMD-CFG-010
Design Recommendation: Create a virtual machine group for each availability zone and add the VMs in the zone to the respective group.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must add virtual machines to the allocated group manually.
VCF-CLS-RCMD-CFG-011
Design Recommendation: Create a should-run-on-hosts-in-group VM-Host affinity rule to run each group of virtual machines on the respective group of hosts in the same availability zone.
Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
Implication: You must manually create the rules.
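The host group, VM group, and should-run rule pattern from VCF-CLS-REQD-CFG-008 and VCF-CLS-RCMD-CFG-010/011 can be sketched with the open-source govc CLI (an assumption for illustration, not the tool the guide prescribes); all cluster, host, VM, and group names are placeholders:

```shell
# Connection details for vCenter Server (placeholders).
export GOVC_URL='https://administrator@vsphere.local:password@vcenter.example.com'
export GOVC_INSECURE=1

# Host group and VM group for the first availability zone.
govc cluster.group.create -cluster mgmt-cluster -name az1-hosts -host esx01 esx02
govc cluster.group.create -cluster mgmt-cluster -name az1-vms -vm vm01 vm02

# Should-run-on-hosts-in-group rule (non-mandatory VM-Host affinity),
# so vSphere HA can still restart VMs in the other zone during an outage.
govc cluster.rule.create -cluster mgmt-cluster -name az1-should-run \
  -enable -vm-host -vm-group az1-vms -host-affine-group az1-hosts
```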
For full design details, see vSphere Networking Design for VMware Cloud Foundation.
Table 15-24. vSphere Networking Design Recommendations for VMware Cloud Foundation
VCF-VDS-RCMD-CFG-001
Design Recommendation: Use a single vSphere Distributed Switch per cluster.
Justification:
- Reduces the complexity of the network design.
- Reduces the size of the fault domain.
Implication: Increases the number of vSphere Distributed Switches that must be managed.
VCF-VDS-RCMD-CFG-002
Design Recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.
Justification:
- Supports the MTU size required by system traffic types.
- Improves traffic throughput.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
VCF-VDS-RCMD-DPG-001
Design Recommendation: Use ephemeral port binding for the management port group.
Justification: Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch.
Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.
VCF-VDS-RCMD-DPG-002
Design Recommendation: Use static port binding for all non-management port groups.
Justification: Static binding ensures a virtual machine connects to the same port on the vSphere Distributed Switch. This allows for historical data and port-level monitoring.
Implication: None.
VCF-VDS-RCMD-NIO-001
Design Recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.
Justification: Increases resiliency and performance of the network.
Implication: If configured incorrectly, Network I/O Control might impact network performance for critical traffic types.
VCF-VDS-RCMD-NIO-002
Design Recommendation: Set the share value for management traffic to Normal.
Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
Implication: None.
VCF-VDS-RCMD-NIO-003
Design Recommendation: Set the share value for vSphere vMotion traffic to Low.
Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
Implication: During times of network contention, vMotion takes longer than usual to complete.
VCF-VDS-RCMD-NIO-004
Design Recommendation: Set the share value for virtual machines to High.
Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
Implication: None.
VCF-VDS-RCMD-NIO-006
Design Recommendation: Set the share value for other traffic types to Low.
Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority.
Implication: None.
For full design details, see Chapter 8 NSX Design for VMware Cloud Foundation.
VCF-NSX-LM-REQD-CFG-002
Design Requirement: Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the workload domain.
Justification: Supports high availability of the NSX Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Manager nodes.
Table 15-26. NSX Manager Design Recommendations for VMware Cloud Foundation
VCF-NSX-LM-RCMD-CFG-001
Design Recommendation: Deploy appropriately sized nodes in the NSX Manager cluster for the workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Medium, and for VI workload domains is Large.
VCF-NSX-LM-RCMD-CFG-002
Design Recommendation: Create a virtual IP (VIP) address for the NSX Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Manager.
Implication:
- The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
- When using the VIP address feature, all NSX Manager nodes must be deployed on the same Layer 2 network.
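In practice the VIP is set from the NSX Manager UI; the same operation is also exposed by the NSX REST API, sketched here with curl (host name, credentials, and the VIP address are placeholders):

```shell
# Assign the cluster virtual IP through the NSX Manager API.
curl -k -u 'admin:password' -X POST \
  'https://nsx01.example.com/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=172.16.11.66'

# Read back the configured virtual IP.
curl -k -u 'admin:password' \
  'https://nsx01.example.com/api/v1/cluster/api-virtual-ip'
```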
VCF-NSX-LM-RCMD-CFG-003
Design Recommendation: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX Manager appliances.
Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Manager appliances continue running if an ESXi host failure occurs.
VCF-NSX-LM-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Manager appliance to high.
Justification:
- NSX Manager implements the control plane for virtual network segments. vSphere HA restarts the NSX Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established.
- Setting the restart priority to high reserves the highest priority for flexibility for adding services that must be started before NSX Manager.
Implication: If the restart priority for another management appliance is set to highest, the connectivity delay for management appliances will be longer.
Table 15-27. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud
Foundation
VCF-NSX-LM-RCMD-CFG-006
Design Recommendation: Add the NSX Manager appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Manager appliances are powered on a host in the primary availability zone.
Implication: Done automatically by VMware Cloud Foundation when configuring a vSAN stretched cluster.
Table 15-28. NSX Global Manager Design Requirements for VMware Cloud Foundation
Table 15-29. NSX Global Manager Design Recommendations for VMware Cloud Foundation
VCF-NSX-GM-RCMD-CFG-001
Design Recommendation: Deploy three NSX Global Manager nodes for the workload domain to support NSX Federation across VMware Cloud Foundation instances.
Justification: Provides high availability for the NSX Global Manager cluster.
Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Global Manager nodes.
VCF-NSX-GM-RCMD-CFG-003
Design Recommendation: Create a virtual IP (VIP) address for the NSX Global Manager cluster for the workload domain.
Justification: Provides high availability of the user interface and API of NSX Global Manager.
Implication:
- The VIP address feature provides high availability only. It does not load-balance requests across the cluster.
- When using the VIP address feature, all NSX Global Manager nodes must be deployed on the same Layer 2 network.
VCF-NSX-GM-RCMD-CFG-004
Design Recommendation: Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global Manager appliances.
Justification: Keeps the NSX Global Manager appliances running on different ESXi hosts for high availability.
Implication: You must allocate at least four physical hosts so that the three NSX Global Manager appliances continue running if an ESXi host failure occurs.
VCF-NSX-GM-RCMD-CFG-005
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Global Manager appliance to medium.
Justification:
- NSX Global Manager implements the management plane for global segments and firewalls. NSX Global Manager is not required for control plane and data plane connectivity.
- Setting the restart priority to medium reserves the high priority for services that impact the NSX control or data planes.
Implication:
- Management of NSX global components will be unavailable until the NSX Global Manager virtual machines restart.
- The NSX Global Manager cluster is deployed in the management domain, where the total number of virtual machines is limited and where it competes with other management components for restart priority.
VCF-NSX-GM-RCMD-CFG-007
Design Recommendation: Set the NSX Global Manager cluster in the second VMware Cloud Foundation instance as standby for the workload domain.
Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud Foundation instance if a failure in the first instance occurs.
Implication: Must be done manually.
Table 15-30. NSX Global Manager Design Recommendations for Stretched Clusters in VMware
Cloud Foundation
VCF-NSX-GM-RCMD-CFG-008
Design Recommendation: Add the NSX Global Manager appliances to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the NSX Global Manager appliances are powered on a host in the primary availability zone.
Implication: Done automatically by VMware Cloud Foundation when stretching a cluster.
Table 15-31. NSX Edge Design Requirements for VMware Cloud Foundation (continued)
VCF-NSX-EDGE-REQD-CFG-003
Design Requirement: Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.
Justification: A dedicated edge overlay network provides support for edge mobility in support of advanced deployments such as multiple availability zones or multi-rack clusters.
Implication:
- You must have routing between the VLANs for edge overlay and host overlay.
- You must allocate another VLAN in the data center infrastructure for edge overlay.
Table 15-32. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation
VCF-NSX-EDGE-REQD-CFG-005
Design Requirement: Allocate a separate VLAN for edge RTEP overlay that is different from the edge overlay VLAN.
Justification: The RTEP network must be on a VLAN that is different from the edge overlay VLAN. This is an NSX requirement that provides support for configuring different MTU size per network.
Implication: You must allocate another VLAN in the data center infrastructure.
Table 15-33. NSX Edge Design Recommendations for VMware Cloud Foundation
VCF-NSX-EDGE-RCMD-CFG-001
Design Recommendation: Use appropriately sized NSX Edge virtual appliances.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: You must provide sufficient compute resources to support the chosen appliance size.
VCF-NSX-EDGE-RCMD-CFG-002
Design Recommendation: Deploy the NSX Edge virtual appliances to the default vSphere cluster of the workload domain, sharing the cluster between the workloads and the edge appliances.
Justification:
- Simplifies the configuration and minimizes the number of ESXi hosts required for initial deployment.
- Keeps the management components located in the same domain and cluster, isolated from customer workloads.
Implication: Workloads and NSX Edges share the same compute resources.
VCF-NSX-EDGE-RCMD-CFG-003
Design Recommendation: Deploy two NSX Edge appliances in an edge cluster in the default vSphere cluster of the workload domain.
Justification: Creates the NSX Edge cluster for satisfying the requirements for availability and scale.
Implication: None.
VCF-NSX-EDGE-RCMD-CFG-004
Design Recommendation: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX Edge cluster.
Justification: Keeps the NSX Edge nodes running on different ESXi hosts for high availability.
Implication: None.
VCF-NSX-EDGE-RCMD-CFG-005
Design Recommendation: In vSphere HA, set the restart priority policy for each NSX Edge appliance to high.
Justification:
- The NSX Edge nodes are part of the north-south data path for overlay segments. vSphere HA restarts the NSX Edge appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the edge nodes are offline lose connectivity only for a short time.
- Setting the restart priority to high reserves the highest priority for future needs.
Implication: If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer.
Table 15-34. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-001
Design Requirement: To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs. The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs and each Edge node in the cluster has an interface on each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.
Implication: Additional VLANs are required.
VCF-NSX-BGP-REQD-CFG-003
Design Requirement: Create a VLAN transport zone for edge uplink traffic.
Justification: Enables the configuration of VLAN segments on the N-VDS in the edge nodes.
Implication: Additional VLAN transport zones are required if the edge nodes are not connected to the same top of rack switch pair.
Table 15-35. BGP Routing Design Requirements for VMware Cloud Foundation (continued)
VCF-NSX-BGP-REQD-CFG-004
Design Requirement: Deploy a Tier-1 gateway and connect it to the Tier-0 gateway.
Justification: Creates a two-tier routing architecture. Abstracts the NSX logical components which interact with the physical data center from the logical components which provide SDN services.
Implication: A Tier-1 gateway can only be connected to a single Tier-0 gateway. In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.
Table 15-36. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-006
Design Requirement: Extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.
Justification: Because the NSX Edge nodes will fail over between the availability zones, ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone the NSX Edge nodes are presently in.
Implication: You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.
VCF-NSX-BGP-REQD-CFG-007
Design Requirement: Provide this SVI configuration on the top of rack switches:
- In the second availability zone, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.
- Make the top of rack switch SVI in both availability zones part of a common stretched Layer 2 network between the availability zones.
Justification: Enables the communication of the NSX Edge nodes to the top of rack switches in both availability zones over the same uplink VLANs.
Implication: You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.
VCF-NSX-BGP-REQD-CFG-008
Design Requirement: Provide this VLAN configuration:
- Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack switches or leaf switches).
- The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each NSX Edge node has an interface on each VLAN.
Justification: Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.
Implication:
- Extra VLANs are required.
- Requires stretching uplink VLANs between availability zones.
Table 15-36. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-009
Design Requirement: Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.
Justification: Used in a route map to prepend a path to one or more autonomous system (AS-path prepend) for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.
VCF-NSX-BGP-REQD-CFG-010
Design Requirement: Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice.
Justification:
- Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
- Ensures that all ingress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
VCF-NSX-BGP-REQD-CFG-011
Design Requirement: Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.
Justification: Used in a route map to configure local-preference on the learned default route for BGP neighbors in the second availability zone.
Implication: You must manually create an IP prefix list that is identical to the default one.
Table 15-36. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-012
Design Requirement: Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0 and assign a lower local-preference, for example, 80, to the learned default route and a lower local-preference, for example, 90, to any other routes learned.
Justification:
- Used for configuring neighbor relationships with the Layer 3 devices in the second availability zone.
- Ensures that all egress traffic passes through the first availability zone.
Implication: You must manually create the route map. The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
VCF-NSX-BGP-REQD-CFG-013
Design Requirement: Configure the neighbors of the second availability zone to use the route maps as In and Out filters respectively.
Justification: Makes the path in and out of the second availability zone less preferred because the AS path is longer and the local preference is lower. As a result, all traffic passes through the first zone.
Implication: The two NSX Edge nodes will route north-south traffic through the second availability zone only if the connection to their BGP neighbors in the first availability zone is lost, for example, if a failure of the top of rack switch pair or in the availability zone occurs.
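The combined effect of VCF-NSX-BGP-REQD-CFG-009 through VCF-NSX-BGP-REQD-CFG-013 can be sketched with the first two steps of BGP best-path selection, which are exactly what the route maps manipulate: higher local preference wins, then shorter AS path. The neighbor names and values are illustrative; 100 is the conventional default local preference:

```python
from dataclasses import dataclass

@dataclass
class BgpPath:
    neighbor: str
    local_pref: int   # higher wins; lowered by the route map-in for AZ2
    as_path_len: int  # shorter wins; lengthened by AS-path prepend for AZ2

def best_path(paths):
    """First two steps of BGP best-path selection relied on by the route maps."""
    return max(paths, key=lambda p: (p.local_pref, -p.as_path_len))

# AZ1 neighbors advertise routes with the default local preference (100) and an
# unmodified AS path; AZ2 paths carry the prepended AS path and the lower local
# preference (for example, 80) applied by the route map-in.
az1 = BgpPath("tor-az1", local_pref=100, as_path_len=1)
az2 = BgpPath("tor-az2", local_pref=80, as_path_len=3)
```

With both zones reachable, the AZ1 path wins; if the AZ1 neighbors are lost, the AZ2 path is the only candidate and traffic fails over, matching the implication text above.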
Table 15-37. BGP Routing Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-NSX-BGP-REQD-CFG-014
Design Requirement: Extend the Tier-0 gateway to the second VMware Cloud Foundation instance.
Justification:
- Supports ECMP north-south routing on all nodes in the NSX Edge cluster.
- Enables support for cross-instance Tier-1 gateways and cross-instance network segments.
Implication: The Tier-0 gateway deployed in the second instance is removed.
Table 15-37. BGP Routing Design Requirements for NSX Federation in VMware Cloud
Foundation (continued)
VCF-NSX-BGP-REQD-CFG-018
Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the stretched Tier-1 gateway. Set the first VMware Cloud Foundation instance as primary and the second instance as secondary.
Justification:
- Enables cross-instance network span between the first and second VMware Cloud Foundation instances.
- Enables deterministic ingress and egress traffic for the cross-instance network.
- If a VMware Cloud Foundation instance failure occurs, enables deterministic failover of the Tier-1 traffic flow.
- During the recovery of the inaccessible VMware Cloud Foundation instance, enables deterministic failback of the Tier-1 traffic flow, preventing unintended asymmetrical routing.
- Eliminates the need to use BGP attributes in the first and second VMware Cloud Foundation instances to influence location preference and failover.
Implication: You must manually fail over and fail back the cross-instance network from the standby NSX Global Manager.
VCF-NSX-BGP-REQD-CFG-019
Design Requirement: Assign the NSX Edge cluster in each VMware Cloud Foundation instance to the local Tier-1 gateway for that VMware Cloud Foundation instance.
Justification:
- Enables instance-specific networks to be isolated to their specific instances.
- Enables deterministic flow of ingress and egress traffic for the instance-specific networks.
Implication: You can use the service router that is created for the Tier-1 gateway for networking services. However, such configuration is not required for network connectivity.
VCF-NSX-BGP-REQD-CFG-020
Design Requirement: Set each local Tier-1 gateway only as primary in that instance. Avoid setting the gateway as secondary in the other instances.
Justification: Prevents the need to use BGP attributes in primary and secondary instances to influence the instance ingress-egress preference.
Implication: None.
Table 15-38. BGP Routing Design Recommendations for VMware Cloud Foundation
VCF-NSX-BGP-RCMD-CFG-002
Design Recommendation: Configure the BGP Keep Alive Timer to 4 and Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway.
Justification: Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic.
Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. These timers must be aligned with the data center fabric design of your organization.
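The relationship between the two timers can be sketched as follows. The hold timer is conventionally three times the keepalive, and its expiry bounds the worst-case time to declare a dead peer, which is the trade-off the recommendation describes:

```python
def bgp_failure_detection(keepalive_s: int, multiplier: int = 3):
    """Derive the hold timer from the keepalive using the conventional 3x
    multiplier. A peer is declared down only when the hold timer expires
    without any keepalive being received, so the hold time is also the
    worst-case failure-detection time."""
    hold = keepalive_s * multiplier
    return {"keepalive": keepalive_s, "hold": hold, "worst_case_detection": hold}

recommended = bgp_failure_detection(4)   # 4 s keepalive, 12 s hold, per this design
default = bgp_failure_detection(60)      # common router default: 60 s / 180 s
```

The recommended 4/12 timers detect a failed peer in at most 12 seconds instead of the 180 seconds typical of router defaults, at the cost of fifteen times more keepalive traffic per session.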
Table 15-38. BGP Routing Design Recommendations for VMware Cloud Foundation (continued)
Table 15-39. BGP Routing Design Recommendations for NSX Federation in VMware Cloud
Foundation
VCF-NSX-OVERLAY-REQD-CFG-002
Design Requirement: Configure each ESXi host as a transport node without using transport node profiles.
Justification:
- Enables the participation of ESXi hosts and the virtual machines on them in NSX overlay and VLAN networks.
- Transport node profiles can only be applied at the cluster level. Because, in an environment with multiple availability zones, each availability zone is connected to a different set of VLANs, you cannot use a transport node profile.
Implication: You must configure each transport node with an uplink profile individually.
VCF-NSX-OVERLAY-REQD-CFG-004
Design Requirement: Create a single overlay transport zone for all overlay traffic across the workload domain and NSX Edge nodes.
Justification:
- Ensures that overlay segments are connected to an NSX Edge node for services and north-south routing.
- Ensures that all segments are available to all ESXi hosts and NSX Edge nodes configured as transport nodes.
Implication: All clusters in all workload domains that share the same NSX Manager share the same transport zone.
Table 15-40. Overlay Design Requirements for VMware Cloud Foundation (continued)
VCF-NSX-OVERLAY-REQD-CFG-005
Design Requirement: Create a VLAN transport zone for uplink VLAN traffic that is applied only to NSX Edge nodes.
Justification: Ensures that uplink VLAN segments are configured on the NSX Edge transport nodes.
Implication: If VLAN segments are required on hosts, you must create another VLAN transport zone for the host transport nodes only.
VCF-NSX-OVERLAY-RCMD-CFG-001
Design Recommendation: Use DHCP to assign IP addresses to the host TEP interfaces.
Justification: Required for deployments where a cluster spans Layer 3 network domains, such as multiple availability zones and management clusters that span Layer 3 domains.
Implication: A DHCP server is required for the host overlay VLANs.
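Why DHCP matters here can be sketched with per-zone TEP scopes, assuming hypothetical subnets: a static IP pool is bound to a single subnet, whereas DHCP can hand each host an address from the host overlay subnet local to its Layer 3 domain:

```python
import ipaddress

# Hypothetical host overlay (TEP) subnets, one per availability zone.
# A cluster that spans both zones cannot draw all its TEP addresses
# from one static pool, because each zone routes a different subnet.
TEP_SCOPES = {
    "az1": ipaddress.ip_network("172.16.14.0/24"),
    "az2": ipaddress.ip_network("172.17.14.0/24"),
}

def lease_tep(zone: str, host_index: int) -> str:
    """Hand out a TEP address from the DHCP scope local to the host's
    Layer 3 domain, starting at .10 to leave room for infrastructure
    addresses (illustrative convention only)."""
    scope = TEP_SCOPES[zone]
    return str(scope.network_address + 10 + host_index)
```

Each host receives an address that is routable within its own zone, which is the behavior a per-VLAN DHCP scope provides and a single static pool cannot.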
VCF-NSX-AVN-REQD-CFG-001
Design Requirement: Create one cross-instance NSX segment for the components of a vRealize Suite application or another solution that requires mobility between VMware Cloud Foundation instances.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as vRealize Suite, without a complex physical network configuration. The components of the vRealize Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.
VCF-NSX-AVN-REQD-CFG-002
Design Requirement: Create one or more local-instance NSX segments for the components of a vRealize Suite application or another solution that are assigned to a specific VMware Cloud Foundation instance.
Justification: Prepares the environment for the deployment of solutions on top of VMware Cloud Foundation, such as vRealize Suite, without a complex physical network configuration.
Implication: Each NSX segment requires a unique IP address space.
Table 15-43. Application Virtual Network Design Requirements for NSX Federation in VMware
Cloud Foundation
VCF-NSX-AVN-REQD-CFG-003
Design Requirement: Extend the cross-instance NSX segment to the second VMware Cloud Foundation instance.
Justification: Enables workload mobility without a complex physical network configuration. The components of a vRealize Suite application must be easily portable between VMware Cloud Foundation instances without requiring reconfiguration.
Implication: Each NSX segment requires a unique IP address space.
VCF-NSX-AVN-REQD-CFG-004
Design Requirement: In each VMware Cloud Foundation instance, create additional local-instance NSX segments.
Justification: Enables workload mobility within a VMware Cloud Foundation instance without complex physical network configuration. Each VMware Cloud Foundation instance should have network segments to support workloads which are isolated to that VMware Cloud Foundation instance.
Implication: Each NSX segment requires a unique IP address space.
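Because each NSX segment requires a unique IP address space, a quick overlap check across planned segments can catch conflicts before deployment. A minimal sketch with hypothetical segment names and subnets:

```python
import ipaddress

def overlapping_segments(segments):
    """Return pairs of segment names whose subnets overlap.

    Each NSX segment must get a unique IP address space, so this list
    should be empty before the segments are created. Segment names and
    subnets passed in are whatever your addressing plan defines.
    """
    nets = {name: ipaddress.ip_network(cidr) for name, cidr in segments.items()}
    names = sorted(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]
```

Running the check over a draft plan, for example `{"xreg-seg": "192.168.11.0/24", "local-seg-a": "192.168.31.0/24", "local-seg-b": "192.168.31.0/25"}`, flags the two local segments as conflicting.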
Table 15-44. Application Virtual Network Design Recommendations for VMware Cloud
Foundation
VCF-NSX-LB-REQD-CFG-002
Design Requirement: When creating load balancing services for Application Virtual Networks, connect the standalone Tier-1 gateway to the cross-instance NSX segments.
Justification: Provides load balancing to applications connected to the cross-instance network.
Implication: You must connect the gateway to each network that requires load balancing.
Table 15-46. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-NSX-LB-REQD-CFG-005
Design Requirement: Connect the standalone Tier-1 gateway in the second VMware Cloud Foundation instance to the cross-instance NSX segment.
Justification: Provides load balancing to applications connected to the cross-instance network in the second VMware Cloud Foundation instance.
Implication: You must connect the gateway to each network that requires load balancing.
Table 15-46. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation (continued)
For full design details, see Chapter 9 SDDC Manager Design for VMware Cloud Foundation.
Table 15-47. SDDC Manager Design Requirements for VMware Cloud Foundation
Table 15-48. SDDC Manager Design Recommendations for VMware Cloud Foundation
VCF-SDDCMGR-RCMD-CFG-001
Design Recommendation: Connect SDDC Manager to the Internet for downloading software bundles.
Justification: SDDC Manager must be able to download install and upgrade software bundles for deployment of VI workload domains and solutions, and for upgrade from a repository.
Implication: The rules of your organization might not permit direct access to the Internet. In this case, you must download software bundles for SDDC Manager manually.
VCF-SDDCMGR-RCMD-CFG-002
Design Recommendation: Configure a network proxy to connect SDDC Manager to the Internet.
Justification: Protects SDDC Manager against external attacks from the Internet.
Implication: The proxy must not use authentication because SDDC Manager does not support a proxy with authentication.
Table 15-48. SDDC Manager Design Recommendations for VMware Cloud Foundation
(continued)
VCF-SDDCMGR-RCMD-CFG-003
Design Recommendation: To check for and download software bundles, configure SDDC Manager with a VMware Customer Connect account with VMware Cloud Foundation entitlement.
Justification: Software bundles for VMware Cloud Foundation are stored in a repository that is secured with access controls.
Implication: Requires the use of a VMware Customer Connect user account with access to VMware Cloud Foundation licensing. Sites without an Internet connection can use the local upload option instead.
For full design details, see Chapter 10 vRealize Suite Lifecycle Manager Design for VMware Cloud
Foundation.
Table 15-49. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud
Foundation
VCF-vRSLCM-REQD-CFG-001
Design Requirement: Deploy a vRealize Suite Lifecycle Manager instance in the management domain of each VMware Cloud Foundation instance to provide life cycle management for vRealize Suite and Workspace ONE Access.
Justification: Provides life cycle management operations for vRealize Suite applications and Workspace ONE Access.
Implication: You must ensure that the required resources are available.
VCF-vRSLCM-REQD-CFG-004
Design Requirement: Place the vRealize Suite Lifecycle Manager appliance on an overlay-backed (recommended) or VLAN-backed NSX network segment.
Justification: Provides a consistent deployment model for management applications.
Implication: You must use an implementation in NSX to support this networking configuration.
VCF-vRSLCM-REQD-CFG-005
Design Requirement: Import vRealize Suite product licenses to the Locker repository for product life cycle operations.
Justification:
- You can review the validity, details, and deployment usage for the license across the vRealize Suite products.
- You can reference and use licenses during product life cycle operations, such as deployment and license replacement.
Implication: When using the API, you must specify the Locker ID for the license to be used in the JSON payload.
Table 15-49. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud
Foundation (continued)
VCF-vRSLCM-REQD-ENV-001
Design Requirement: Configure data center objects in vRealize Suite Lifecycle Manager for local and cross-instance vRealize Suite deployments and assign the management domain vCenter Server instance to each data center.
Justification: You can deploy and manage the integrated vRealize Suite components across the SDDC as a group.
Implication: You must manage a separate data center object for the products that are specific to each instance.
VCF-vRSLCM-REQD-ENV-003
Design Requirement: If deploying vRealize Operations or vRealize Automation, create a cross-instance environment in vRealize Suite Lifecycle Manager.
Justification:
- Supports deployment and management of the integrated vRealize Suite products across VMware Cloud Foundation instances as a group.
- Enables the deployment of instance-specific components, such as vRealize Operations remote collectors. In vRealize Suite Lifecycle Manager, you can deploy and manage vRealize Operations remote collector objects only in an environment that contains the associated cross-instance components.
Implication: You can manage instance-specific components, such as remote collectors, only in an environment that is cross-instance.
Table 15-49. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud
Foundation (continued)
VCF-vRSLCM-REQD-SEC-001
Design Requirement: Use the custom vCenter Server role for vRealize Suite Lifecycle Manager that has the minimum privileges required to support the deployment and upgrade of vRealize Suite products.
Justification: vRealize Suite Lifecycle Manager accesses vSphere with the minimum set of permissions that are required to support the deployment and upgrade of vRealize Suite products. SDDC Manager automates the creation of the custom role.
Implication: You must maintain the permissions required by the custom role.
VCF-vRSLCM-REQD-SEC-002
Design Requirement: Use the service account in vCenter Server for application-to-application communication from vRealize Suite Lifecycle Manager to vSphere. Assign global permissions using the custom role.
Justification:
- Provides the following access control features:
  - vRealize Suite Lifecycle Manager accesses vSphere with the minimum set of required permissions.
  - You can introduce improved accountability in tracking request-response interactions between the components of the SDDC.
- SDDC Manager automates the creation of the service account.
Implication: You must maintain the life cycle and availability of the service account outside of SDDC Manager password rotation.
Table 15-50. vRealize Suite Lifecycle Manager Design Requirements for Stretched Clusters in
VMware Cloud Foundation
VCF-vRSLCM-REQD-CFG-006
Design Requirement: For multiple availability zones, add the vRealize Suite Lifecycle Manager appliance to the VM group for the first availability zone.
Justification: Ensures that, by default, the vRealize Suite Lifecycle Manager appliance is powered on a host in the first availability zone.
Implication: If vRealize Suite Lifecycle Manager is deployed after the creation of the stretched management cluster, you must add the vRealize Suite Lifecycle Manager appliance to the VM group manually.
Table 15-51. vRealize Suite Lifecycle Manager Design Requirements for NSX Federation in
VMware Cloud Foundation
VCF-vRSLCM-REQD-CFG-007
Design Requirement: Configure the DNS settings for the vRealize Suite Lifecycle Manager appliance to use DNS servers in each instance.
Justification: Improves resiliency in the event of an outage of external services for a VMware Cloud Foundation instance.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the DNS settings of the vRealize Suite Lifecycle Manager appliance must be updated.
VCF-vRSLCM-REQD-CFG-008
Design Requirement: Configure the NTP settings for the vRealize Suite Lifecycle Manager appliance to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on the vRealize Suite Lifecycle Manager appliance must be updated.
Table 15-52. vRealize Suite Lifecycle Manager Design Recommendations for VMware Cloud
Foundation
VCF-vRSLCM-RCMD-LCM-001
Design Recommendation: Obtain product binaries for install, patch, and upgrade in vRealize Suite Lifecycle Manager from VMware Customer Connect.
Justification:
- You can upgrade vRealize Suite products based on their general availability and endpoint interoperability rather than being listed as part of the VMware Cloud Foundation bill of materials (BOM).
- You can deploy and manage binaries in an environment that does not allow access to the Internet or is a dark site.
Implication: The site must have an Internet connection to use VMware Customer Connect. Sites without an Internet connection should use the local upload option instead.
Table 15-52. vRealize Suite Lifecycle Manager Design Recommendations for VMware Cloud
Foundation (continued)
VCF-vRSLCM-RCMD-SEC-001
Design Recommendation: Enable integration between vRealize Suite Lifecycle Manager and your corporate identity source by using the Workspace ONE Access instance.
Justification:
- Enables authentication to vRealize Suite Lifecycle Manager by using your corporate identity source.
- Enables authorization through the assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.
Implication: You must deploy and configure Workspace ONE Access to establish the integration between vRealize Suite Lifecycle Manager and your corporate identity sources.
VCF-vRSLCM-RCMD-SEC-002
Design Recommendation: Create corresponding security groups in your corporate directory services for these vRealize Suite Lifecycle Manager roles:
- VCF
- Content Release Manager
- Content Developer
Justification: Streamlines the management of vRealize Suite Lifecycle Manager roles for users.
Implication:
- You must create the security groups outside of the SDDC stack.
- You must set the desired directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.
For full design details, see Chapter 11 Workspace ONE Access Design for VMware Cloud
Foundation.
Table 15-53. Workspace ONE Access Design Requirements for VMware Cloud Foundation
VCF-WSA-REQD-SEC-001
Design Requirement: Import certificate authority-signed certificates to the Locker repository for Workspace ONE Access product life cycle operations.
Justification: You can reference and use certificate authority-signed certificates during product life cycle operations, such as deployment and certificate replacement.
Implication: When using the API, you must specify the Locker ID for the certificate to be used in the JSON payload.
Table 15-53. Workspace ONE Access Design Requirements for VMware Cloud Foundation
(continued)
VCF-WSA-REQD-CFG-007
Design Requirement: If using clustered Workspace ONE Access, use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway.
Justification: During the deployment of Workspace ONE Access by using vRealize Suite Lifecycle Manager, SDDC Manager automates the configuration of an NSX load balancer for Workspace ONE Access to facilitate scale-out.
Implication: You must use the load balancer that is configured by SDDC Manager and the integration with vRealize Suite Lifecycle Manager.
Table 15-54. Workspace ONE Access Design Requirements for Stretched Clusters in VMware
Cloud Foundation
VCF-WSA-REQD-CFG-008
Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
Implication:
- If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually.
- Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.
Table 15-55. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-WSA-REQD-CFG-010
Design Requirement: Configure the NTP settings on Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.
Table 15-56. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
VCF-WSA-RCMD-CFG-001
Design Recommendation: Protect all Workspace ONE Access nodes using vSphere HA.
Justification: Supports high availability for Workspace ONE Access.
Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.
VCF-WSA-RCMD-CFG-003
Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with a minimum of read-only access to Base DNs for users and groups as the service account for the Active Directory bind.
Justification: Provides the following access control features:
- Workspace ONE Access connects to the Active Directory with the minimum set of required permissions to bind and query the directory.
- You can introduce improved accountability in tracking request-response interactions between Workspace ONE Access and Active Directory.
Implication:
- You must manage the password life cycle of this account.
- If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector bind to each Active Directory domain over LDAP.
Table 15-56. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)
VCF-WSA-RCMD-CFG-004
Design Recommendation: Configure the directory synchronization to synchronize only groups required for the integrated SDDC solutions.
Justification:
- Limits the number of replicated groups required for each product.
- Reduces the replication interval for group information.
Implication: You must manage the groups from your enterprise directory selected for synchronization to Workspace ONE Access.
VCF-WSA-RCMD-CFG-007
Design Recommendation: Add a filter to the Workspace ONE Access directory settings to exclude users from the directory replication.
Justification: Limits the number of replicated users for Workspace ONE Access within the maximum scale.
Implication: To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.
Table 15-56. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)
VCF-WSA-RCMD-CFG-008
Design Recommendation: Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.
Justification: You can configure the minimum required and extended user attributes to synchronize directory user accounts for Workspace ONE Access to be used as an authentication source for cross-instance vRealize Suite solutions.
Implication: User accounts in your organization's enterprise directory must have the following required attributes mapped:
- firstName, for example, givenName for Active Directory
- lastName, for example, sn for Active Directory
VCF-WSA-RCMD-CFG-009
Design Recommendation: Configure the Workspace ONE Access directory synchronization frequency to a recurring schedule, for example, 15 minutes.
Justification: Ensures that any changes to group memberships in the corporate directory are available for integrated solutions in a timely manner.
Implication: Schedule the synchronization interval to be longer than the time to synchronize from the enterprise directory. If users and groups are being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.
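The scheduling behavior described above can be sketched as follows: when a run is still in progress at the time the next one is due, the next run starts immediately after the previous one finishes, so an interval shorter than the synchronization duration makes the process continuous:

```python
def sync_start_times(interval_min: float, duration_min: float, count: int):
    """Start times (in minutes) of successive directory synchronizations.

    The next run is due one interval after the previous start, but it
    cannot begin before the previous run finishes; whichever comes
    later wins.
    """
    starts, t = [], 0.0
    for _ in range(count):
        starts.append(t)
        t = max(t + interval_min, t + duration_min)
    return starts
```

With a 15-minute interval and a 5-minute sync, runs start every 15 minutes as scheduled; with a 20-minute sync, they start back to back every 20 minutes, which is the continuous behavior the implication warns about.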
Table 15-56. Workspace ONE Access Design Recommendations for VMware Cloud Foundation
(continued)
VCF-WSA-RCMD-SEC-002
Design Recommendation: Configure a password policy for Workspace ONE Access local directory users, admin and configadmin.
Justification: You can set a policy for Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards. The password policy is applicable only to the local directory users and does not impact your organization directory.
Implication: You must set the policy in accordance with your organization policies and regulatory standards, as applicable. You must apply the password policy on the Workspace ONE Access cluster nodes.
For full design details, see Chapter 12 Life Cycle Management Design for VMware Cloud
Foundation.
Table 15-57. Life Cycle Management Design Requirements for VMware Cloud Foundation
VCF-LCM-REQD-001
Design Requirement: Use SDDC Manager to perform the life cycle management of the following components:
- SDDC Manager
- NSX Manager
- NSX Edges
- vCenter Server
- ESXi
Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.
VCF-LCM-REQD-002
Design Requirement: Use vRealize Suite Lifecycle Manager to manage the life cycle of the following components:
- vRealize Suite Lifecycle Manager
- Workspace ONE Access
Justification: vRealize Suite Lifecycle Manager automates the life cycle of vRealize Suite Lifecycle Manager and Workspace ONE Access.
Implication:
- You must deploy vRealize Suite Lifecycle Manager by using SDDC Manager.
- You must manually apply patches, updates, and hotfixes for Workspace ONE Access. Patches, updates, and hotfixes for Workspace ONE Access are not generally managed by vRealize Suite Lifecycle Manager.
Table 15-58. Lifecycle Management Design Requirements for NSX Federation in VMware Cloud
Foundation
VCF-LCM-REQD-003
Design Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
Implication:
- You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure prior to the upgrade of the NSX Global Manager nodes.
- You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.
VCF-LCM-REQD-004
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated in relation to the need to upgrade NSX Global Manager.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
VCF-LCM-REQD-005
Design Requirement: Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
For full design details, see Chapter 14 Information Security Design for VMware Cloud Foundation.
Table 15-59. Design Requirements for Account and Password Management for VMware Cloud
Foundation
VCF-ACTMGT-REQD-SEC-001
Design Requirement: Enable scheduled password rotation in SDDC Manager for all accounts supporting scheduled rotation.
Justification:
- Increases the security posture of your SDDC.
- Simplifies password management across your SDDC management components.
Implication: You must retrieve new passwords by using the API if you must use accounts interactively.
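A rotation schedule of this kind reduces to a due-date check per account, skipping accounts that do not support scheduled rotation. The sketch below is illustrative only; the account names and the 30-day interval are hypothetical, not SDDC Manager defaults:

```python
from datetime import date, timedelta

# Hypothetical policy value; set this to match your organization's policy.
ROTATION_INTERVAL = timedelta(days=30)

def accounts_due_for_rotation(accounts, today):
    """Return names of accounts whose password age meets or exceeds the
    rotation interval. Each account maps to (last_rotated, supports_rotation);
    accounts that do not support scheduled rotation are skipped, matching
    the requirement to enable rotation only where it is supported."""
    return [
        name
        for name, (last_rotated, supports_rotation) in accounts.items()
        if supports_rotation and today - last_rotated >= ROTATION_INTERVAL
    ]
```

An account rotated 12 days ago is not yet due, while one rotated five months ago is, and an account flagged as unsupported is never scheduled regardless of age.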
Table 15-60. Certificate Management Design Recommendations for VMware Cloud Foundation
VCF-SDDC-RCMD-SEC-001
Design Recommendation: Replace the default VMCA-signed certificate on all management virtual appliances with a certificate that is signed by an internal certificate authority.
Justification: Ensures that the communication to all management components is secure.
Implication: Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.
VCF-SDDC-RCMD-SEC-002
Design Recommendation: Use a SHA-2 algorithm or higher for signed certificates.
Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
Implication: Not all certificate authorities support SHA-2 or higher.
VCF-SDDC-RCMD-SEC-003
Design Recommendation: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
Justification: SDDC Manager supports automated SSL certificate life cycle management rather than requiring a series of manual steps.
Implication: Certificate management for NSX Global Manager instances must be done manually.