
VMware Cloud Foundation

Design Guide
01 JUN 2023
VMware Cloud Foundation 5.0

You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

© Copyright 2021-2023 VMware, Inc. All rights reserved. Copyright and trademark information.

Contents

About VMware Cloud Foundation Design Guide 7

1 VMware Cloud Foundation Concepts 10


Architecture Models and Workload Domain Types in VMware Cloud Foundation 10
VMware Cloud Foundation Topologies 15
Single Instance - Single Availability Zone 17
Single Instance - Multiple Availability Zones 18
Multiple Instances - Single Availability Zone per Instance 19
Multiple Instances - Multiple Availability Zones per Instance 21
VMware Cloud Foundation Design Blueprints 23
Design Blueprint One: Rainpole 24

2 Workload Domain to Rack Mapping in VMware Cloud Foundation 29

3 Supported Storage Types for VMware Cloud Foundation 33

4 External Services Design for VMware Cloud Foundation 34

5 Physical Network Infrastructure Design for VMware Cloud Foundation 35


VLANs and Subnets for VMware Cloud Foundation 35
Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation 38

6 vSAN Design for VMware Cloud Foundation 44


Logical Design for vSAN for VMware Cloud Foundation 44
Hardware Configuration for vSAN for VMware Cloud Foundation 46
Network Design for vSAN for VMware Cloud Foundation 47
vSAN Witness Design for VMware Cloud Foundation 48
vSAN Design Requirements and Recommendations for VMware Cloud Foundation 50

7 vSphere Design for VMware Cloud Foundation 55


ESXi Design for VMware Cloud Foundation 55
Logical Design for ESXi for VMware Cloud Foundation 56
Sizing Considerations for ESXi for VMware Cloud Foundation 56
ESXi Design Requirements and Recommendations for VMware Cloud Foundation 58
vCenter Server Design for VMware Cloud Foundation 62
Logical Design for vCenter Server for VMware Cloud Foundation 62
Sizing Considerations for vCenter Server for VMware Cloud Foundation 64


High Availability Design for vCenter Server for VMware Cloud Foundation 64
vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation 65
vCenter Single Sign-On Design Requirements for VMware Cloud Foundation 68
vSphere Cluster Design for VMware Cloud Foundation 73
Logical vSphere Cluster Design for VMware Cloud Foundation 73
vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation 76
vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation 76
vSphere Networking Design for VMware Cloud Foundation 81
Logical vSphere Networking Design for VMware Cloud Foundation 81
vSphere Networking Design Recommendations for VMware Cloud Foundation 83

8 NSX Design for VMware Cloud Foundation 86


Logical Design for NSX for VMware Cloud Foundation 87
NSX Manager Design for VMware Cloud Foundation 93
Sizing Considerations for NSX Manager for VMware Cloud Foundation 94
NSX Manager Design Requirements and Recommendations for VMware Cloud Foundation 94
NSX Global Manager Design Requirements and Recommendations for VMware Cloud Foundation 96
NSX Edge Node Design for VMware Cloud Foundation 99
Deployment Model for the NSX Edge Nodes for VMware Cloud Foundation 99
Sizing Considerations for NSX Edges for VMware Cloud Foundation 101
Network Design for the NSX Edge Nodes for VMware Cloud Foundation 101
NSX Edge Node Requirements and Recommendations for VMware Cloud Foundation 103
Routing Design for VMware Cloud Foundation 107
BGP Routing Design for VMware Cloud Foundation 107
BGP Routing Design Requirements and Recommendations for VMware Cloud Foundation 114
Overlay Design for VMware Cloud Foundation 124
Logical Overlay Design for VMware Cloud Foundation 124
Overlay Design Requirements and Recommendations for VMware Cloud Foundation 126
Application Virtual Network Design for VMware Cloud Foundation 128
Logical Application Virtual Network Design for VMware Cloud Foundation 129
Application Virtual Network Design Requirements and Recommendations for VMware Cloud Foundation 131
Load Balancing Design for VMware Cloud Foundation 133
Logical Load Balancing Design for VMware Cloud Foundation 133
Load Balancing Design Requirements for VMware Cloud Foundation 135

9 SDDC Manager Design for VMware Cloud Foundation 138


Logical Design for SDDC Manager 138


SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation 140

10 vRealize Suite Lifecycle Manager Design for VMware Cloud Foundation 142
Logical Design for vRealize Suite Lifecycle Manager for VMware Cloud Foundation 143
Network Design for vRealize Suite Lifecycle Manager 145
Data Center and Environment Design for vRealize Suite Lifecycle Manager 146
Locker Design for vRealize Suite Lifecycle Manager 148
vRealize Suite Lifecycle Manager Design Requirements and Recommendations for VMware Cloud Foundation 149

11 Workspace ONE Access Design for VMware Cloud Foundation 155


Logical Design for Workspace ONE Access 156
Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation 160
Network Design for Workspace ONE Access 161
Integration Design for Workspace ONE Access with VMware Cloud Foundation 165
Deployment Model for Workspace ONE Access 166
Workspace ONE Access Design Requirements and Recommendations for VMware Cloud Foundation 167

12 Life Cycle Management Design for VMware Cloud Foundation 175

13 Logging and Monitoring Design for VMware Cloud Foundation 178

14 Information Security Design for VMware Cloud Foundation 179


Access Management for VMware Cloud Foundation 179
Account Management Design for VMware Cloud Foundation 180
Certificate Management for VMware Cloud Foundation 185

15 Appendix: Design Elements for VMware Cloud Foundation 186


Architecture Design Elements for VMware Cloud Foundation 186
Workload Domain Design Elements for VMware Cloud Foundation 187
External Services Design Elements for VMware Cloud Foundation 187
Physical Network Design Elements for VMware Cloud Foundation 188
vSAN Design Elements for VMware Cloud Foundation 193
ESXi Design Elements for VMware Cloud Foundation 198
vCenter Server Design Elements for VMware Cloud Foundation 200
vSphere Cluster Design Elements for VMware Cloud Foundation 204
vSphere Networking Design Elements for VMware Cloud Foundation 208
NSX Design Elements for VMware Cloud Foundation 210
SDDC Manager Design Elements for VMware Cloud Foundation 235
vRealize Suite Lifecycle Manager Design Elements for VMware Cloud Foundation 237


Workspace ONE Access Design Elements for VMware Cloud Foundation 242
Life Cycle Management Design Elements for VMware Cloud Foundation 249
Information Security Design Elements for VMware Cloud Foundation 252

About VMware Cloud Foundation Design Guide

The VMware Cloud Foundation Design Guide contains a design model for VMware Cloud
Foundation (also called VCF) that is based on industry best practices for SDDC implementation.

The VMware Cloud Foundation Design Guide provides the supported design options for VMware
Cloud Foundation, and a set of decision points, justifications, implications, and considerations for
building each component.

Intended Audience
This VMware Cloud Foundation Design Guide is intended for cloud architects who are familiar
with and want to use VMware Cloud Foundation to deploy and manage an SDDC that meets the
requirements for capacity, scalability, backup and restore, and extensibility for disaster recovery
support.

Before You Apply This Guidance


The sequence of the VMware Cloud Foundation documentation follows the stages for
implementing and maintaining an SDDC.

To apply this VMware Cloud Foundation Design Guide, you must be acquainted with the Getting
Started with VMware Cloud Foundation documentation and with the VMware Cloud Foundation
Release Notes. See VMware Cloud Foundation documentation.
For performance best practices for vSphere, see Performance Best Practices for VMware
vSphere 8.0 Update 1.

Design Elements
This VMware Cloud Foundation Design Guide contains requirements and recommendations for
the design of each component of the SDDC. In situations where a configuration choice exists,
requirements and recommendations are available for each choice. Implement only those that are
relevant to your target configuration.


Design Element: Requirement
Description: Required for the operation of VMware Cloud Foundation. Deviations are not permitted.

Design Element: Recommendation
Description: Recommended as a best practice. Deviations are permitted.

VMware Cloud Foundation Deployment Options in This Design
This design guidance applies to all architecture models of VMware Cloud Foundation. By following
the guidance, you can examine the design for these deployment options:

- Single VMware Cloud Foundation instance.

- Single VMware Cloud Foundation instance with multiple availability zones (also known as
  stretched deployment). The default vSphere cluster of the workload domain is stretched
  between two availability zones by using Chapter 6 vSAN Design for VMware Cloud Foundation
  and configuring vSphere Cluster Design Requirements and Recommendations for VMware Cloud
  Foundation and BGP Routing Design for VMware Cloud Foundation accordingly.

- Multiple VMware Cloud Foundation instances. You deploy several instances of VMware Cloud
  Foundation to address requirements for scale and co-location of users and resources.

  For disaster recovery, workload mobility, or propagation of common configuration to multiple
  VMware Cloud Foundation instances, you can deploy NSX Federation (see Chapter 8 NSX Design
  for VMware Cloud Foundation) for the SDDC management and workload components.

- Multiple VMware Cloud Foundation instances with multiple availability zones. You apply the
  configuration for stretched clusters for a single VMware Cloud Foundation instance to one or
  more additional VMware Cloud Foundation instances in your environment.

vCenter Single Sign-On Options in This Design


This design guidance covers the topology with a single vCenter Single Sign-On domain in a
VMware Cloud Foundation instance and the topology with several isolated vCenter Single Sign-
On domains in a single instance. See vCenter Single Sign-On Design Requirements for VMware
Cloud Foundation.

VMware Cloud Foundation Design Blueprints


You can follow design blueprints for selected architecture models and topologies that list the
applicable design elements. See VMware Cloud Foundation Design Blueprints.


VMware Cloud Foundation Glossary


See the VMware Cloud Foundation Glossary for constructs, operations, and other terms specific
to VMware Cloud Foundation. It is important to understand these constructs before continuing
with this design guidance.

1 VMware Cloud Foundation Concepts
To design a VMware Cloud Foundation deployment, you need to understand certain VMware
Cloud Foundation concepts.
Read the following topics next:

- Architecture Models and Workload Domain Types in VMware Cloud Foundation
- VMware Cloud Foundation Topologies
- VMware Cloud Foundation Design Blueprints

Architecture Models and Workload Domain Types in VMware Cloud Foundation
When you design a VMware Cloud Foundation deployment, you decide what architecture model,
that is, standard or consolidated, and what workload domain types, for example, consolidated,
isolated, or standard, to implement according to the requirements for hardware, expected
number of workloads and workload domains, co-location of management and customer
workloads, identity isolation, and other factors.

Architecture Models
Decide on a model according to your organization's requirements and your environment's
resource capabilities. Implement a standard architecture for workload provisioning and mobility
across VMware Cloud Foundation instances according to production best practices. If you plan to
deploy a small-scale environment, or if you are working on an SDDC proof-of-concept, implement
a consolidated architecture.


Figure 1-1. Choosing a VMware Cloud Foundation Architecture Model
[Decision flow: if you do not need to minimize hardware requirements, or if management and customer workloads cannot be co-located, use the standard architecture model; otherwise, use the consolidated architecture model.]

Table 1-1. Architecture Model Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-ARCH-RCMD-CFG-001
Design Recommendation: Use the standard architecture model of VMware Cloud Foundation.
Justification:
- Aligns with the VMware best practice of separating management workloads from customer workloads.
- Provides better long-term flexibility and expansion options.
Implication: Requires additional hardware.

Workload Domain Types


A workload domain represents a logical unit of application-ready infrastructure that groups ESXi
hosts managed by a vCenter Server instance with specific characteristics according to VMware
recommended practices. A workload domain can consist of one or more vSphere clusters,
provisioned by SDDC Manager.


Table 1-2. Workload Domain Types

Management Domain
Description:
- First domain deployed.
- Contains the following management appliances for all workload domains: vCenter Server, NSX Manager, SDDC Manager, optional vRealize Suite components, and optional management domain NSX Edge nodes.
- Has dedicated ESXi hosts.
- First domain to upgrade.
Benefits:
- Guaranteed sufficient resources for management components.
Drawbacks:
- You must carefully size the domain to accommodate the planned deployment of VI workload domains and additional management components.
- Hardware might not be fully utilized until full-scale deployment has been reached.

Consolidated Domain
Description:
- Represents a management domain which also runs customer workloads.
- Uses resource pools to ensure sufficient resources for management components.
Benefits:
- Considers the minimum possible initial hardware and management component footprint.
- Can be scaled to a standard architecture model.
Drawbacks:
- Management components and customer workloads are not isolated.
- You must constantly monitor it to ensure sufficient resources for management components.
- Migrating customer workloads to dedicated VI workload domains is more complex.

VI Workload Domain
Description:
- Represents an additional workload domain for running customer workloads.
- Shares a vCenter Server Single Sign-On domain with the management domain.
- Shares identity provider configuration with the management domain.
- Has dedicated ESXi hosts.
Benefits:
- Can share an NSX Manager instance with other VI workload domains.
- All workload domains can be managed through a single pane of glass.
- Minimizes password management overhead.
- Allows for independent life cycle management.
Drawbacks:
- This workload domain type cannot provide distinct vCenter Server Single Sign-On domains for customer workloads.
- You can scale up to 14 VI workload domains per VMware Cloud Foundation instance.

Isolated VI Workload Domain
Description:
- Represents an additional workload domain for running customer workloads.
- Has a distinct vCenter Server Single Sign-On domain.
- Has a distinct identity provider configuration.
- Has dedicated ESXi hosts.
Benefits:
- Can provide distinct vCenter Server Single Sign-On domains for customer workloads.
- Supports scale beyond 14 VI workload domains.
- Allows for independent life cycle management.
Drawbacks:
- Workload domains of this type cannot share an NSX Manager instance with other VI workload domains.
- Workload domains are managed through different panes of glass.
- Additional password management overhead exists.
- You can scale up to 24 VI workload domains per VMware Cloud Foundation instance.
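
The scale limits above lend themselves to a quick planning check. The following is a minimal Python sketch that validates a planned instance against the limits stated in Table 1-2 (up to 14 VI workload domains sharing the management domain's vCenter Single Sign-On domain, and up to 24 VI workload domains in total per instance). The function name and its inputs are illustrative assumptions, not part of any VMware tooling.

```python
# Sketch: validate a planned workload domain count against the documented
# per-instance limits (14 VI workload domains sharing the management SSO
# domain, 24 VI workload domains in total per VCF instance).
MAX_VI_DOMAINS_SHARING_MGMT_SSO = 14
MAX_VI_DOMAINS_PER_INSTANCE = 24


def validate_workload_domain_plan(shared_sso_vi_domains: int, isolated_vi_domains: int) -> list[str]:
    """Return a list of limit violations for one planned VCF instance."""
    violations = []
    total_vi = shared_sso_vi_domains + isolated_vi_domains
    if shared_sso_vi_domains > MAX_VI_DOMAINS_SHARING_MGMT_SSO:
        violations.append(
            f"{shared_sso_vi_domains} VI workload domains share the management SSO domain "
            f"(limit {MAX_VI_DOMAINS_SHARING_MGMT_SSO}); consider isolated VI workload domains."
        )
    if total_vi > MAX_VI_DOMAINS_PER_INSTANCE:
        violations.append(
            f"{total_vi} VI workload domains planned (limit {MAX_VI_DOMAINS_PER_INSTANCE} per "
            "instance); plan an additional VMware Cloud Foundation instance."
        )
    return violations


if __name__ == "__main__":
    for issue in validate_workload_domain_plan(shared_sso_vi_domains=16, isolated_vi_domains=10):
        print(issue)
```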


Figure 1-2. Choosing a VMware Cloud Foundation Workload Domain Type for Customer Workloads
[Decision flow: if you have chosen the consolidated architecture model, use the consolidated workload domain. Otherwise, if a shared NSX instance between the VI workload domains is required, use VI workload domains. If more than 14 VI workload domains per VMware Cloud Foundation instance are required, or if a VI workload domain with a dedicated vCenter Single Sign-On domain is required, use isolated VI workload domains; otherwise, use VI workload domains.]


Table 1-3. Workload Domain Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-WLD-RCMD-CFG-001
Design Recommendation: Use VI workload domains or isolated VI workload domains for customer workloads.
Justification:
- Aligns with the VMware best practice of separating management workloads from customer workloads.
- Provides better long-term flexibility and expansion options.
Implication: Requires additional hardware.

VMware Cloud Foundation Topologies


VMware Cloud Foundation supports multiple topologies that provide different levels of
availability and scale.

Availability Zones and VMware Cloud Foundation Instances


Availability zone

An availability zone is a fault domain at the SDDC level.

You create multiple availability zones for the purpose of creating vSAN stretched clusters.
Using multiple availability zones can improve availability of management components and
workloads running within the SDDC, minimize downtime of services, and improve SLAs.

Availability zones are typically located either within the same data center but in different
racks, chassis, or rooms, or in different data centers with low-latency, high-speed links
connecting them. One availability zone can contain several fault domains.

Note VMware Cloud Foundation considers and treats as stretched clusters only clusters
created by using the Stretch Cluster API because such clusters are using vSAN storage.

VMware Cloud Foundation Instance

Each VMware Cloud Foundation instance is a separate VMware Cloud Foundation deployment and
might contain one or two availability zones. VMware Cloud Foundation instances may be
geographically separate.

VMware Cloud Foundation Topologies


Several topologies of VMware Cloud Foundation exist according to the number of availability
zones and VMware Cloud Foundation instances.


Table 1-4. VMware Cloud Foundation Topologies

Topology Description

Single Instance - Single Availability Zone Workload domains are deployed in a single availability
zone.

Single Instance - Multiple Availability Zones Workload domains might be stretched between two
availability zones.

Multiple Instances - Single Availability Zone per VMware Workload domains in each instance are deployed in a
Cloud Foundation instance single availability zone.

Multiple Instances - Multiple Availability Zones per Workload domains in each instance might be stretched
VMware Cloud Foundation instance between two availability zones.

Figure 1-3. Choosing a VMware Cloud Foundation Topology
[Decision flow: if you do not need disaster recovery or more than 24 VI workload domains, use a single instance; otherwise, use multiple instances. In either case, if you need vSAN stretched clusters, use the corresponding Multiple Availability Zones topology; otherwise, use the Single Availability Zone topology.]

What to read next

- Single Instance - Single Availability Zone
  Single Instance - Single Availability Zone is the simplest VMware Cloud Foundation topology where workload domains are deployed in a single availability zone.

- Single Instance - Multiple Availability Zones
  You protect your VMware Cloud Foundation environment against a failure of a single hardware fault domain by implementing multiple availability zones.

- Multiple Instances - Single Availability Zone per Instance
  You protect against a failure of a single VMware Cloud Foundation instance by implementing multiple VMware Cloud Foundation instances.

- Multiple Instances - Multiple Availability Zones per Instance
  You protect against a failure of a single VMware Cloud Foundation instance by implementing multiple VMware Cloud Foundation instances. Implementing multiple availability zones in an instance protects against a failure of a single hardware fault domain.

Single Instance - Single Availability Zone


Single Instance - Single Availability Zone is the simplest VMware Cloud Foundation topology
where workload domains are deployed in a single availability zone.

The Single Instance - Single Availability Zone topology relies on vSphere HA to protect against
host failures.

Figure 1-4. Single VMware Cloud Foundation Instance with a Single Availability Zone
[Figure: a single VCF instance containing the management domain and a VI workload domain.]

Table 1-5. Single Instance - Single Availability Zone Attributes

Data centers: Single data center.
Workload domain rack mappings:
- Workload domain in a single rack
- Workload domain spanning multiple racks
Scale:
- Up to 25 workload domains
- Up to 15 workload domains in a single vCenter Single Sign-On domain
Resilience: vSphere HA provides protection against host failures.


Single Instance - Multiple Availability Zones


You protect your VMware Cloud Foundation environment against a failure of a single hardware
fault domain by implementing multiple availability zones.

Incorporating multiple availability zones in your design can help reduce the blast radius of a
failure and can increase application availability. You usually deploy multiple availability zones
across two independent data centers.

Figure 1-5. Multiple Availability Zones in VMware Cloud Foundation
[Figure: a VCF instance with two availability zones; the management domain and VI workload domains run on vSAN stretched clusters that span Availability Zone 1 and Availability Zone 2.]


Table 1-6. Single Instance - Multiple Availability Zone Attributes

Workload domain rack mappings:
- Workload domain in a single rack
- Workload domain spanning multiple racks
- Workload domain with multiple availability zones, each zone in a single rack
- Workload domain with multiple availability zones, each zone spanning multiple racks
Stretched cluster:
- Because availability zones use VMware vSAN™ stretched clusters, the bandwidth between the zones must be at least 10 Gbps and the round-trip latency must be less than 5 ms.
- Having the management domain on a vSAN stretched cluster is a prerequisite to configure and implement vSAN stretched clusters in your VI workload domains.
- You can have up to two availability zones.
Scale:
- Up to 25 workload domains
- Up to 15 workload domains in a single vCenter Single Sign-On domain
Resilience:
- vSphere HA provides protection against host failures.
- Multiple availability zones protect against data center failures.

Multiple Instances - Single Availability Zone per Instance


You protect against a failure of a single VMware Cloud Foundation instance by implementing
multiple VMware Cloud Foundation instances.

Incorporating multiple VMware Cloud Foundation instances in your design can help reduce
the blast radius of a failure and can increase application availability across larger geographical
distances that cannot be achieved by using multiple availability zones. You usually deploy this
topology in the same data center for scale or across independent data centers for resilience.


Figure 1-6. Multiple Instance - Single Availability Zone Topology for VMware Cloud Foundation
[Figure: two VCF instances, each containing its own management domain and VI workload domain.]


Table 1-7. Multiple Instance - Single Availability Zone Attributes

Workload domain rack mapping:
- Workload domain in a single rack
- Workload domain spanning multiple racks
Multiple instances: Using multiple VMware Cloud Foundation instances can facilitate the following use cases:
- Disaster recovery across different VMware Cloud Foundation instances at longer distances
- Scale beyond the maximums of a single VMware Cloud Foundation instance
- Co-location of end users and resources
If you plan to use NSX Federation between instances, VMware Cloud Foundation supports a maximum of two connected instances with a maximum round-trip latency between them of 150 ms.
Scale:
- Up to 25 workload domains per VMware Cloud Foundation instance
- Up to 15 workload domains in a single vCenter Single Sign-On domain per instance
Resilience:
- vSphere HA provides protection against host failures.
- Multiple instances can protect against natural disasters by providing recovery locations at greater geographical distances.

Multiple Instances - Multiple Availability Zones per Instance


You protect against a failure of a single VMware Cloud Foundation instance by implementing
multiple VMware Cloud Foundation instances. Implementing multiple availability zones in an
instance protects against a failure of a single hardware fault domain.

Incorporating multiple VMware Cloud Foundation instances into your design can help reduce
the blast radius of a failure and can increase application availability across larger geographical
distances that cannot be achieved using multiple availability zones.


Figure 1-7. Multiple Instance - Multiple Availability Zones Topology for VMware Cloud Foundation
[Figure: two VCF instances, each with two availability zones; in each instance, the management domain and VI workload domains run on vSAN stretched clusters that span the zones.]

Table 1-8. Multiple Instance - Multiple Availability Zone Attributes

Workload domain rack mapping:
- Workload domain in a single rack
- Workload domain spanning multiple racks
- Workload domain with multiple availability zones, each zone in a single rack
- Workload domain with multiple availability zones, each zone spanning multiple racks
Multiple instances: Using multiple VMware Cloud Foundation instances can facilitate the following:
- Disaster recovery across different VMware Cloud Foundation instances at longer distances
- Scale beyond the maximums of a single VMware Cloud Foundation instance
- Co-location of end users and resources
If you plan to use NSX Federation between instances, VMware Cloud Foundation supports a maximum of two connected instances with a maximum round-trip latency between them of 150 ms.
Stretched cluster:
- Because availability zones use VMware vSAN™ stretched clusters, the bandwidth between the zones must be at least 10 Gbps and the round-trip latency must be less than 5 ms.
- You can have no more than two availability zones.
- Having the management domain on a vSAN stretched cluster is a prerequisite to configure and implement vSAN stretched clusters in your VI workload domains.
Scale:
- Up to 25 workload domains per VMware Cloud Foundation instance
- Up to 15 workload domains in a single vCenter Single Sign-On domain per instance
Resilience:
- vSphere HA provides protection against host failures.
- Multiple availability zones protect against data center failures.
- Multiple instances can protect against natural disasters by providing recovery locations at greater geographical distances.
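
The latency figures above (round-trip time below 5 ms between availability zones, and at most 150 ms between VMware Cloud Foundation instances connected in an NSX Federation) can be spot-checked before deployment. The following Python sketch approximates round-trip time by timing TCP handshakes to a reachable endpoint at the remote site; the target host name is a hypothetical example, and a TCP connect time is only an approximation of network round-trip time.

```python
# Sketch: approximate round-trip time to a remote site by timing TCP handshakes.
# Thresholds from this design: < 5 ms between availability zones,
# <= 150 ms between VCF instances connected in an NSX Federation.
import socket
import statistics
import time


def measure_rtt_ms(host: str, port: int = 443, samples: int = 10) -> float:
    """Return the median TCP connect time to host:port in milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        results.append((time.perf_counter() - start) * 1000)
    return statistics.median(results)


if __name__ == "__main__":
    # Hypothetical endpoint in the second availability zone or the remote VCF instance.
    rtt = measure_rtt_ms("esxi01-az2.rainpole.io")
    print(f"Median RTT: {rtt:.2f} ms")
    print("Within AZ-to-AZ budget (< 5 ms):", rtt < 5)
    print("Within NSX Federation budget (<= 150 ms):", rtt <= 150)
```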

VMware Cloud Foundation Design Blueprints


A VMware Cloud Foundation design blueprint is a collection of design requirements and
recommendations based on a chosen architecture model, workload domain type, and topology.
It can be used as a full end-to-end design for a VMware Cloud Foundation deployment.


Design Blueprint One: Rainpole


This design blueprint lists the design choices and resulting requirements and recommendations
for VMware Cloud Foundation for an organization called Rainpole.

Design Choices for Design Blueprint One


Rainpole has made the following choices for their VMware Cloud Foundation deployment:

Table 1-9. Design Choices for Design Blueprint One

Design Aspect Choice Made

Architecture Models and Workload Domain Types in Standard


VMware Cloud Foundation

Workload Domain Types Management domain and VI workload domains

Multiple Instances - Single Availability Zone per Instance Multiple Instances - Multiple Availability Zones per
instance

Leaf-Spine Physical Network Design Requirements and Leaf-Spine


Recommendations for VMware Cloud Foundation

Routing Design for VMware Cloud Foundation BGP

Chapter 6 vSAN Design for VMware Cloud Foundation vSAN

Chapter 10 vRealize Suite Lifecycle Manager Design for Included


VMware Cloud Foundation

Chapter 11 Workspace ONE Access Design for VMware Standard Workspace ONE Access
Cloud Foundation

Design Elements for Design Blueprint One


Table 1-10. External Services Design Elements

Design Area Applicable Design Elements

External Services Design Elements for VMware Cloud External Services Design Requirements
Foundation

Table 1-11. Physical Network Design Elements

Design Area Applicable Design Elements

Physical Network Design Elements for VMware Cloud Leaf-Spine Physical Network Design Requirements
Foundation
Leaf-Spine Physical Network Design Requirements for
NSX Federation

Leaf-Spine Physical Network Design Recommendations

Leaf-Spine Physical Network Design Recommendations


for Stretched Clusters

Leaf-Spine Physical Network Design Recommendations


for NSX Federation


Table 1-12. Management Domain Design Elements

Design Area Applicable Design Elements

vSAN Design Elements for VMware Cloud Foundation vSAN Design Requirements

vSAN Design Requirements for Stretched Clusters

vSAN Design Recommendations

vSAN Design Recommendations for Stretched Clusters

ESXi Design Elements for VMware Cloud Foundation ESXi Server Design Requirements

ESXi Server Design Recommendations

vCenter Server Design Elements vCenter Server Design Requirements

vCenter Server Design Recommendations

vCenter Server Design Recommendations for Stretched


Clusters

vCenter Single Sign-On Design Elements vCenter Single Sign-on Design Requirements for Multiple
vCenter - Single vCenter Single Sign-On Domain Topology

vSphere Cluster Design Elements for VMware Cloud vSphere Cluster Design Requirements
Foundation
vSphere Cluster Design Requirements for Stretched
Clusters

vSphere Cluster Design Recommendations

vSphere Cluster Design Recommendations for Stretched


Clusters

vSphere Networking Design Elements for VMware Cloud vSphere Networking Design Recommendations
Foundation

NSX Manager Design Elements NSX Manager Design Requirements

NSX Manager Design Recommendation

NSX Manager Design Recommendations for Stretched


Clusters

NSX Global Manager Design Elements NSX Global Manager Design Requirements for NSX
Federation

NSX Global Manager Design Recommendations for NSX


Federation

NSX Global Manager Design Recommendations for


Stretched Clusters

NSX Edge Design Elements NSX Edge Design Requirements

NSX Edge Design Requirements for NSX Federation

NSX Edge Design Recommendations


Table 1-12. Management Domain Design Elements (continued)

Design Area Applicable Design Elements

NSX Edge Design Recommendations for Stretched


Clusters

BGP Routing Design Elements for VMware Cloud BGP Routing Design Requirements
Foundation
BGP Routing Design Requirements for Stretched Clusters

BGP Routing Design Requirements for NSX Federation

BGP Routing Design Recommendations

BGP Routing Design Recommendations for NSX


Federation

Overlay Design Elements for VMware Cloud Foundation Overlay Design Requirements

Overlay Design Recommendations

Application Virtual Network Design Elements for VMware Application Virtual Network Design Requirements
Cloud Foundation
Application Virtual Network Design Requirements for NSX
Federation

Load Balancing Design Elements for VMware Cloud Load Balancing Design Requirements
Foundation
Load Balancing Design Requirements for NSX Federation

SDDC Manager Design Elements for VMware Cloud SDDC Manager Design Requirements
Foundation
SDDC Manager Design Recommendations

Table 1-13. VI Workload Domain Design Elements

Design Area Applicable Design Elements

vSAN Design Elements for VMware Cloud Foundation vSAN Design Requirements

vSAN Design Requirements for Stretched Clusters

vSAN Design Recommendations

vSAN Design Recommendations for Stretched Clusters

ESXi Design Elements for VMware Cloud Foundation ESXi Server Design Requirements

ESXi Server Design Recommendations

vCenter Server Design Elements vCenter Server Design Requirements

vCenter Server Design Recommendations

vCenter Server Design Recommendations for Stretched


Clusters

vCenter Single Sign-On Design Elements vCenter Single Sign-on Design Requirements for Multiple
vCenter - Single SSO Domain Topology


Table 1-13. VI Workload Domain Design Elements (continued)

Design Area Applicable Design Elements

vSphere Cluster Design Elements for VMware Cloud vSphere Cluster Design Requirements
Foundation

vSphere Cluster Design Requirements for Stretched


Clusters

vSphere Cluster Design Recommendations

vSphere Cluster Design Recommendations for Stretched


Clusters

vSphere Networking Design Elements for VMware Cloud vSphere Networking Design Recommendations
Foundation

NSX Manager Design Elements NSX Manager Design Requirements

NSX Manager Design Recommendations

NSX Manager Design Recommendations for Stretched


Clusters

NSX Global Manager Design Elements NSX Global Manager Design Requirements for NSX
Federation

NSX Global Manager Design Recommendations for NSX


Federation

NSX Global Manager Design Recommendations for


Stretched Clusters

NSX Edge Design Elements NSX Edge Design Requirements

NSX Edge Design Requirements for NSX Federation

NSX Edge Design Recommendations

NSX Edge Design Recommendations for Stretched


Clusters

BGP Routing Design Elements for VMware Cloud BGP Routing Design Requirements
Foundation
BGP Routing Design Requirements for Stretched Clusters

BGP Routing Design Requirements for NSX Federation

BGP Routing Design Recommendations

BGP Routing Design Recommendations for NSX


Federation

Overlay Design Elements for VMware Cloud Foundation Overlay Design Requirements

Overlay Design Recommendations


Table 1-14. vRealize Suite Lifecycle Manager and Workspace ONE Access Design Elements

Design Area Applicable Design Elements

vRealize Suite Lifecycle Manager Design Elements for vRealize Suite Lifecycle Manager Design Requirements
VMware Cloud Foundation
vRealize Suite Lifecycle Manager Design Requirements for
Stretched Clusters

vRealize Suite Lifecycle Manager Design Requirements for


NSX Federation

vRealize Suite Lifecycle Manager Design


Recommendations

Workspace ONE Access Design Elements for VMware Workspace ONE Access Design Requirements
Cloud Foundation
Workspace ONE Access Design Requirements for
Stretched Clusters

Workspace ONE Access Design Requirements for NSX


Federation

Workspace ONE Access Design Recommendations

Table 1-15. Life Cycle Management Design Elements

Design Area Applicable Design Elements

Life Cycle Management Design Elements for VMware Life Cycle Management Design Requirements
Cloud Foundation

Table 1-16. Account and Password Management Design Elements

Design Area Applicable Design Elements

Information Security Design Elements for VMware Cloud Account and Password Management Design
Foundation Recommendations

Table 1-17. Certificate Management Design Elements

Design Area Applicable Design Elements

Information Security Design Elements for VMware Cloud Certificate Management Design Recommendations
Foundation

2 Workload Domain to Rack Mapping in VMware Cloud Foundation
VMware Cloud Foundation distributes the functionality of the software-defined data center
(SDDC) across multiple workload domains and vSphere clusters. A workload domain, whether
it is the management workload domain or a VI workload domain, is a logical abstraction of
compute, storage, and network capacity, and consists of one or more clusters. Each cluster can
exist vertically in a single rack or be spanned horizontally across multiple racks.

The relationship between workload domains and data center racks in VMware Cloud Foundation
is not one-to-one. While a workload domain is an atomic unit of repeatable building blocks, a rack
is a unit of size. Because workload domains can have different sizes, you map workload domains
to data center racks according to your requirements and physical infrastructure constraints. You
determine the total number of racks for each cluster type according to your scalability needs.

Table 2-1. Workload Domain to Rack Configuration Options

Workload domain in a single rack
The workload domain occupies a single rack, whether it consists of a single or multiple clusters.

Workload domain spanning multiple racks
- The management domain can span multiple racks if the data center fabric can provide Layer 2 adjacency, such as BGP EVPN, between racks. If the Layer 3 fabric does not support this requirement, then the management cluster should be mapped to a single rack.
- A VI workload domain can span multiple racks. If you are using a Layer 3 network fabric, NSX Edge clusters cannot be hosted on clusters that span racks.

Workload domain with multiple availability zones, each zone in a single rack
To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between the availability zones.

Workload domain with multiple availability zones, each zone spanning multiple racks
To span multiple racks, the network fabric must support stretched Layer 2 networks between these racks. To span multiple availability zones, the network fabric must support stretched Layer 2 networks and Layer 3 routed networks between the availability zones.


Figure 2-1. Workload Domains in a Single Rack
[Figure: a single rack with two ToR switches connected to the data center fabric, hosting both the management domain cluster and a VI workload domain cluster.]


Figure 2-2. Workload Domain Spanning Multiple Racks
[Figure: two racks, each with two ToR switches connected to the data center fabric; the management domain cluster and a VI workload domain cluster span both racks.]


Figure 2-3. Workload Domains with Multiple Availability Zones, Each Zone in One Rack
[Figure: two availability zones, each a single rack with two ToR switches connected to the data center fabric; the management domain stretched cluster and a VI workload domain stretched cluster span both zones.]

3 Supported Storage Types for VMware Cloud Foundation
Storage design for VMware Cloud Foundation includes the design for principal and supplemental
storage.

Principal storage is used during the creation of a workload domain and is capable of running
workloads. Supplemental storage can be added after the creation of a workload domain and can
be capable of running workloads or be used for data at rest storage such as virtual machine
templates, backup data, and ISO images. VMware Cloud Foundation supports the following
principal and supplemental storage combinations:

Table 3-1. Supported Storage Types in VMware Cloud Foundation

vSAN Original Storage Architecture (OSA)
- Management domain: Principal
- VI workload domain: Principal

vSAN Express Storage Architecture (ESA)
- Management domain: Not Supported
- VI workload domain: Not Supported

vVols (FC, iSCSI, or NFS)
- Management domain: Supplemental
- VI workload domain: Principal; Supplemental

HCI Mesh (Remote vSAN Datastores)
- Management domain: Supplemental
- VI workload domain: Principal (additional clusters only); Supplemental

VMFS on FC
- Management domain: Supplemental
- VI workload domain: Principal; Supplemental

NFS
- Management domain: Supplemental (NFS 3 and NFS 4.1)
- VI workload domain: Principal (NFS 3); Supplemental (NFS 3 and NFS 4.1)

iSCSI
- Management domain: Supplemental
- VI workload domain: Supplemental

NVMeoF/TCP
- Management domain: Supplemental
- VI workload domain: Supplemental
Note For a consolidated VMware Cloud Foundation architecture model, the storage types that
are supported for the management domain apply.
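
When automating design validation, the support matrix in Table 3-1 can be expressed as a simple lookup. The following Python sketch encodes the table as a dictionary; the data structure and function are illustrative assumptions only and are not part of any VMware tooling.

```python
# Sketch: encode Table 3-1 as a lookup of supported storage roles per domain type.
SUPPORTED_STORAGE = {
    # storage type: (management domain roles, VI workload domain roles)
    "vSAN OSA": ({"principal"}, {"principal"}),
    "vSAN ESA": (set(), set()),  # not supported in this release
    "vVols (FC, iSCSI, or NFS)": ({"supplemental"}, {"principal", "supplemental"}),
    "HCI Mesh": ({"supplemental"}, {"principal (additional clusters only)", "supplemental"}),
    "VMFS on FC": ({"supplemental"}, {"principal", "supplemental"}),
    "NFS": ({"supplemental (NFS 3 and 4.1)"}, {"principal (NFS 3)", "supplemental (NFS 3 and 4.1)"}),
    "iSCSI": ({"supplemental"}, {"supplemental"}),
    "NVMeoF/TCP": ({"supplemental"}, {"supplemental"}),
}


def roles_for(storage_type: str, domain: str) -> set:
    """Return the supported roles for a storage type in the 'management' or 'vi' domain."""
    mgmt_roles, vi_roles = SUPPORTED_STORAGE[storage_type]
    return mgmt_roles if domain == "management" else vi_roles


if __name__ == "__main__":
    print(roles_for("VMFS on FC", "vi"))      # {'principal', 'supplemental'}
    print(roles_for("NFS", "management"))      # {'supplemental (NFS 3 and 4.1)'}
```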

4 External Services Design for VMware Cloud Foundation
IP addressing scheme, name resolution, and time synchronization must support the requirements
for VMware Cloud Foundation deployments.

Table 4-1. External Services Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-EXT-REQD-NET-001
Design Requirement: Allocate statically assigned IP addresses and host names for all workload domain components.
Justification: Ensures stability across the VMware Cloud Foundation instance, and makes it simpler to maintain, track, and implement a DNS configuration.
Implication: You must provide precise IP address management.

Requirement ID: VCF-EXT-REQD-NET-002
Design Requirement: Configure forward and reverse DNS records for all workload domain components.
Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance.
Implication: You must provide DNS records for each component.

Requirement ID: VCF-EXT-REQD-NET-003
Design Requirement: Configure time synchronization by using an internal NTP time source for all workload domain components.
Justification: Ensures that all components are synchronized with a valid time source.
Implication: An operational NTP service must be available in the environment.

Requirement ID: VCF-EXT-REQD-NET-004
Design Requirement: Set the NTP service for all workload domain components to start automatically.
Justification: Ensures that the NTP service remains synchronized after you restart a component.
Implication: None.
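
Requirement VCF-EXT-REQD-NET-002 (forward and reverse DNS records) can be verified before bring-up from any machine that uses the same DNS servers. The following Python sketch checks that each planned fully qualified domain name resolves to its planned IP address and that the reverse record points back to the same name; the host names and addresses shown are illustrative assumptions.

```python
# Sketch: verify forward and reverse DNS records for planned VCF components
# (requirement VCF-EXT-REQD-NET-002). Host names and IPs below are examples only.
import socket

PLANNED_RECORDS = {
    "sddc-manager.rainpole.io": "172.16.11.60",
    "vcenter-mgmt.rainpole.io": "172.16.11.62",
    "nsx-mgmt.rainpole.io": "172.16.11.65",
}

for fqdn, expected_ip in PLANNED_RECORDS.items():
    try:
        forward = socket.gethostbyname(fqdn)           # forward (A) record
        reverse = socket.gethostbyaddr(expected_ip)[0]  # reverse (PTR) record
    except socket.gaierror as err:
        print(f"{fqdn}: forward lookup failed ({err})")
        continue
    except socket.herror as err:
        print(f"{expected_ip}: reverse lookup failed ({err})")
        continue
    ok = forward == expected_ip and reverse.lower() == fqdn.lower()
    print(f"{fqdn}: forward={forward}, reverse={reverse}, {'OK' if ok else 'MISMATCH'}")
```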

5 Physical Network Infrastructure Design for VMware Cloud Foundation
Design of the physical data center network includes defining the network topology for
connecting physical switches and ESXi hosts, determining switch port settings for VLANs and
link aggregation, and designing routing.

A software-defined network (SDN) both integrates with and uses components of the physical
data center. SDN integrates with your physical network to support east-west transit in the data
center and north-south transit to and from the SDDC networks.

Several typical data center network deployment topologies exist:

- Core-Aggregation-Access
- Leaf-Spine
- Hardware SDN

Note Leaf-Spine is the default data center network deployment topology used for VMware Cloud Foundation.

Read the following topics next:

- VLANs and Subnets for VMware Cloud Foundation
- Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation

VLANs and Subnets for VMware Cloud Foundation


Configure your VLANs and subnets according to the guidelines and requirements for VMware
Cloud Foundation.

When designing the VLAN and subnet configuration for your VMware Cloud Foundation
deployment, consider the following guidelines:


Table 5-1. VLAN and Subnet Guidelines for VMware Cloud Foundation

All deployment topologies:
- Ensure your subnets are scaled appropriately to allow for expansion, as expanding at a later time can be disruptive.
- Use the IP address of the floating interface for Virtual Router Redundancy Protocol (VRRP) or Hot Standby Routing Protocol (HSRP) as the gateway.
- Use the RFC 1918 IPv4 address space for these subnets and allocate one octet by VMware Cloud Foundation instance and another octet by function (see the addressing sketch after this table).

Multiple availability zones:
- For network segments which are stretched between availability zones, the VLAN ID must meet the following requirements:
  - Be the same in both availability zones with the same Layer 3 network segments.
  - Have a Layer 3 gateway at the first hop that is highly available such that it tolerates the failure of an entire availability zone.
- For network segments of the same type which are not stretched between availability zones, the VLAN ID can be the same or different between the zones.

NSX Federation between multiple VMware Cloud Foundation instances:
- An RTEP network segment should have a VLAN ID and Layer 3 range that are specific to the VMware Cloud Foundation instance.
- In a VMware Cloud Foundation instance with multiple availability zones, the RTEP network segment must be stretched between the zones and assigned the same VLAN ID and IP range.
- All Edge RTEP networks must be reachable from each other.

When deploying VLANs and subnets for VMware Cloud Foundation, they must conform to the
following requirements according to the VMware Cloud Foundation topology:

Table 5-2. VLANs and Subnets for VMware Cloud Foundation

Management - first availability zone
- Instances with a single availability zone: Required. Highly available gateway within the instance.
- Instances with multiple availability zones: Required. Must be stretched within the instance. Highly available gateway across availability zones within the instance.

vSphere vMotion - first availability zone
- Instances with a single availability zone: Required. Highly available gateway within the instance.
- Instances with multiple availability zones: Required. Highly available gateway in the first availability zone within the instance.

vSAN - first availability zone
- Instances with a single availability zone: Required. Highly available gateway within the instance.
- Instances with multiple availability zones: Required. Highly available gateway in the first availability zone within the instance.

Host overlay - first availability zone
- Instances with a single availability zone: Required. Highly available gateway within the instance.
- Instances with multiple availability zones: Required. Highly available gateway in the first availability zone within the instance.

Uplink01
- Instances with a single availability zone: Required. Gateway optional.
- Instances with multiple availability zones: Required. Gateway optional. Must be stretched within the instance.

Uplink02
- Instances with a single availability zone: Required. Gateway optional.
- Instances with multiple availability zones: Required. Gateway optional. Must be stretched within the instance.

Edge overlay
- Instances with a single availability zone: Required. Highly available gateway within the instance.
- Instances with multiple availability zones: Required. Must be stretched within the instance. Highly available gateway across availability zones within the instance.

Management - second availability zone
- Instances with a single availability zone: Not required.
- Instances with multiple availability zones: Required. Highly available gateway in the second availability zone within the instance.

vSphere vMotion - second availability zone
- Instances with a single availability zone: Not required.
- Instances with multiple availability zones: Required. Highly available gateway in the second availability zone within the instance.

vSAN - second availability zone
- Instances with a single availability zone: Not required.
- Instances with multiple availability zones: Required. Highly available gateway in the second availability zone within the instance.

Host overlay - second availability zone
- Instances with a single availability zone: Not required.
- Instances with multiple availability zones: Required. Highly available gateway in the second availability zone within the instance.

Edge RTEP
- Instances with a single availability zone: Required for NSX Federation only. Highly available gateway within the instance.
- Instances with multiple availability zones: Required for NSX Federation only. Must be stretched within the instance. Highly available gateway across availability zones within the instance.

Management and Witness - witness appliance at a third location
- Instances with a single availability zone: Not required.
- Instances with multiple availability zones: Required. Highly available gateway at the witness location.


Leaf-Spine Physical Network Design Requirements and Recommendations for VMware Cloud Foundation
Leaf-Spine is the default data center network deployment topology used for VMware Cloud
Foundation. Consider network bandwidth, access port configuration, jumbo frames, routing
configuration, and DHCP availability for NSX in a deployment with a single or multiple VMware
Cloud Foundation instances.

Leaf-Spine Physical Network Logical Design


Each ESXi host is connected redundantly to the top-of-rack (ToR) switches of the SDDC network
fabric by two 25-GbE ports. The ToR switches are configured to provide all necessary VLANs
using an 802.1Q trunk. These redundant connections use features in vSphere Distributed Switch
and NSX to guarantee that no physical interface is overrun and available redundant paths are
used.

Figure 5-1. Leaf-Spine Physical Network Logical Design
[Figure: ESXi hosts connected redundantly to pairs of ToR (leaf) switches, which connect to the spine layer of the data center fabric.]

Leaf-Spine Physical Network Design Requirements and Recommendations
The requirements and recommendations for the leaf-spine network configuration determine the
physical layout and use of VLANs. They also include requirements and recommendations on
jumbo frames, and on network-related requirements such as DNS and NTP.


Table 5-3. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-NET-REQD-CFG-001
Design Requirement: Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks.
Justification:
- Simplifies configuration of top of rack switches.
- Teaming options available with vSphere Distributed Switch provide load balancing and failover.
- EtherChannel implementations might have vendor-specific limitations.
Implication: None.

Requirement ID: VCF-NET-REQD-CFG-002
Design Requirement: Use VLANs to separate physical network functions.
Justification:
- Supports physical network connectivity without requiring many NICs.
- Isolates the different network functions in the SDDC so that you can have differentiated services and prioritized traffic as needed.
Implication: Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts.

Requirement ID: VCF-NET-REQD-CFG-003
Design Requirement: Configure the VLANs as members of a 802.1Q trunk.
Justification: All VLANs become available on the same physical network adapters on the ESXi hosts.
Implication: Optionally, the management VLAN can act as the native VLAN.

Requirement ID: VCF-NET-REQD-CFG-004
Design Requirement: Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:
- Overlay (Geneve)
- vSAN
- vSphere vMotion
Justification:
- Improves traffic throughput.
- Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.
- Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones.
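
Requirement VCF-NET-REQD-CFG-004 implies that the configured MTU must hold end to end. One way to spot-check this from a Linux jump host is to send non-fragmentable ICMP packets sized to the target MTU; on an ESXi host the equivalent test uses vmkping with the -d (do not fragment) and -s (size) options. The following Python sketch wraps the Linux iputils ping command; a payload of 8,972 bytes corresponds to a 9,000-byte IP packet (8,972 bytes of ICMP payload plus 8 bytes of ICMP header and 20 bytes of IP header). The target address is an illustrative assumption.

```python
# Sketch: verify that a jumbo-frame MTU holds across the network path by sending
# non-fragmentable pings from a Linux host (iputils ping). For a 9,000-byte MTU,
# the ICMP payload is 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
import subprocess

TARGET = "172.16.13.101"   # example vSAN or vMotion VMkernel address
MTU = 9000
PAYLOAD = MTU - 20 - 8

result = subprocess.run(
    ["ping", "-M", "do", "-c", "3", "-s", str(PAYLOAD), TARGET],
    capture_output=True,
    text=True,
)
print(result.stdout)
print("MTU", MTU, "OK" if result.returncode == 0 else "NOT supported end to end")
```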


Table 5-4. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID: VCF-NET-REQD-CFG-005
Design Requirement: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for the following traffic types:
- NSX Edge RTEP
Justification:
- Jumbo frames are not required between VMware Cloud Foundation instances. However, increased MTU improves traffic throughput.
- Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers to support the same MTU packet size.

Requirement ID: VCF-NET-REQD-CFG-006
Design Requirement: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms.
Justification: A latency lower than 150 ms is required for the following features:
- Cross vCenter Server vMotion
- NSX Federation
Implication: None.

Requirement ID: VCF-NET-REQD-CFG-007
Design Requirement: Provide a routed connection between the NSX Manager clusters in VMware Cloud Foundation instances that are connected in an NSX Federation.
Justification: Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters.
Implication: You must assign unique routable IP addresses for each fault domain.


Table 5-5. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-NET-RCMD-CFG-001
Design Recommendation: Use two ToR switches for each rack.
Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
Implication: Requires two ToR switches per rack, which might increase costs.

Recommendation ID: VCF-NET-RCMD-CFG-002
Design Recommendation: Implement the following physical network architecture:
- One 25-GbE (10-GbE minimum) port on each ToR switch for ESXi host uplinks (host to ToR).
- Layer 3 device that supports BGP.
Justification:
- Provides availability during a switch failure.
- Provides support for the BGP dynamic routing protocol.
Implication:
- Might limit the hardware choices.
- Requires dynamic routing protocol configuration in the physical network.

Recommendation ID: VCF-NET-RCMD-CFG-003
Design Recommendation: Use a physical network that is configured for BGP routing adjacency.
Justification:
- Supports design flexibility for routing multi-site and multi-tenancy workloads.
- BGP is the only dynamic routing protocol that is supported for NSX Federation.
- Supports failover between ECMP Edge uplinks.
Implication: Requires BGP configuration in the physical network.

Recommendation ID: VCF-NET-RCMD-CFG-004
Design Recommendation: Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use dynamic IP allocation rather than static IP pool addressing.
Justification:
- Ensures that endpoints have a persistent TEP IP address.
- In VMware Cloud Foundation, TEP IP assignment over DHCP is required for advanced deployment topologies such as a management domain with multiple availability zones.
- Static pools in NSX Manager support only a single availability zone.
Implication: Requires DHCP server availability on the TEP VLAN.

Recommendation ID: VCF-NET-RCMD-CFG-005
Design Recommendation: Set the lease duration for the DHCP scope for the host overlay network to at least 7 days.
Justification: The IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server.
- Host overlay VMkernel ports do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.
- DHCP simplifies the configuration of a default gateway for host overlay VMkernel ports if hosts within the same cluster are on separate Layer 2 domains.
Implication: Requires configuration and management of a DHCP server.

Recommendation ID: VCF-NET-RCMD-CFG-006
Design Recommendation: Designate the trunk ports connected to ESXi NICs as trunk PortFast.
Justification: Reduces the time to transition ports over to the forwarding state.
Implication: Although this design does not use the STP, switches usually have STP configured by default.

Recommendation ID: VCF-NET-RCMD-CFG-007
Design Recommendation: Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks:
- Management
- Edge overlay
Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway will cause disruption in the traffic in the SDN setup.
Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.


Table 5-6. Leaf-Spine Physical Network Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-NET-RCMD-CFG-008
Design Recommendation: Provide high availability for the DHCP server or DHCP Helper used for VLANs requiring DHCP services that are stretched between availability zones.
Justification: Ensures that a failover operation of a single availability zone will not impact DHCP availability.
Implication: Requires configuration of a highly available DHCP server.

Table 5-7. Leaf-Spine Physical Network Design Recommendations for NSX Federation in VMware Cloud Foundation

VCF-NET-RCMD-CFG-009
Design Recommendation: Provide BGP routing between all VMware Cloud Foundation instances that are connected in an NSX Federation setup.
Justification: BGP is the supported routing protocol for NSX Federation.
Implication: None.

6 vSAN Design for VMware Cloud Foundation

VMware Cloud Foundation uses VMware vSAN as the principal storage type for the management domain, and vSAN is the recommended principal storage for VI workload domains. You must determine the size of the compute and storage resources for the vSAN storage, and the configuration of the network carrying vSAN traffic. For multiple availability zones, you extend the resource size and determine the configuration of the vSAN witness host.

Read the following topics next:

n Logical Design for vSAN for VMware Cloud Foundation

n Hardware Configuration for vSAN for VMware Cloud Foundation

n Network Design for vSAN for VMware Cloud Foundation

n vSAN Witness Design for VMware Cloud Foundation

n vSAN Design Requirements and Recommendations for VMware Cloud Foundation

Logical Design for vSAN for VMware Cloud Foundation


vSAN is a cost-efficient storage technology that provides a simple storage management user
experience, and permits a fully automated initial deployment of VMware Cloud Foundation. It also
provides support for future storage expansion and implementation of vSAN stretched clusters in
a workload domain.


Figure 6-1. vSAN Logical Design for VMware Cloud Foundation
(The figure shows a vSphere cluster of ESXi hosts presenting a principal vSAN datastore, and optional supplemental datastores, to virtual machines. vSAN provides software-defined storage through policy-based storage management, virtualized data services, and hypervisor storage abstraction over physical flash, NVMe, and SATA devices.)

Table 6-1. vSAN Logical Design

Management domain (default cluster)
- VMware Cloud Foundation instances with a single availability zone: four nodes minimum.
- VMware Cloud Foundation instances with multiple availability zones: must be stretched first; eight nodes minimum, equally distributed across availability zones; vSAN witness appliance in a third fault domain.

Management domain (additional clusters)
- VMware Cloud Foundation instances with a single availability zone: three nodes minimum; four nodes minimum is recommended for higher availability.
- VMware Cloud Foundation instances with multiple availability zones: six nodes minimum, equally distributed across availability zones; eight nodes minimum is recommended for higher availability; vSAN witness appliance in a third fault domain.

VI workload domain (all clusters)
- VMware Cloud Foundation instances with a single availability zone: three nodes minimum; four nodes minimum is recommended for higher availability.
- VMware Cloud Foundation instances with multiple availability zones: six nodes minimum, equally distributed across availability zones; eight nodes minimum is recommended for higher availability; vSAN witness appliance in a third fault domain.

Hardware Configuration for vSAN for VMware Cloud Foundation

Determine the type of the capacity and caching devices, and the storage controllers for
performance and stability according to the requirements of the management components of
VMware Cloud Foundation.

vSAN Physical Requirements and Dependencies


vSAN has the following requirements and options:

n vSAN Original Storage Architecture (OSA) as hybrid storage or all-flash storage.

n A vSAN hybrid storage configuration requires both magnetic devices and flash caching
devices. The cache tier must be at least 10% of the size of the capacity tier.

n An all-flash vSAN configuration requires flash devices for both the caching and capacity
tiers.

n VMware vSAN ReadyNodes or hardware from the VMware Compatibility Guide to build
your own.


n vSAN Express Storage Architecture (ESA)

n All storage devices claimed by vSAN contribute to capacity and performance. Each host's
storage devices claimed by vSAN form a storage pool. The storage pool represents the
amount of caching and capacity provided by the host to the vSAN datastore.

n ESXi hosts must be on the vSAN ESA Ready Node HCL with a minimum of 512 GB RAM
per host.

Note vSAN Express Storage Architecture is not supported with VMware Cloud Foundation.

For best practices, capacity considerations, and general recommendations about designing and
sizing a vSAN cluster, see the VMware vSAN Design and Sizing Guide.

Network Design for vSAN for VMware Cloud Foundation


In the network design for vSAN in VMware Cloud Foundation, you determine the network
configuration for vSAN traffic.

Consider the overall traffic bandwidth and decide how to isolate storage traffic.

n Consider how much vSAN data traffic is running between ESXi hosts.

n The amount of storage traffic depends on the number of VMs that are running in the cluster,
and on how write-intensive the I/O process is for the applications running in the VMs.

For information on the physical network setup for vSAN traffic, and other system traffic, see
Chapter 5 Physical Network Infrastructure Design for VMware Cloud Foundation.

For information on the virtual network setup for vSAN traffic, and other system traffic, see
Logical vSphere Networking Design for VMware Cloud Foundation.

The vSAN network design includes these components.

Table 6-2. Components of vSAN Network Design

Design Component Description

Physical NIC speed: For best and most predictable performance (IOPS) for the environment, this design uses a minimum of a 10-GbE connection, with 25-GbE recommended, for use with vSAN all-flash configurations.

VMkernel network adapters for vSAN: The vSAN VMkernel network adapter on each ESXi host is created when you enable vSAN on the cluster. Connect the vSAN VMkernel network adapters on all ESXi hosts in a cluster to a dedicated distributed port group, including ESXi hosts that are not contributing storage resources to the cluster.


VLAN: All storage traffic should be isolated on its own VLAN. When a design uses multiple vSAN clusters, each cluster should use a dedicated VLAN or segment for its traffic. This approach increases security, prevents interference between clusters, and helps with troubleshooting cluster configuration.

Jumbo frames: vSAN traffic can be handled by using jumbo frames. Use jumbo frames for vSAN traffic only if the physical environment is already configured to support them, they are part of the existing design, or if the underlying configuration does not create a significant amount of added complexity to the design.
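If you do adopt jumbo frames for vSAN, the MTU is raised on the vSphere Distributed Switch that carries the vSAN port group. The following sketch uses pyVmomi to set an MTU of 9000; the vCenter address, credentials, and switch name are hypothetical placeholders, and in a VMware Cloud Foundation environment this setting is normally applied by the deployment automation rather than by hand.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_dvs_mtu(si, dvs_name: str, mtu: int = 9000):
    """Set the maximum MTU on a vSphere Distributed Switch, for example for vSAN jumbo frames."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == dvs_name)
    view.Destroy()

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion  # required for a reconfigure operation
    spec.maxMtu = mtu
    return dvs.ReconfigureDvs_Task(spec)

# Hypothetical connection details; replace with real values before use.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
set_dvs_mtu(si, dvs_name="wld01-cl01-vds01", mtu=9000)
Disconnect(si)
```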


vSAN Witness Design for VMware Cloud Foundation


The vSAN witness appliance is a specialized ESXi installation that provides quorum and
tiebreaker services for stretched clusters in VMware Cloud Foundation.

vSAN Witness Deployment Specification


When using vSAN in a stretched cluster configuration, you must deploy a witness ESXi host. This
appliance must be deployed in a third location that is not local to the ESXi hosts on either side of
the stretched cluster.

Table 6-3. vSAN Witness Appliance Sizing Considerations

Tiny: supports a maximum of 10 virtual machines and 750 witness components.
Medium: supports a maximum of 500 virtual machines and 21,000 witness components.
Large: supports a maximum of 500 virtual machines and 64,000 witness components.

vSAN Witness Network Design


When using two availability zones, connect the vSAN witness appliance to the workload domain vCenter Server so that you can perform the initial setup of the stretched cluster and have workloads fail over between the zones.


VMware Cloud Foundation uses vSAN witness traffic separation where you can use a VMkernel
adapter for vSAN witness traffic that is different from the adapter for vSAN data traffic. In this
design, you configure vSAN witness traffic in the following way:

n On each ESXi host in both availability zones, place the vSAN witness traffic on the
management VMkernel adapter.

n On the vSAN witness appliance, use the same VMkernel adapter for both management and
witness traffic.

For information about vSAN witness traffic separation, see vSAN Stretched Cluster Guide on
VMware Cloud Platform Tech Zone.

Management network

Routed to the management networks in both availability zones. Connect the first VMkernel
adapter of the vSAN witness appliance to this network. The second VMkernel adapter on the
vSAN witness appliance is not used.

Place the following traffic on this network:

n Management traffic

n vSAN witness traffic

Figure 6-2. vSAN Witness Network Design
(The figure shows a witness site hosting the witness appliance on a witness management network behind a physical upstream router. Availability zone 1 and availability zone 2 each contain four ESXi hosts behind their own physical upstream routers, and the management domain vCenter Server runs in availability zone 1. The witness management network is routed to the management domain management network and the VI workload domain management networks in both availability zones; each availability zone also has its own VI workload domain vSAN network.)


vSAN Design Requirements and Recommendations for VMware Cloud Foundation

Consider the requirements for using vSAN storage for standard and stretched clusters in VMware Cloud Foundation, such as required capacity, number of hosts, number of disk groups, and storage policy, as well as the best practices for having vSAN operate in an optimal way.

For related vSphere cluster requirements and recommendations, see vSphere Cluster Design
Requirements and Recommendations for VMware Cloud Foundation.

vSAN Design Requirements


You must meet the following design requirements for standard and stretched clusters in your
vSAN design for VMware Cloud Foundation.

Table 6-4. vSAN Design Requirements for VMware Cloud Foundation

VCF-VSAN-REQD-CFG-001
Design Requirement: Provide sufficient raw capacity to meet the initial needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present to create the workload domain cluster.
Implication: None.

VCF-VSAN-REQD-CFG-002
Design Requirement: Provide at least the required minimum number of hosts according to the cluster type.
Justification: Satisfies the requirements for storage availability.
Implication: None.

Table 6-5. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-REQD-CFG-003
Design Requirement: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-REQD-CFG-004
Design Requirement: Configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.
Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure a copy of vSAN data is always available even when an availability zone goes offline.
Implication: You must provide additional raw storage when the site mirroring - stretched cluster option is selected and fault domains are enabled.


VCF-VSAN-WTN-REQD-CFG-001
Design Requirement: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
Implication: You must provide a third physically-separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.

VCF-VSAN-WTN-REQD-CFG-002
Design Requirement: Deploy a witness appliance of a size that corresponds to the required capacity of the cluster.
Justification: Ensures the witness appliance is sized to support the projected workload storage consumption.
Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

VCF-VSAN-WTN-REQD-CFG-003
Design Requirement: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-REQD-CFG-004
Design Requirement: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
Implication: Requires precise IP address management.

VCF-VSAN-WTN-REQD-CFG-005
Design Requirement: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
Implication: You must provide DNS records for the vSAN witness appliance.

VCF-VSAN-WTN-REQD-CFG-006
Design Requirement: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
Justification: Prevents any failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance and the ESXi hosts in both availability zones and workload domain vCenter Server.
Implication: An operational NTP service must be available in the environment. All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.

VCF-VSAN-WTN-REQD-CFG-007
Design Requirement: Configure an individual vSAN storage policy for each stretched cluster.
Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
Implication: You must configure additional vSAN storage policies.
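Requirement VCF-VSAN-WTN-REQD-CFG-005 can be verified before the witness appliance is added to the environment. The following Python sketch resolves a witness FQDN and confirms that the reverse record points back to the same name; the FQDN and IP address shown are hypothetical placeholders.

```python
import socket

def check_witness_dns(fqdn: str, expected_ip: str) -> bool:
    """Verify forward and reverse DNS records for a vSAN witness appliance."""
    forward_ip = socket.gethostbyname(fqdn)             # forward (A) lookup
    reverse_name = socket.gethostbyaddr(forward_ip)[0]  # reverse (PTR) lookup
    ok = forward_ip == expected_ip and reverse_name.lower().rstrip(".") == fqdn.lower()
    print(f"{fqdn} -> {forward_ip} -> {reverse_name}: {'OK' if ok else 'MISMATCH'}")
    return ok

# Hypothetical witness FQDN and IP address.
check_witness_dns("wld01-vsan-witness.example.local", "172.17.11.201")
```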


vSAN Design Recommendations


In your vSAN design for VMware Cloud Foundation, you can apply certain best practices for standard and stretched clusters.

Table 6-6. vSAN Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-001
Design Recommendation: Ensure that the storage I/O controller that is running the vSAN disk groups is capable and has a minimum queue depth of 256 set.
Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the right queue depths for vSAN.
Implication: Limits the number of compatible I/O controllers that can be used for storage.

VCF-VSAN-RCMD-CFG-002
Design Recommendation: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
Justification: Running non-vSAN disks, for example, VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.

VCF-VSAN-RCMD-CFG-003
Design Recommendation: Configure vSAN in all-flash configuration in the default workload domain cluster.
Justification: Meets the performance needs of the default workload domain cluster.
Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.

VCF-VSAN-RCMD-CFG-004
Design Recommendation: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
Implication: None.

VCF-VSAN-RCMD-CFG-005
Design Recommendation: On the vSAN datastore, ensure that at least 30% of free space is always available.
Justification: This free space, called slack space, is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
Implication: Increases the amount of available storage needed.

VCF-VSAN-RCMD-CFG-006
Design Recommendation: Configure vSAN with a minimum of two disk groups per ESXi host.
Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
Implication: Using multiple disk groups requires more disks in each ESXi host.


VCF-VSAN-RCMD-CFG-007
Design Recommendation: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
Justification: Provides enough cache for both hybrid or all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
Implication: Using larger flash disks can increase the initial host cost.

VCF-VSAN-RCMD-CFG-008
Design Recommendation: Use the default VMware vSAN storage policy.
Justification: Provides the level of redundancy that is needed in the workload domain cluster, and the level of performance that is enough for the individual workloads.
Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-RCMD-CFG-009
Design Recommendation: Leave the default virtual machine swap file as a sparse object on vSAN.
Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
Implication: None.

VCF-VSAN-RCMD-CFG-010
Design Recommendation: Use the existing vSphere Distributed Switch instance in the workload domain cluster.
Justification: Provides guaranteed performance for vSAN traffic in a connection-free network by using existing networking components.
Implication: All traffic paths are shared over common uplinks.

VCF-VSAN-RCMD-CFG-011
Design Recommendation: Configure jumbo frames on the VLAN for vSAN traffic.
Justification: Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic, and reduces the CPU overhead resulting from high network usage.
Implication: Every device in the network must support jumbo frames.
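Recommendation VCF-VSAN-RCMD-CFG-005 reduces to a simple capacity check. The sketch below, using hypothetical capacity figures, reports whether a vSAN datastore still keeps the recommended 30% of its capacity free as slack space.

```python
def vsan_slack_space_ok(capacity_gb: float, used_gb: float, min_free_ratio: float = 0.30) -> bool:
    """Check that at least min_free_ratio of the vSAN datastore remains free (slack space)."""
    free_ratio = (capacity_gb - used_gb) / capacity_gb
    print(f"Free space: {free_ratio:.0%} (recommended minimum {min_free_ratio:.0%})")
    return free_ratio >= min_free_ratio

# Hypothetical datastore figures: 40 TB raw capacity, 26 TB used.
vsan_slack_space_ok(capacity_gb=40_960, used_gb=26_624)
```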


Table 6-7. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-WTN-RCMD-CFG-001
Design Recommendation: Configure the vSAN witness appliance to use the first VMkernel adapter, that is, the management interface, for vSAN witness traffic.
Justification: Removes the requirement to have static routes on the witness appliance as witness traffic is routed over the management network.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-RCMD-CFG-002
Design Recommendation: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation provides the following benefits:
- Removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site.
- Removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
Implication: The management networks in both availability zones must be routed to the management network in the witness site.

7 vSphere Design for VMware Cloud Foundation

The vSphere design includes determining the configuration of the vCenter Server instances, ESXi hosts, vSphere clusters, and vSphere networking for a VMware Cloud Foundation environment.

Read the following topics next:

n ESXi Design for VMware Cloud Foundation

n vCenter Server Design for VMware Cloud Foundation

n vSphere Cluster Design for VMware Cloud Foundation

n vSphere Networking Design for VMware Cloud Foundation

ESXi Design for VMware Cloud Foundation


In the design of the ESXi host configuration for your VMware Cloud Foundation environment,
consider the resources, networking, and security policies that are required to support the virtual
machines in each workload domain cluster.

n Logical Design for ESXi for VMware Cloud Foundation


In the logical design for ESXi, you determine the high-level integration of the ESXi hosts
with the other components of the VMware Cloud Foundation instance for providing virtual
infrastructure to management and workload components.

n Sizing Considerations for ESXi for VMware Cloud Foundation


You decide on the number of ESXi hosts per cluster and the number of physical disks per
ESXi host.

n ESXi Design Requirements and Recommendations for VMware Cloud Foundation


The requirements for the ESXi hosts in a workload domain in VMware Cloud Foundation
are related to the system requirements of the workloads hosted in the domain. The
ESXi requirements include number, server configuration, amount of hardware resources,
networking, and certificate management. Similar best practices help you design optimal
environment operation.


Logical Design for ESXi for VMware Cloud Foundation


In the logical design for ESXi, you determine the high-level integration of the ESXi hosts with the
other components of the VMware Cloud Foundation instance for providing virtual infrastructure
to management and workload components.

To provide the resources required to run the management and workload components of the
VMware Cloud Foundation instance, each ESXi host consists of the following elements:

n CPU and memory

n Storage devices

n Out of band management interface

n Network interfaces

Figure 7-1. ESXi Logical Design for VMware Cloud Foundation
(The figure shows the elements of an ESXi host: compute (CPU and memory), storage (non-local SAN/NAS storage and local vSAN storage), and network (NIC 1 and NIC 2 uplinks plus an out-of-band management uplink).)

Sizing Considerations for ESXi for VMware Cloud Foundation


You decide on the number of ESXi hosts per cluster and the number of physical disks per ESXi
host.

For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.


The configuration and assembly process for each system should be standardized, with all
components installed in the same manner on each ESXi host. Because standardization of
the physical configuration of the ESXi hosts removes variability, the infrastructure is easily
managed and supported. ESXi hosts are deployed with identical configuration across all cluster
members, including storage and networking configurations. For example, consistent PCIe card
slot placement, especially for network interface controllers, is essential for accurate mapping
of physical network interface controllers to virtual network resources. By using identical
configurations, you have an even balance of virtual machine storage components across storage
and compute resources.


Table 7-1. ESXi Server Sizing Considerations by Hardware Element

CPU
- Total CPU requirements for the workloads that are running in the cluster.
- Host failure and maintenance scenarios. Keep the CPU overcommitment ratio (vCPU-to-pCPU) less than or equal to 2:1 for the management domain and less than or equal to 8:1 for VI workload domains.
- Additional third-party management components.
- Size your CPU according to the number of physical cores, not the logical cores. Simultaneous multithreading (SMT) technologies in CPUs, such as hyper-threading in Intel CPUs, improve CPU performance by allowing multiple threads to run in parallel on the same CPU core. Although a single CPU core can be viewed as two logical cores, the performance enhancement will not be equivalent to 100% more CPU power. It will also differ from one environment to another.

Memory
- Total memory requirements for the workloads that are running in the cluster.
- When sizing memory for the ESXi hosts in a cluster, set the admission control setting to N+1, which reserves the resources of one host for failover or maintenance.
- Number of vSAN disk groups and disks on an ESXi host. To support the maximum number of disk groups, you must provide 32 GB of RAM. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN in the vSphere documentation.

Storage
- Use a high-endurance device such as a hard drive or SSD for the boot device.
- Use a 128-GB boot device to maximize the space available for ESX-OSData.
- Provide at least one 600-GB cache disk for vSAN OSA.
- Use a minimum of two capacity disks for vSAN OSA.
- Use hosts with a homogeneous configuration.
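To illustrate the CPU guidance in the table above, the following sketch estimates the physical cores each host needs from the total workload vCPU count, the vCPU-to-pCPU ratio (2:1 for the management domain, 8:1 for VI workload domains), and one host reserved for failure or maintenance. The input values are hypothetical, and SMT logical cores are deliberately ignored.

```python
import math

def required_cores_per_host(total_vcpus: int, ratio: float, hosts: int, reserved_hosts: int = 1) -> int:
    """Physical cores each host needs so the cluster still fits with reserved_hosts out of service."""
    usable_hosts = hosts - reserved_hosts            # N+1 style reservation
    total_cores = math.ceil(total_vcpus / ratio)     # count physical cores only, not SMT threads
    return math.ceil(total_cores / usable_hosts)

# Hypothetical management domain: 320 workload vCPUs, 4 hosts, 2:1 ratio -> 54 cores per host.
print(required_cores_per_host(total_vcpus=320, ratio=2.0, hosts=4))
# Hypothetical VI workload domain: 2,000 workload vCPUs, 8 hosts, 8:1 ratio -> 36 cores per host.
print(required_cores_per_host(total_vcpus=2000, ratio=8.0, hosts=8))
```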

ESXi Design Requirements and Recommendations for VMware Cloud Foundation

The requirements for the ESXi hosts in a workload domain in VMware Cloud Foundation
are related to the system requirements of the workloads hosted in the domain. The ESXi
requirements include number, server configuration, amount of hardware resources, networking,


and certificate management. Similar best practices help you design optimal environment
operation.

ESXi Server Design Requirements


You must meet the following design requirements for the ESXi hosts in a workload domain in a VMware Cloud Foundation deployment.

Table 7-2. Design Requirements for ESXi Server Hardware

VCF-ESX-REQD-CFG-001
Design Requirement: Install no less than the minimum number of ESXi hosts required for the cluster type being deployed.
Justification:
- Ensures availability requirements are met.
- If one of the hosts is not available because of a failure or maintenance event, the CPU overcommitment ratio becomes 2:1.
Implication: None.

VCF-ESX-REQD-CFG-002
Design Requirement: Ensure each ESXi host matches the required CPU, memory and storage specification.
Justification: Ensures workloads will run without contention even during failure and maintenance conditions.
Implication: Assemble the server specification and number according to the sizing in VMware Cloud Foundation Planning and Preparation Workbook which is based on projected deployment size.

VCF-ESX-REQD-NET-001
Design Requirement: Place the ESXi hosts in each management domain cluster on the VLAN-backed management network segment for vCenter Server, and management components for NSX.
Justification: Reduces the number of VLANs needed because a single VLAN can be allocated to both the ESXi hosts, vCenter Server, and management components for NSX.
Implication: Separation of the physical VLAN between ESXi hosts and other management components for security reasons is missing.

VCF-ESX-REQD-NET-002
Design Requirement: Place the ESXi hosts in each VI workload domain cluster on a VLAN-backed management network segment other than that used by the management domain.
Justification: Physical VLAN security separation between VI workload domain ESXi hosts and other management components in the management domain is achieved.
Implication: A new VLAN and a new subnet are required for the VI workload domain management network.

VCF-ESX-REQD-SEC-001
Design Requirement: Regenerate the certificate of each ESXi host after assigning the host an FQDN.
Justification: Establishes a secure connection with VMware Cloud Builder during the deployment of a workload domain and prevents man-in-the-middle (MiTM) attacks.
Implication: You must manually regenerate the certificates of the ESXi hosts before the deployment of a workload domain.


ESXi Server Design Recommendations


In your ESXi host design for VMware Cloud Foundation, you can apply certain best practices.

Table 7-3. Design Recommendations for ESXi Server Hardware

VCF-ESX-RCMD-CFG-001
Recommendation: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.
Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List aligned to the ESXi version specified in VMware Cloud Foundation 5.0 Release Notes.

VCF-ESX-RCMD-CFG-002
Recommendation: Allocate hosts with uniform configuration across the default management vSphere cluster.
Justification: A balanced cluster has these advantages:
- Predictable performance even during hardware failures
- Minimal impact of resynchronization or rebuild operations on performance
Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per cluster basis.

VCF-ESX-RCMD-CFG-003
Recommendation: When sizing CPU, do not consider multithreading technology and associated performance gains.
Justification: Although multithreading technologies increase CPU performance, the performance gain depends on running workloads and differs from one case to another.
Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

VCF-ESX-RCMD-CFG-004
Recommendation: Install and configure all ESXi hosts in the default management cluster to boot using a 128 GB device or greater.
Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
Implication: None.


VCF-ESX-RCMD-CFG-005
Recommendation: Use the default configuration for the scratch partition on all ESXi hosts in the default management cluster.
Justification:
- If a failure in the vSAN cluster occurs, the ESXi hosts remain responsive and log information is still accessible.
- It is not possible to use vSAN datastore for the scratch partition.
Implication: None.

VCF-ESX-RCMD-CFG-006
Recommendation: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
Justification: Simplifies the configuration process.
Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

VCF-ESX-RCMD-NET-001
Recommendation: Place ESXi hosts for each VI workload domain on separate VLAN-backed management network segments.
Justification: Physical VLAN security separation between ESXi hosts in different VI workload domains is achieved.
Implication: A new workload domain management network and subnet are required for each new VI workload domain.

VCF-ESX-RCMD-SEC-001
Recommendation: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy Start and stop manually.
Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
Implication: You must activate SSH access manually for troubleshooting or support activities as VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.

VCF-ESX-RCMD-SEC-002
Recommendation: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain.
Justification:
- Ensures compliance with the vSphere Security Configuration Guide and with security best practices.
- Turns off the warning message that appears in the vSphere Client every time SSH access is activated on an ESXi host.
Implication: You must suppress SSH enablement warning messages manually when performing troubleshooting or support activities.
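Recommendation VCF-ESX-RCMD-SEC-001 can be audited or enforced with a short script. The following sketch assumes pyVmomi and an already connected service instance (for example, obtained as in the earlier distributed switch sketch); it sets the SSH service policy of a host to manual start and stop and stops the service if it is running. The host name is a hypothetical placeholder.

```python
from pyVmomi import vim

def enforce_ssh_policy(si, host_name: str):
    """Set the ESXi SSH (TSM-SSH) service to manual start/stop and stop it if running."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == host_name)
    view.Destroy()

    service_system = host.configManager.serviceSystem
    # "off" corresponds to the "Start and stop manually" startup policy.
    service_system.UpdateServicePolicy(id="TSM-SSH", policy="off")
    ssh = next(s for s in service_system.serviceInfo.service if s.key == "TSM-SSH")
    if ssh.running:
        service_system.StopService(id="TSM-SSH")

# Hypothetical host name; reuse an existing SmartConnect session as "si".
# enforce_ssh_policy(si, "wld01-esx01.example.local")
```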


vCenter Server Design for VMware Cloud Foundation


vCenter Server design considers the location, size, high availability, and identity domain isolation
of the vCenter Server instances for the workload domains in a VMware Cloud Foundation
environment.

n Logical Design for vCenter Server for VMware Cloud Foundation


Each workload domain has a dedicated vCenter Server that manages the ESXi hosts
running NSX Edge nodes and customer workloads. All vCenter Server instances run in the
management domain.

n Sizing Considerations for vCenter Server for VMware Cloud Foundation


You select an appropriate vCenter Server appliance size according to the scale of your
environment.

n High Availability Design for vCenter Server for VMware Cloud Foundation
Protecting vCenter Server is important because it is the central point of management and
monitoring for each workload domain.

n vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation
Each workload domain in VMware Cloud Foundation is managed by a single vCenter
Server instance. You determine the size of this vCenter Server instance and its storage
requirements according to the number of ESXi hosts per cluster and the number of virtual
machines you plan to run on these clusters.

n vCenter Single Sign-On Design Requirements for VMware Cloud Foundation


vCenter Server instances for the VI workload domains in a VMware Cloud Foundation
deployment can be either joined to the vCenter Single Sign-On domain of the vCenter
Server instance for the management domain or deployed in isolated vCenter Single Sign-On
domains.

Logical Design for vCenter Server for VMware Cloud Foundation


Each workload domain has a dedicated vCenter Server that manages the ESXi hosts running
NSX Edge nodes and customer workloads. All vCenter Server instances run in the management
domain.


Figure 7-2. Design of vCenter Server for VMware Cloud Foundation
(The figure shows a VMware Cloud Foundation instance accessed through the user interface and API. The management domain runs the management domain vCenter Server and the VI workload domain vCenter Server instances, supported by shared storage, DNS, and NTP, on top of the ESXi virtual infrastructure.)


Table 7-4. vCenter Server Layout

VMware Cloud Foundation instances with a single availability zone:
- One vCenter Server instance for the management domain that manages the management components of the SDDC, such as the vCenter Server instances for the VI workload domains, NSX Manager cluster nodes, SDDC Manager, and other solutions.
- Optionally, additional vCenter Server instances for the VI workload domains to support customer workloads.
- vSphere HA protecting all vCenter Server appliances.

VMware Cloud Foundation instances with multiple availability zones:
- One vCenter Server instance for the management domain that manages the management components of the SDDC, such as vCenter Server instances for the VI workload domains, NSX Manager cluster nodes, SDDC Manager, and other solutions.
- Optionally, additional vCenter Server instances for the VI workload domains to support customer workloads.
- vSphere HA protecting all vCenter Server appliances.
- A should-run-on-host-in-group VM-Host affinity rule in vSphere DRS specifying that the vCenter Server appliances should run in the primary availability zone unless an outage in this zone occurs.

Sizing Considerations for vCenter Server for VMware Cloud Foundation

You select an appropriate vCenter Server appliance size according to the scale of your
environment.

When you deploy a workload domain, you select a vCenter Server appliance size that is suitable
for the scale of your environment. The option that you select determines the number of CPUs
and the amount of memory of the appliance. For detailed sizing according to a collective profile
of the VMware Cloud Foundation instance you plan to deploy, refer to the VMware Cloud Foundation Planning and Preparation Workbook.

Table 7-5. Sizing Considerations for vCenter Server

Tiny: up to 10 hosts or 100 virtual machines
Small *: up to 100 hosts or 1,000 virtual machines
Medium **: up to 400 hosts or 4,000 virtual machines
Large: up to 1,000 hosts or 10,000 virtual machines
X-Large: up to 2,000 hosts or 35,000 virtual machines

* Default for the management domain vCenter Server
** Default for VI workload domain vCenter Server instances
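The sizing options in Table 7-5 can be expressed as a small helper that returns the smallest appliance size covering the planned inventory. The thresholds below mirror the table; the host and virtual machine counts passed in are hypothetical.

```python
# (size, max hosts, max VMs) taken from Table 7-5.
VCENTER_SIZES = [
    ("Tiny", 10, 100),
    ("Small", 100, 1_000),
    ("Medium", 400, 4_000),
    ("Large", 1_000, 10_000),
    ("X-Large", 2_000, 35_000),
]

def pick_vcenter_size(hosts: int, vms: int) -> str:
    """Return the smallest vCenter Server appliance size that covers the planned hosts and VMs."""
    for size, max_hosts, max_vms in VCENTER_SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return size
    raise ValueError("Inventory exceeds a single vCenter Server instance.")

print(pick_vcenter_size(hosts=64, vms=900))     # -> Small
print(pick_vcenter_size(hosts=350, vms=5_000))  # -> Large
```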

High Availability Design for vCenter Server for VMware Cloud Foundation

Protecting vCenter Server is important because it is the central point of management and
monitoring for each workload domain.


VMware Cloud Foundation supports only vSphere HA as a high availability method for vCenter
Server.

Table 7-6. Methods for Protecting the vCenter Server Appliance

vSphere High Availability
Supported in VMware Cloud Foundation: Yes
Considerations: -

vCenter High Availability (vCenter HA)
Supported in VMware Cloud Foundation: No
Considerations:
- vCenter Server services must fail over to the passive node, so there is no continuous availability.
- Recovery time can be up to 30 mins.
- You must meet additional networking requirements for the private network.
- vCenter HA requires additional resources for the Passive and Witness nodes.
- Life cycle management is complicated because you must manually delete and recreate the standby virtual machines during a life cycle management operation.

vSphere Fault Tolerance (vSphere FT)
Supported in VMware Cloud Foundation: No
Considerations:
- The vCPU limit of vSphere FT would limit the vCenter Server appliance size to medium.
- You must provide a dedicated network.

vCenter Server Design Requirements and Recommendations for VMware Cloud Foundation

Each workload domain in VMware Cloud Foundation is managed by a single vCenter Server
instance. You determine the size of this vCenter Server instance and its storage requirements
according to the number of ESXi hosts per cluster and the number of virtual machines you plan
to run on these clusters.

vCenter Server Design Requirements for VMware Cloud Foundation


You allocate vCenter Server appliances according to the requirements for workload isolation,
scalability, and resilience to failures.


Table 7-7. vCenter Server Design Requirements for VMware Cloud Foundation

VCF-VCS-REQD-CFG-001
Design Requirement: Deploy a dedicated vCenter Server appliance for the management domain of the VMware Cloud Foundation instance.
Justification:
- Isolates vCenter Server failures to management or customer workloads.
- Isolates vCenter Server operations between management and customers.
- Supports a scalable cluster design where you can reuse the management components as more customer workloads are added to the SDDC.
- Simplifies capacity planning for customer workloads because you do not consider management workloads for the VI workload domain vCenter Server.
- Improves the ability to upgrade the vSphere environment and related components by enabling explicit separation of maintenance windows: management workloads remain available while you are upgrading the tenant workloads, and customer workloads remain available while you are upgrading the management nodes.
- Supports clear separation of roles and responsibilities to ensure that only administrators with granted authorization can control the management workloads.
- Facilitates quicker troubleshooting and problem resolution.
- Simplifies disaster recovery operations by supporting a clear separation between recovery of the management components and tenant workloads.
- Provides isolation of potential network issues by introducing network separation of the clusters in the SDDC.
Implication: Requires a separate license for the vCenter Server instance in the management domain.

VCF-VCS-REQD-NET-001
Design Requirement: Place all workload domain vCenter Server appliances on the management VLAN network segment of the management domain.
Justification: Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX management components.
Implication: None.

vCenter Server Design Recommendations


In your vCenter Server design for VMware Cloud Foundation, you can apply certain best
practices for sizing and high availability.

Table 7-8. vCenter Server Design Recommendations for VMware Cloud Foundation

VCF-VCS-RCMD-CFG-001
Design Recommendation: Deploy an appropriately sized vCenter Server appliance for each workload domain.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values you must use the Cloud Builder API and the SDDC Manager API.

VCF-VCS-RCMD-CFG-002
Design Recommendation: Deploy a vCenter Server appliance with the appropriate storage size.
Justification: Ensures resource availability and usage efficiency per workload domain.
Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values you must use the API.

VCF-VCS-RCMD-CFG-003
Design Recommendation: Protect workload domain vCenter Server appliances by using vSphere HA.
Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
Implication: vCenter Server becomes unavailable during a vSphere HA failover.

VCF-VCS-RCMD-CFG-004
Design Recommendation: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.
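Recommendation VCF-VCS-RCMD-CFG-004 corresponds to a per-VM vSphere HA override on the cluster. The sketch below assumes pyVmomi, an existing session, and cluster and virtual machine objects that were located beforehand; object names are placeholders and error handling is omitted.

```python
from pyVmomi import vim

def set_restart_priority(cluster: vim.ClusterComputeResource,
                         vm: vim.VirtualMachine, priority: str = "high"):
    """Add a vSphere HA override that restarts the given VM with the requested priority."""
    vm_settings = vim.cluster.DasVmSettings(restartPriority=priority)
    vm_config = vim.cluster.DasVmConfigInfo(key=vm, dasSettings=vm_settings)
    # Use operation="edit" instead if an HA override already exists for this VM.
    vm_spec = vim.cluster.DasVmConfigSpec(operation="add", info=vm_config)
    spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=[vm_spec])
    # modify=True keeps all other cluster settings unchanged.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)

# Hypothetical usage: "cluster" and "vcenter_vm" located beforehand through a container view.
# set_restart_priority(cluster, vcenter_vm, priority="high")
```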


vCenter Server Design Recommendations for Stretched Clusters with VMware Cloud Foundation

The following additional design recommendations apply when using stretched clusters.

Table 7-9. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-VCS-RCMD-CFG-005
Design Recommendation: Add the vCenter Server appliance to the virtual machine group for the first availability zone.
Justification: Ensures that, by default, the vCenter Server appliance is powered on a host in the first availability zone.
Implication: None.

vCenter Single Sign-On Design Requirements for VMware Cloud Foundation

vCenter Server instances for the VI workload domains in a VMware Cloud Foundation
deployment can be either joined to the vCenter Single Sign-On domain of the vCenter Server
instance for the management domain or deployed in isolated vCenter Single Sign-On domains.

You select the vCenter Single Sign-On topology according to the needs and design objectives of
your deployment.


Table 7-10. vCenter Single Sign-On Topologies for VMware Cloud Foundation

Single vCenter Server Instance - Single vCenter Single Sign-On Domain
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with the management domain vCenter Server instance only.
Benefits: Enables a small environment where customer workloads run in the same cluster as the management domain components.
Drawbacks: -

Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with the management domain and all VI workload domain vCenter Server instances in enhanced linked mode (ELM) using a ring topology.
Benefits: Enables sharing of vCenter Server roles, tags and licenses between all workload domain instances.
Drawbacks: Limited to 15 workload domains per VMware Cloud Foundation instance including the management domain.

Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domains
vCenter Single Sign-On Domain Topology: One vCenter Single Sign-On domain with at least the management domain vCenter Server instance, and additional VI workload domains, each with their own isolated vCenter Single Sign-On domains.
Benefits:
- Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
- Supports up to 25 workload domains per VMware Cloud Foundation instance.
Drawbacks: Additional password management overhead per vCenter Single Sign-On domain.

Figure 7-3. Single vCenter Server Instance - Single vCenter Single Sign-On Domain
(The figure shows a VMware Cloud Foundation instance with one vCenter Single Sign-On domain that contains only the management domain vCenter Server.)

Because the Single vCenter Server Instance - Single vCenter Single Sign-On Domain topology
contains a single vCenter Server instance by definition, no relevant design requirements or
recommendations for vCenter Single Sign-On are needed.


Figure 7-4. Multiple vCenter Server Instances - Single vCenter Single Sign-On Domain
(The figure shows a VMware Cloud Foundation instance with one vCenter Single Sign-On domain that contains the management domain vCenter Server and VI workload domain vCenter Servers 1, 2, and 3. The instances are replication partners in a ring topology that is formed during domain creation.)


Table 7-11. Design Requirements for the Multiple vCenter Server Instance - Single vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-STD-001
Design Requirement: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
Implication:
- Only one vCenter Single Sign-On domain exists.
- The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15 instances. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.

VCF-VCS-REQD-SSO-STD-002
Design Requirement: Create a ring topology between the vCenter Server instances within the VMware Cloud Foundation instance.
Justification: By default, one vCenter Server instance replicates only with another vCenter Server instance. This setup creates a single point of failure for replication. A ring topology ensures that each vCenter Server instance has two replication partners and removes any single point of failure.
Implication: None.


Figure 7-5. Multiple vCenter Server Instances - Multiple vCenter Single Sign-On Domain
(The figure shows a VMware Cloud Foundation instance in which the management domain vCenter Server and VI workload domain vCenter Servers 1, 2, and 3 share one vCenter Single Sign-On domain in a ring replication topology formed during domain creation, while VI workload domain vCenter Servers 4 and 5 each reside in their own vCenter Single Sign-On domains.)


Table 7-12. Design Requirements for Multiple vCenter Server Instance - Multiple vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-ISO-001
Design Requirement: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
Justification:
- Enables isolation at the vCenter Single Sign-On domain layer for increased security separation.
- Supports up to 25 workload domains.
Implication:
- Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials.
- You must manage password rotation for each vCenter Single Sign-On domain separately.

vSphere Cluster Design for VMware Cloud Foundation


vSphere cluster design must consider the requirements for standard, stretched and remote
clusters and for the life cycle management of the ESXi hosts in the clusters according to the
characteristics of the workloads.

n Logical vSphere Cluster Design for VMware Cloud Foundation


The cluster design must consider the characteristics of the workloads that are deployed in
the cluster.

n vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation
vSphere Lifecycle Manager is used to manage the vSphere clusters in each VI workload
domain.

n vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation
The design of a vSphere cluster is subject to a minimum number of hosts, design requirements, and design recommendations.

Logical vSphere Cluster Design for VMware Cloud Foundation


The cluster design must consider the characteristics of the workloads that are deployed in the
cluster.

When you design the cluster layout in vSphere, consider the following guidelines:

n Compare the capital costs of purchasing fewer, larger ESXi hosts with the costs of purchasing
more, smaller ESXi hosts. Costs vary between vendors and models. Evaluate the risk of losing
one larger host in a scaled-up cluster and the impact on the business with the higher chance
of losing one or more smaller hosts in a scale-out cluster.

n Evaluate the operational costs of managing a few ESXi hosts with the costs of managing
more ESXi hosts.

n Consider the purpose of the cluster.


n Consider the total number of ESXi hosts and cluster limits.

Figure 7-6. Logical vSphere Cluster Layout with a Single Availability Zone for VMware Cloud Foundation
(The figure shows a VMware Cloud Foundation instance with a vSphere cluster of four ESXi hosts managed by the workload domain vCenter Server.)


Figure 7-7. Logical vSphere Cluster Layout for Multiple Availability Zones for VMware Cloud Foundation
(The figure shows a VMware Cloud Foundation instance with a single vSphere cluster managed by the workload domain vCenter Server, with four ESXi hosts in availability zone 1 and four ESXi hosts in availability zone 2.)

Remote Cluster Design Considerations


Remote clusters are managed by the management infrastructure at the central site.

Table 7-13. Remote Cluster Design Considerations

Number of hosts per remote cluster: minimum 3, maximum 16.
Number of remote clusters per VMware Cloud Foundation instance: maximum 8.
Number of remote clusters per VI workload domain: 1.
Cluster types per VI workload domain: A VI workload domain can include either local clusters or a remote cluster.
Latency between the central site and the remote site: maximum 100 ms.
Bandwidth between the central site and the remote site: minimum 10 Mbps.


vSphere Cluster Life Cycle Method Design for VMware Cloud Foundation

vSphere Lifecycle Manager is used to manage the vSphere clusters in each VI workload domain.

When you deploy a workload domain, you choose a vSphere cluster life cycle method according
to your requirements.

Table 7-14. vSphere Cluster Life Cycle Considerations

vSphere Lifecycle Manager images
Description: vSphere Lifecycle Manager Desired State images contain base images, vendor add-ons, firmware, and drivers.
Benefits:
- Supports VI workload domains with vSphere with Tanzu.
- Supports NVIDIA GPU-enabled clusters.
- Supports 2-node NFS, FC, or vVols clusters.
Drawbacks:
- Not supported for the management domain.
- Supports only environments with a single availability zone.

vSphere Lifecycle Manager baselines
Description: An upgrade baseline contains the ESXi image, and a patch baseline contains the respective patches for the ESXi hosts. This is the required method for the management domain.
Benefits:
- Supports vSAN stretched clusters.
- Supports VI workload domains with vSphere with Tanzu.
Drawbacks:
- Not supported for NVIDIA GPU-enabled clusters.
- Not supported for 2-node NFS, FC, or vVols clusters.

vSphere Cluster Design Requirements and Recommendations for VMware Cloud Foundation

The design of a vSphere cluster is subject to a minimum number of hosts, design requirements, and design recommendations.

For vSAN design requirements and recommendations, see vSAN Design Requirements and
Recommendations for VMware Cloud Foundation.


vSphere Cluster Design Considerations


You consider a different number of hosts per cluster according to the storage type and specific resource requirements for standard and stretched vSAN clusters.


Table 7-15. Host-Related Design Considerations per Cluster

Minimum number of ESXi hosts
- vSAN (single availability zone): 4 for the management domain default cluster; 3 for additional management domain clusters or VI workload domain clusters.
- vSAN (two availability zones): 8 for the management domain default cluster; 6 for additional management domain clusters or VI workload domain clusters.
- NFS, FC, or vVols: not supported for the management domain default cluster; 2 for VI workload domain clusters only (requires vSphere Lifecycle Manager images); 3 for additional management domain clusters.

Reserved capacity for handling ESXi host failures per cluster
- Single availability zone: 25% CPU and memory for the management domain default cluster, 33% CPU and memory for additional management domain clusters or VI workload domain clusters; tolerates one host failure.
- Two availability zones: 50% CPU and memory; tolerates one availability zone failure.
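The reserved-capacity figures above follow from N+1 admission control, which in a single availability zone reserves the share of one host out of N. A quick illustration in Python, with hypothetical cluster sizes:

```python
def failover_capacity_percent(hosts: int, failures_to_tolerate: int = 1) -> int:
    """Reserved CPU/memory percentage for percentage-based admission control in a single availability zone."""
    return int(100 * failures_to_tolerate / hosts)

print(failover_capacity_percent(4))  # 25% for the default management cluster
print(failover_capacity_percent(3))  # 33% for a minimum three-host cluster
# For stretched clusters across two availability zones, reserve 50% instead,
# so that one full availability zone can fail over.
```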

vSphere Cluster Design Requirements for VMware Cloud Foundation

You must meet the following design requirements for standard and stretched clusters in your vSphere cluster design for VMware Cloud Foundation. The cluster design considers the storage type for the cluster, the architecture model of the environment, and the life cycle management method.

Table 7-16. vSphere Cluster Design Requirements for VMware Cloud Foundation

VCF-CLS-REQD-CFG-001
Design Requirement: Create a cluster in each workload domain for the initial set of ESXi hosts.
Justification:
- Simplifies configuration by isolating management from customer workloads.
- Ensures that customer workloads have no impact on the management stack.
Implication: Management of multiple clusters and vCenter Server instances increases operational overhead.

VCF-CLS-REQD-CFG-002
Design Requirement: Allocate a minimum number of ESXi hosts according to the cluster type being deployed.
Justification: Ensures correct level of redundancy to protect against host failure in the cluster.
Implication: To support redundancy, you must allocate additional ESXi host resources.


VCF-CLS-REQD-CFG-003 If using a consolidated n Ensures sufficient You must manage the


workload domain, resources for vSphere resource pool
configure the following the management settings over time.
vSphere resource pools to components.
control resource usage by
management and customer
workloads.
n cluster-name-rp-sddc-
mgmt
n cluster-name-rp-sddc-
edge
n cluster-name-rp-user-
edge
n cluster-name-rp-user-
vm

VCF-CLS-REQD-CFG-004 Configure the vSAN Allows vSphere HA to None.


network gateway IP validate if a host is isolated
address as the isolation from the vSAN network.
address for the cluster.

VCF-CLS-REQD-CFG-005 Set the advanced cluster Ensures that vSphere None.


setting HA uses the manual
das.usedefaultisolationa isolation addresses instead
ddress to false. of the default management
network gateway address.

VCF-CLS-REQD-LCM-001 Use baselines as the life vSphere Lifecycle Manager You must manage the
cycle management method images are not supported life cycle of firmware and
for the management for the management vendor add-ons manually.
domain. domain.
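
As an illustration of VCF-CLS-REQD-CFG-004 and VCF-CLS-REQD-CFG-005, the following minimal
pyVmomi sketch sets the isolation address and the das.usedefaultisolationaddress advanced
option on a cluster. The vCenter Server FQDN, credentials, cluster name, and vSAN gateway
address are hypothetical placeholders, not values defined by this design.

    from pyVim.connect import SmartConnect
    from pyVmomi import vim
    import ssl

    # Hypothetical connection details for illustration only.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="*****", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "sfo-m01-cl01")  # hypothetical name

    das_config = vim.cluster.DasConfigInfo()
    das_config.option = [
        # Do not use the default management gateway as the isolation address.
        vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
        # Use the vSAN network gateway to validate host isolation (VCF-CLS-REQD-CFG-004).
        vim.option.OptionValue(key="das.isolationaddress0", value="172.16.13.1"),
    ]
    spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
    cluster.ReconfigureComputeResource_Task(spec, modify=True)

Calling the reconfiguration with modify=True merges these options with the existing vSphere HA
configuration instead of replacing it.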

Table 7-17. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware
Cloud Foundation

VCF-CLS-REQD-CFG-006
  Design requirement: Configure the vSAN network gateway IP addresses for the second
  availability zone as additional isolation addresses for the cluster.
  Justification: Allows vSphere HA to validate if a host is isolated from the vSAN network for
  hosts in both availability zones.
  Implication: None.

VCF-CLS-REQD-CFG-007
  Design requirement: Enable the Override default gateway for this adapter setting on the vSAN
  VMkernel adapters on all ESXi hosts.
  Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather
  than through the management gateway.
  Implication: vSAN networks across availability zones must have a route to each other.

VCF-CLS-REQD-CFG-008
  Design requirement: Create a host group for each availability zone and add the ESXi hosts in
  the zone to the respective group (see the sketch after this table).
  Justification: Makes it easier to manage which virtual machines run in which availability zone.
  Implication: You must create and maintain VM-Host DRS group rules.

VCF-CLS-REQD-LCM-002
  Design requirement: Create workload domains selecting baselines as the life cycle management
  method.
  Justification: vSphere Lifecycle Manager images are not supported for vSAN stretched clusters.
  Implication: You must manage the life cycle of firmware and vendor add-ons manually.
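
As a companion to VCF-CLS-REQD-CFG-008, the following pyVmomi sketch creates a DRS host
group per availability zone. The cluster object, the host lists az1_hosts and az2_hosts, and the
group names are assumptions for illustration, not names mandated by VMware Cloud Foundation.

    from pyVmomi import vim

    # az1_hosts and az2_hosts are lists of vim.HostSystem objects, grouped by the
    # availability zone each ESXi host belongs to (hypothetical variables).
    group_specs = [
        vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.HostGroup(name="az1-hosts", host=az1_hosts)),
        vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.HostGroup(name="az2-hosts", host=az2_hosts)),
    ]
    spec = vim.cluster.ConfigSpecEx(groupSpec=group_specs)
    cluster.ReconfigureComputeResource_Task(spec, modify=True)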

vSphere Cluster Design Recommendations for VMware Cloud Foundation


In your vSphere cluster design, you can apply certain best practices for standard and stretched
clusters.

Table 7-18. vSphere Cluster Design Recommendations for VMware Cloud Foundation

VCF-CLS-RCMD-CFG-001
  Design recommendation: Use vSphere HA to protect all virtual machines against failures.
  Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual
  machine availability.
  Implication: You must provide sufficient resources on the remaining hosts so that virtual
  machines can be restarted on those hosts in the event of a host outage.

VCF-CLS-RCMD-CFG-002
  Design recommendation: Set host isolation response to Power Off and restart VMs in vSphere
  HA.
  Justification: vSAN requires that the host isolation response be set to Power Off and to restart
  virtual machines on available ESXi hosts.
  Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host
  is declared isolated incorrectly.

VCF-CLS-RCMD-CFG-003
  Design recommendation: Configure admission control for 1 ESXi host failure and
  percentage-based failover capacity (see the sketch after this table).
  Justification: Using the percentage-based reservation works well in situations where virtual
  machines have varying and sometimes significant CPU or memory reservations. vSphere
  automatically calculates the reserved percentage according to the number of ESXi host failures
  to tolerate and the number of ESXi hosts in the cluster.
  Implication: In a cluster of 4 ESXi hosts, the resources of only 3 ESXi hosts are available for use.

VCF-CLS-RCMD-CFG-004
  Design recommendation: Enable VM Monitoring for each cluster.
  Justification: VM Monitoring provides in-guest protection for most VM workloads. The
  application or service running on the virtual machine must be capable of restarting successfully
  after a reboot or the virtual machine restart is not sufficient.
  Implication: None.

VCF-CLS-RCMD-CFG-005
  Design recommendation: Set the advanced cluster setting das.iostatsinterval to 0 to deactivate
  monitoring the storage and network I/O activities of the management appliances.
  Justification: Enables triggering a restart of a management appliance when an OS failure occurs
  and heartbeats are not received from VMware Tools, instead of waiting additionally for the I/O
  check to complete.
  Implication: If you want to specifically enable I/O monitoring, then configure the
  das.iostatsinterval advanced setting.

VCF-CLS-RCMD-CFG-006
  Design recommendation: Enable vSphere DRS on all clusters, using the default fully automated
  mode with medium threshold.
  Justification: Provides the best trade-off between load balancing and unnecessary migrations
  with vSphere vMotion.
  Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts
  might be difficult to determine.

VCF-CLS-RCMD-CFG-007
  Design recommendation: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the
  management domain.
  Justification: Supports cluster upgrades without virtual machine downtime.
  Implication: You must enable EVC only if the clusters contain hosts with CPUs from the same
  vendor. You must enable EVC on the default management domain cluster during bringup.

VCF-CLS-RCMD-CFG-008
  Design recommendation: Set the cluster EVC mode to the highest available baseline that is
  supported for the lowest CPU architecture on the hosts in the cluster.
  Justification: Supports cluster upgrades without virtual machine downtime.
  Implication: None.

VCF-CLS-RCMD-LCM-001
  Design recommendation: Use images as the life cycle management method for VI workload
  domains that do not require vSAN stretched clusters.
  Justification: vSphere Lifecycle Manager images simplify the management of firmware and
  vendor add-ons.
  Implication: You cannot add a stretched cluster to a VI workload domain that uses vSphere
  Lifecycle Manager images.
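
To illustrate VCF-CLS-RCMD-CFG-003, the sketch below configures percentage-based admission
control through pyVmomi. The 25 percent value corresponds to tolerating one host failure in a
four-host cluster and is an assumption that you would size to your own cluster.

    from pyVmomi import vim

    # Reserve 25% CPU and memory failover capacity (one host out of four).
    policy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
        cpuFailoverResourcesPercent=25,
        memoryFailoverResourcesPercent=25)
    das_config = vim.cluster.DasConfigInfo(
        enabled=True,                      # vSphere HA on
        admissionControlEnabled=True,
        admissionControlPolicy=policy)
    spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
    cluster.ReconfigureComputeResource_Task(spec, modify=True)

Recent vCenter Server versions can also compute the percentage automatically from the number
of host failures to tolerate, which matches the justification in the table above.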


Table 7-19. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with VMware
Cloud Foundation

VCF-CLS-RCMD-CFG-009
  Design recommendation: Increase the admission control percentage to reserve the resources of
  half of the ESXi hosts in the cluster.
  Justification: Allocating only half of a stretched cluster ensures that all VMs have enough
  resources if an availability zone outage occurs.
  Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use.
  If you add more ESXi hosts to the default management cluster, add them in pairs, one per
  availability zone.

VCF-CLS-RCMD-CFG-010
  Design recommendation: Create a virtual machine group for each availability zone and add the
  VMs in the zone to the respective group.
  Justification: Ensures that virtual machines are located only in the assigned availability zone to
  avoid unnecessary vSphere vMotion migrations.
  Implication: You must add virtual machines to the allocated group manually.

VCF-CLS-RCMD-CFG-011
  Design recommendation: Create a should-run-on-hosts-in-group VM-Host affinity rule to run
  each group of virtual machines on the respective group of hosts in the same availability zone
  (see the sketch after this table).
  Justification: Ensures that virtual machines are located only in the assigned availability zone to
  avoid unnecessary vSphere vMotion migrations.
  Implication: You must manually create the rules.
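
The following sketch pairs with VCF-CLS-RCMD-CFG-010 and VCF-CLS-RCMD-CFG-011: it creates
a virtual machine group for the first availability zone and a should-run-on-hosts-in-group rule
that ties it to the matching host group. The group names and the az1_vms list are illustrative
assumptions.

    from pyVmomi import vim

    vm_group = vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.VmGroup(name="az1-vms", vm=az1_vms))  # az1_vms: list of vim.VirtualMachine

    rule = vim.cluster.RuleSpec(
        operation="add",
        info=vim.cluster.VmHostRuleInfo(
            name="az1-vms-should-run-on-az1-hosts",
            enabled=True,
            mandatory=False,                   # "should" rule, not a "must" rule
            vmGroupName="az1-vms",
            affineHostGroupName="az1-hosts"))  # host group from the earlier sketch

    spec = vim.cluster.ConfigSpecEx(groupSpec=[vm_group], rulesSpec=[rule])
    cluster.ReconfigureComputeResource_Task(spec, modify=True)

Using mandatory=False keeps the rule advisory, so vSphere HA can still restart the virtual
machines in the other availability zone if the preferred zone fails.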

vSphere Networking Design for VMware Cloud Foundation


VMware Cloud Foundation uses vSphere Distributed Switch for virtual networking.

Logical vSphere Networking Design for VMware Cloud Foundation


When you design vSphere networking, consider the configuration of the vSphere Distributed
Switches, distributed port groups, and VMkernel adapters in the VMware Cloud Foundation
environment.

vSphere Distributed Switch Design


The default cluster in a workload domain uses a single vSphere Distributed Switch with a
configuration for system traffic types, NIC teaming, and MTU size.

VMware Cloud Foundation supports NSX traffic over a single vSphere Distributed Switch per
cluster. Additional distributed switches are supported for other traffic types.

When using vSAN ReadyNodes, you must define the number of vSphere Distributed Switches at
workload domain deployment time. You cannot add additional vSphere Distributed Switches post
deployment.


Table 7-20. Configuration Options for vSphere Distributed Switch for VMware Cloud Foundation

Single vSphere Distributed Switch
  Management domain: One vSphere Distributed Switch for all traffic.
  VI workload domain: One vSphere Distributed Switch for all traffic.
  Benefits: Requires the least number of physical NICs and switch ports.
  Drawbacks: All traffic shares the same uplinks.

Multiple vSphere Distributed Switches
  Management domain: Maximum one vSphere Distributed Switch for NSX traffic and one vSphere
  Distributed Switch for non-NSX traffic by using the Deployment Parameters Workbook in Cloud
  Builder. Maximum one vSphere Distributed Switch for NSX traffic and two vSphere Distributed
  Switches for non-NSX traffic by using a JSON file in Cloud Builder.
  VI workload domain: Maximum one vSphere Distributed Switch for NSX traffic by using the SDDC
  Manager UI. Maximum of 15 additional vSphere Distributed Switches for non-NSX traffic by using
  the SDDC Manager API.
  Benefits: Provides support for traffic separation across different uplinks or vSphere Distributed
  Switches.
  Drawbacks: You must provide additional physical NICs and switch ports.

Note To use physical NICs other than vmnic0 and vmnic1 as the uplinks for the first vSphere
Distributed Switch, you must use a JSON file for management domain deployment in Cloud
Builder or use the SDDC Manager API for VI workload domain deployment.

In a deployment that has multiple availability zones, you must provide new networks or extend
the existing networks.

Distributed Port Group Design


VMware Cloud Foundation requires several port groups on the vSphere Distributed Switch for
the workload domain. The VMkernel adapter for the host TEP is connected to the host overlay
VLAN, but does not require a dedicated port group on the distributed switch. The VMkernel
network adapter for host TEP is automatically created when you configure the ESXi host as a
transport node.


Table 7-21. Distributed Port Group Configuration for VMware Cloud Foundation

Functions and teaming policies:
  Management, vSphere vMotion, and vSAN: Route based on physical NIC load. Failover
  Detection: Link status only. Failback: Yes, occurs only on saturation of the active uplink. Notify
  Switches: Yes.
  Host Overlay: Not applicable.
  Edge Uplinks and Overlay: Use explicit failover order.
  Edge RTEP (NSX Federation only): Not applicable.

Number of physical NIC ports (all functions):
  Maximum four ports for the management domain by using the Deployment Parameters
  Workbook in Cloud Builder.
  Maximum two ports for a VI workload domain by using the SDDC Manager UI.
  For all domain types, maximum six ports by using the API.

MTU size (all functions): 1700 bytes minimum, 9000 bytes recommended.
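
The following pyVmomi sketch shows how one of these port groups might be created with the
Route based on physical NIC load teaming policy. The port group name, VLAN ID, number of
ports, and binding type are illustrative assumptions; VMware Cloud Foundation normally creates
these port groups for you during bring-up.

    from pyVmomi import vim

    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    pg_spec.name = "vds01-pg-vmotion"   # hypothetical name
    pg_spec.type = "earlyBinding"       # static binding; use "ephemeral" for the management port group
    pg_spec.numPorts = 8

    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"))
    vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(inherited=False, vlanId=1612)
    pg_spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vlan, uplinkTeamingPolicy=teaming)

    # dvs is the vSphere Distributed Switch object of the workload domain cluster.
    dvs.AddDVPortgroup_Task([pg_spec])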

VMkernel Network Adapter Design


The VMkernel networking layer provides connectivity to hosts and handles the system traffic for
management, vSphere vMotion, vSphere HA, vSAN, and others.

Table 7-22. Default VMkernel Adapters for a Workload Domain per Availability Zone
VMkernel Adapter Service / Connected Port Group / Activated Services / Recommended MTU Size (Bytes)

Management Management Port Group Management Traffic 1500 (Default)

vMotion vMotion Port Group vMotion Traffic 9000

vSAN vSAN Port Group vSAN 9000

Host TEPs Not applicable Not applicable 9000
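
A quick way to verify the adapters and MTU values in this table is to read them back through
pyVmomi. This is a read-only sketch that assumes the cluster object from the earlier examples.

    # Print every VMkernel adapter on every host in the cluster with its MTU.
    for host in cluster.host:
        for vnic in host.config.network.vnic:
            print(host.name, vnic.device,
                  vnic.portgroup or "(distributed port)", vnic.spec.mtu)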

vSphere Networking Design Recommendations for VMware Cloud Foundation

Consider the recommendations for vSphere networking in VMware Cloud Foundation, such as
MTU size, port binding, teaming policy, and traffic-specific network shares.


Table 7-23. vSphere Networking Design Recommendations for VMware Cloud Foundation

VCF-VDS-RCMD-CFG-001
  Design recommendation: Use a single vSphere Distributed Switch per cluster.
  Justification: Reduces the complexity of the network design. Reduces the size of the fault
  domain.
  Implication: Increases the number of vSphere Distributed Switches that must be managed.

VCF-VDS-RCMD-CFG-002
  Design recommendation: Configure the MTU size of the vSphere Distributed Switch to 9000 for
  jumbo frames (see the sketch after this table).
  Justification: Supports the MTU size required by system traffic types. Improves traffic
  throughput.
  Implication: When adjusting the MTU packet size, you must also configure the entire network
  path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU
  packet size.

VCF-VDS-RCMD-DPG-001
  Design recommendation: Use ephemeral port binding for the management port group.
  Justification: Using ephemeral port binding provides the option for recovery of the vCenter
  Server instance that is managing the distributed switch.
  Implication: Port-level permissions and controls are lost across power cycles, and no historical
  context is saved.

VCF-VDS-RCMD-DPG-002
  Design recommendation: Use static port binding for all non-management port groups.
  Justification: Static binding ensures a virtual machine connects to the same port on the vSphere
  Distributed Switch. This allows for historical data and port-level monitoring.
  Implication: None.

VCF-VDS-RCMD-DPG-003
  Design recommendation: Use the Route based on physical NIC load teaming algorithm for the
  management port group.
  Justification: Reduces the complexity of the network design and increases resiliency and
  performance.
  Implication: None.

VCF-VDS-RCMD-DPG-004
  Design recommendation: Use the Route based on physical NIC load teaming algorithm for the
  vSphere vMotion port group.
  Justification: Reduces the complexity of the network design and increases resiliency and
  performance.
  Implication: None.

VCF-VDS-RCMD-NIO-001
  Design recommendation: Enable Network I/O Control on the vSphere Distributed Switch of the
  management domain cluster.
  Justification: Increases resiliency and performance of the network.
  Implication: If configured incorrectly, Network I/O Control might impact network performance
  for critical traffic types.

VCF-VDS-RCMD-NIO-002
  Design recommendation: Set the share value for management traffic to Normal.
  Justification: By keeping the default setting of Normal, management traffic is prioritized higher
  than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it
  ensures that the hosts can still be managed during times of network contention.
  Implication: None.

VCF-VDS-RCMD-NIO-003
  Design recommendation: Set the share value for vSphere vMotion traffic to Low.
  Justification: During times of network contention, vSphere vMotion traffic is not as important as
  virtual machine or storage traffic.
  Implication: During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004
  Design recommendation: Set the share value for virtual machines to High.
  Justification: Virtual machines are the most important asset in the SDDC. Leaving the default
  setting of High ensures that they always have access to the network resources they need.
  Implication: None.

VCF-VDS-RCMD-NIO-005
  Design recommendation: Set the share value for vSAN traffic to High.
  Justification: During times of network contention, vSAN traffic needs guaranteed bandwidth to
  support virtual machine performance.
  Implication: None.

VCF-VDS-RCMD-NIO-006
  Design recommendation: Set the share value for other traffic types to Low.
  Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere
  FT traffic. Hence, these traffic types can be set to the lowest priority.
  Implication: None.
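
As an illustration of VCF-VDS-RCMD-CFG-002 and VCF-VDS-RCMD-NIO-001, the following
pyVmomi sketch raises the switch MTU to 9000 and enables Network I/O Control. The dvs
variable is the vSphere Distributed Switch object of the management domain cluster and is an
assumption carried over from the earlier sketches.

    from pyVmomi import vim

    # Jumbo frames on the vSphere Distributed Switch (VCF-VDS-RCMD-CFG-002).
    config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    config.configVersion = dvs.config.configVersion   # required for reconfiguration
    config.maxMtu = 9000
    dvs.ReconfigureDvs_Task(config)

    # Network I/O Control (VCF-VDS-RCMD-NIO-001). The per-traffic-class share values
    # (management Normal, vMotion Low, vSAN High, and so on) are then set through the
    # switch configuration in vSphere.
    dvs.EnableNetworkResourceManagement(enable=True)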

Chapter 8 NSX Design for VMware Cloud Foundation
In VMware Cloud Foundation, you use NSX for connecting management and customer virtual
machines by using virtual network segments and routing. You also create constructs for solutions
that are deployed for a single VMware Cloud Foundation instance or are available across multiple
VMware Cloud Foundation instances. These constructs provide routing to the data center and
load balancing.

Table 8-1. NSX Logical Concepts and Components

Component Description

NSX Manager n Provides the user interface and the REST API for creating,
configuring, and monitoring NSX components, such as segments,
and Tier-0 and Tier-1 gateways.
n In a deployment with NSX Federation, NSX Manager is called NSX
Local Manager.

NSX Edge nodes n Is a special type of transport node which contains service router
components.
n Provides north-south traffic connectivity between the physical
data center networks and the NSX SDN networks. Each NSX Edge
node has multiple interfaces where traffic flows.
n Can provide east-west traffic flow between virtualized workloads.
They provide stateful services such as load balancers and
DHCP. In a deployment with multiple VMware Cloud Foundation
instances, east-west traffic between the VMware Cloud
Foundation instances flows through the NSX Edge nodes too.

NSX Federation (optional design extension) n Propagates configurations that span multiple NSX instances in
a single VMware Cloud Foundation instance or across multiple
VMware Cloud Foundation instances. You can stretch overlay
segments, activate failover of segment ingress and egress traffic
between VMware Cloud Foundation instances, and implement a
unified firewall configuration.
n In a deployment with multiple VMware Cloud Foundation
instances, you use NSX to provide cross-instance services to
SDDC management components that do not have native support
for availability at several locations, such as vRealize Automation
and vRealize Operations.
n Connect only workload domains of matching types (management
domain to management domain or VI workload domain to VI
workload domain).


NSX Global Manager (Federation only) n Is part of deployments with multiple VMware Cloud Foundation
instances where NSX Federation is required. NSX Global Manager
can connect multiple NSX Local Manager instances under a single
global management plane.
n Provides the user interface and the REST API for creating,
configuring, and monitoring NSX global objects, such as global
virtual network segments, and global Tier-0 and Tier-1 gateways.
n Connected NSX Local Manager instances create the global objects
on the underlying software-defined network that you define
from NSX Global Manager. An NSX Local Manager instance
directly communicates with other NSX Local Manager instances to
synchronize configuration and state needed to implement a global
policy.
n NSX Global Manager is a deployment-time role that you assign to
an NSX Manager appliance.

NSX Manager instance shared between VI n An NSX Manager instance can be shared between up to 14 VI
workload domains workload domains that are part of the same vCenter Single Sign-
On domain.
n VI workload domains sharing an NSX Manager instance must use
the same vSphere cluster life cycle method.
n Using a shared NSX Manager instance reduces resource
requirements for the management domain.
n A single transport zone is shared across all clusters in all VI
workload domains that share the NSX Manager instance.
n The management domain NSX instance cannot be shared.
n Isolated workload domain NSX instances cannot be shared.

Read the following topics next:

n Logical Design for NSX for VMware Cloud Foundation

n NSX Manager Design for VMware Cloud Foundation

n NSX Edge Node Design for VMware Cloud Foundation

n Routing Design for VMware Cloud Foundation

n Overlay Design for VMware Cloud Foundation

n Application Virtual Network Design for VMware Cloud Foundation

n Load Balancing Design for VMware Cloud Foundation

Logical Design for NSX for VMware Cloud Foundation


NSX provides networking services to workloads in VMware Cloud Foundation such as load
balancing, routing and virtual networking.


Table 8-2. NSX Logical Design

NSX Manager Cluster
  Single availability zone: Three appropriately sized nodes with a virtual IP (VIP) address and an
  anti-affinity rule to separate the nodes on different hosts. vSphere HA protects the cluster
  nodes by applying high restart priority.
  Multiple availability zones: Three appropriately sized nodes with a VIP address and an
  anti-affinity rule to separate the nodes on different hosts. vSphere HA protects the cluster
  nodes by applying high restart priority. A vSphere DRS should-run-on-hosts-in-group rule keeps
  the NSX Manager VMs in the first availability zone.

NSX Global Manager Cluster (Conditional)
  Single availability zone: Three manually deployed, appropriately sized nodes with a VIP address
  and an anti-affinity rule to separate them on different hosts. One active and one standby
  cluster. vSphere HA protects the cluster nodes by applying high restart priority.
  Multiple availability zones: Three manually deployed, appropriately sized nodes with a VIP
  address and an anti-affinity rule to separate them on different hosts. One active and one
  standby cluster. vSphere HA protects the cluster nodes by applying high restart priority. A
  vSphere DRS should-run-on-hosts-in-group rule keeps the NSX Global Manager VMs in the first
  availability zone.

NSX Edge Cluster
  Single availability zone: Two appropriately sized NSX Edge nodes with an anti-affinity rule to
  separate them on different hosts. vSphere HA protects the cluster nodes by applying high
  restart priority.
  Multiple availability zones: Two appropriately sized NSX Edge nodes in the first availability zone
  with an anti-affinity rule to separate them on different hosts. vSphere HA protects the cluster
  nodes by applying high restart priority. A vSphere DRS should-run-on-hosts-in-group rule keeps
  the NSX Edge VMs in the first availability zone.

Transport Nodes
  Single availability zone: Each ESXi host acts as a host transport node. Two edge transport
  nodes.
  Multiple availability zones: Each ESXi host acts as a host transport node. Two edge transport
  nodes in the first availability zone.

Transport zones
  Single availability zone: One VLAN transport zone for north-south traffic. Maximum one overlay
  transport zone for overlay segments per NSX instance.
  Multiple availability zones: One VLAN transport zone for north-south traffic. Maximum one
  overlay transport zone for overlay segments per NSX instance.

VLANs and IP subnets allocated to NSX
  Single availability zone: See VLANs and Subnets for VMware Cloud Foundation. For information
  about the networks for virtual infrastructure management, see Distributed Port Group Design.
  Multiple availability zones: See Chapter 5 Physical Network Infrastructure Design for VMware
  Cloud Foundation.

Routing configuration
  Single availability zone: BGP for a single VMware Cloud Foundation instance. In a VMware Cloud
  Foundation deployment with NSX Federation, BGP with ingress and egress traffic to the first
  VMware Cloud Foundation instance during normal operating conditions.
  Multiple availability zones: BGP with path prepend to control ingress traffic and local preference
  to control egress traffic through the first availability zone during normal operating conditions.
  In a VMware Cloud Foundation deployment with NSX Federation, BGP with ingress and egress
  traffic to the first instance during normal operating conditions.

For a description of the NSX logical components in this design, see Table 8-1. NSX Logical
Concepts and Components.

Single Instance - Single Availability Zone


The NSX design for the Single Instance - Single Availability Zone topology consists of the
following components:


Figure 8-1. NSX Logical Design for a Single Instance - Single Availability Zone Topology
(Figure: the three-node NSX Manager cluster behind an internal VIP load balancer, a two-node
NSX Edge cluster, the workload domain vCenter Server, and the ESXi transport nodes in the
workload domain cluster, with user interface and API access and supporting DNS and NTP
services.)

n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.

n NSX Edge nodes in the workload domain that provide advanced services such as load
balancing, and north-south connectivity.

n ESXi hosts in the workload domain that are registered as NSX transport nodes to provide
distributed routing and firewall services to workloads.

Single Instance - Multiple Availability Zones


The NSX design for a Single Instance - Multiple Availability Zone topology consists of the
following components:


Figure 8-2. NSX Logical Design for a Single Instance - Multiple Availability Zone Topology
(Figure: the three-node NSX Manager cluster behind an internal VIP load balancer, a two-node
NSX Edge cluster, the workload domain vCenter Server, and ESXi transport nodes distributed
across availability zone 1 and availability zone 2 of the workload domain cluster.)

n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.

n NSX Edge nodes that provide advanced services such as load balancing, and north-south
connectivity.

n ESXi hosts that are distributed evenly across availability zones in the workload domain and
are registered as NSX transport nodes to provide distributed routing and firewall services to
workloads.

Multiple Instances - Single Availability Zone


The NSX design for a Multiple Instance - Single Availability Zone topology consists of the
following components:


Figure 8-3. NSX Logical Design for a Multiple Instance - Single Availability Zone Topology
(Figure: two VMware Cloud Foundation instances, each with a three-node NSX Local Manager
cluster, a two-node NSX Edge cluster, a workload domain vCenter Server, and ESXi transport
nodes; the NSX Global Manager cluster is active in VCF instance A and standby in VCF instance
B.)

n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.

n NSX Edge nodes that provide advanced services such as load balancing, and north-south
connectivity.

n ESXi hosts in the workload domain that are registered as NSX transport nodes to provide
distributed routing and firewall services to workloads.

n NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances.

You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so
that you can use NSX Federation for global management of networking and security services.

n An additional infrastructure VLAN in each VMware Cloud Foundation instance to carry


instance-to-instance traffic (RTEP).

Multiple Instances - Multiple Availability Zones


The NSX design for a Multiple Instance - Multiple Availability Zone topology consists of the
following components:


Figure 8-4. NSX Logical Design for Multiple Instance - Multiple Availability Zone Topology
(Figure: two VMware Cloud Foundation instances, each with a three-node NSX Local Manager
cluster, an active or standby NSX Global Manager cluster, a two-node NSX Edge cluster, a
workload domain vCenter Server, and ESXi transport nodes distributed across two availability
zones per instance.)

n Unified appliances that have both the NSX Local Manager and NSX Controller roles. They
provide management and control plane capabilities.

n NSX Edge nodes that provide advanced services such as load balancing, and north-south
connectivity.

n ESXi hosts that are distributed evenly across availability zones in the workload domain in a
VMware Cloud Foundation instance, and are registered as NSX transport nodes to provide
distributed routing and firewall services to workloads.

n NSX Global Manager cluster in each of the first two VMware Cloud Foundation instances.

You deploy the NSX Global Manager cluster in each VMware Cloud Foundation instance so
that you can use NSX Federation for global management of networking and security services.

n An additional infrastructure VLAN in each VMware Cloud Foundation instance to carry


instance-to-instance traffic (RTEP).

NSX Manager Design for VMware Cloud Foundation


Following the principles of this design and of each product, you determine the size of, deploy
and configure NSX Manager as part of your VMware Cloud Foundation deployment.


Sizing Considerations for NSX Manager for VMware Cloud Foundation

You select an appropriate NSX Manager appliance size that is suitable for the scale of your
environment.

When you deploy NSX Manager appliances, either with a local or global scope, you select to
deploy the appliance with a size that is suitable for the scale of your environment. The option
that you select determines the number of CPUs and the amount of memory of the appliance. For
detailed sizing according to the overall profile of the VMware Cloud Foundation instance you plan
to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.

Table 8-3. Sizing Considerations for NSX Manager

Extra-Small: Cloud Service Manager only.
Small: Proof of concept.
Medium: Up to 128 ESXi hosts. Default for the management domain vCenter Server.
Large: Up to 1,024 ESXi hosts. Default for the VI workload domain vCenter Server instances.

Note To deploy an NSX Manager appliance with a size different from the default one, you must
use the API.

NSX Manager Design Requirements and Recommendations for VMware Cloud Foundation

Consider the placement requirements for using NSX Manager in VMware Cloud Foundation, and
the best practices for having an NSX Manager cluster operate in an optimal way, such as number
and size of the nodes, and high availability, on a standard or stretched management cluster.

NSX Manager Design Requirements for VMware Cloud Foundation


You must meet the following design requirements in your NSX Manager design for VMware
Cloud Foundation.


Table 8-4. NSX Manager Design Requirements for VMware Cloud Foundation

VCF-NSX-LM-REQD-CFG-001
  Design requirement: Place the appliances of the NSX Manager cluster on the management VLAN
  network in the management domain.
  Justification: Provides direct secure connection to the ESXi hosts and vCenter Server for edge
  node management and distributed network services. Reduces the number of required VLANs
  because a single VLAN can be allocated to both vCenter Server and NSX.
  Implication: None.

VCF-NSX-LM-REQD-CFG-002
  Design requirement: Deploy three NSX Manager nodes in the default vSphere cluster in the
  management domain for configuring and managing the network services for the workload
  domain.
  Justification: Supports high availability of the NSX Manager cluster.
  Implication: You must have sufficient resources in the default cluster of the management domain
  to run three NSX Manager nodes.

NSX Manager Design Recommendations for VMware Cloud Foundation


In your NSX Manager design for VMware Cloud Foundation, you can apply certain best practices
for standard and stretched clusters.

Table 8-5. NSX Manager Design Recommendations for VMware Cloud Foundation

VCF-NSX-LM-RCMD-CFG-001
  Design recommendation: Deploy appropriately sized nodes in the NSX Manager cluster for the
  workload domain.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: The default size for a management domain is Medium, and for VI workload domains
  is Large.

VCF-NSX-LM-RCMD-CFG-002
  Design recommendation: Create a virtual IP (VIP) address for the NSX Manager cluster for the
  workload domain (see the sketch after this table).
  Justification: Provides high availability of the user interface and API of NSX Manager.
  Implication: The VIP address feature provides high availability only. It does not load-balance
  requests across the cluster. When using the VIP address feature, all NSX Manager nodes must
  be deployed on the same Layer 2 network.

VCF-NSX-LM-RCMD-CFG-003
  Design recommendation: Apply VM-VM anti-affinity rules in vSphere Distributed Resource
  Scheduler (vSphere DRS) to the NSX Manager appliances.
  Justification: Keeps the NSX Manager appliances running on different ESXi hosts for high
  availability.
  Implication: You must allocate at least four physical hosts so that the three NSX Manager
  appliances continue running if an ESXi host failure occurs.

VCF-NSX-LM-RCMD-CFG-004
  Design recommendation: In vSphere HA, set the restart priority policy for each NSX Manager
  appliance to high.
  Justification: NSX Manager implements the control plane for virtual network segments. vSphere
  HA restarts the NSX Manager appliances first so that other virtual machines that are being
  powered on or migrated by using vSphere vMotion while the control plane is offline lose
  connectivity only until the control plane quorum is re-established. Setting the restart priority to
  high reserves the highest priority for flexibility for adding services that must be started before
  NSX Manager.
  Implication: If the restart priority for another management appliance is set to highest, the
  connectivity delay for management appliances will be longer.
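
As an illustration of VCF-NSX-LM-RCMD-CFG-002, the following sketch assigns the cluster virtual
IP address through the NSX Manager REST API. The FQDN, credentials, and VIP value are
hypothetical, and the endpoint should be verified against the NSX API reference for your version.

    import requests

    NSX = "https://sfo-m01-nsx01a.example.com"   # one of the NSX Manager nodes (hypothetical)
    AUTH = ("admin", "*****")

    # Assumed NSX Manager API call for setting the cluster virtual IP.
    resp = requests.post(
        f"{NSX}/api/v1/cluster/api-virtual-ip",
        params={"action": "set_virtual_ip", "ip_address": "172.16.11.65"},
        auth=AUTH,
        verify=False)                            # use a trusted CA bundle in production
    resp.raise_for_status()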

Table 8-6. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud
Foundation

VCF-NSX-LM-RCMD-CFG-006
  Design recommendation: Add the NSX Manager appliances to the virtual machine group for the
  first availability zone.
  Justification: Ensures that, by default, the NSX Manager appliances are powered on a host in
  the primary availability zone.
  Implication: Done automatically by VMware Cloud Foundation when configuring a vSAN
  stretched cluster.

NSX Global Manager Design Requirements and Recommendations for VMware Cloud Foundation

For a deployment with multiple VMware Cloud Foundation instances, you use NSX Federation,
which requires the manual deployment of NSX Global Manager nodes in the first two instances.
Consider the placement requirements for using NSX Global Manager in VMware Cloud
Foundation, and the best practices for having an NSX Global Manager cluster operate in an
optimal way, such as number and size of the nodes, and high availability, on a standard or
stretched management cluster.


NSX Global Manager Design Requirements


You must meet the following design requirements in your NSX Global Manager design for
VMware Cloud Foundation.

Table 8-7. NSX Global Manager Design Requirements for VMware Cloud Foundation

VCF-NSX-GM-REQD-CFG-001
  Design requirement: Place the appliances of the NSX Global Manager cluster on the
  management VLAN network in each VMware Cloud Foundation instance.
  Justification: Reduces the number of required VLANs because a single VLAN can be allocated to
  both vCenter Server and NSX.
  Implication: None.

NSX Global Manager Design Recommendations


In your NSX Global Manager design for VMware Cloud Foundation, you can apply certain best
practices for standard and stretched clusters.

Table 8-8. NSX Global Manager Design Recommendations for VMware Cloud Foundation

VCF-NSX-GM-RCMD-CFG-001
  Design recommendation: Deploy three NSX Global Manager nodes for the workload domain to
  support NSX Federation across VMware Cloud Foundation instances.
  Justification: Provides high availability for the NSX Global Manager cluster.
  Implication: You must have sufficient resources in the default cluster of the management domain
  to run three NSX Global Manager nodes.

VCF-NSX-GM-RCMD-CFG-002
  Design recommendation: Deploy appropriately sized nodes in the NSX Global Manager cluster
  for the workload domain.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: The recommended size for a management domain is Medium and for VI workload
  domains is Large.

VCF-NSX-GM-RCMD-CFG-003
  Design recommendation: Create a virtual IP (VIP) address for the NSX Global Manager cluster
  for the workload domain.
  Justification: Provides high availability of the user interface and API of NSX Global Manager.
  Implication: The VIP address feature provides high availability only. It does not load-balance
  requests across the cluster. When using the VIP address feature, all NSX Global Manager nodes
  must be deployed on the same Layer 2 network.

VCF-NSX-GM-RCMD-CFG-004
  Design recommendation: Apply VM-VM anti-affinity rules in vSphere DRS to the NSX Global
  Manager appliances.
  Justification: Keeps the NSX Global Manager appliances running on different ESXi hosts for high
  availability.
  Implication: You must allocate at least four physical hosts so that the three NSX Manager
  appliances continue running if an ESXi host failure occurs.

VCF-NSX-GM-RCMD-CFG-005
  Design recommendation: In vSphere HA, set the restart priority policy for each NSX Global
  Manager appliance to medium.
  Justification: NSX Global Manager implements the global management plane for global
  segments and firewalls. NSX Global Manager is not required for control plane and data plane
  connectivity. Setting the restart priority to medium reserves the high priority for services that
  impact the NSX control or data planes.
  Implication: Management of NSX global components will be unavailable until the NSX Global
  Manager virtual machines restart. The NSX Global Manager cluster is deployed in the
  management domain, where the total number of virtual machines is limited and where it
  competes with other management components for restart priority.

VCF-NSX-GM-RCMD-CFG-006
  Design recommendation: Deploy an additional NSX Global Manager cluster in the second
  VMware Cloud Foundation instance.
  Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud
  Foundation instance if a failure in the first VMware Cloud Foundation instance occurs.
  Implication: Requires additional NSX Global Manager nodes in the second VMware Cloud
  Foundation instance.

VCF-NSX-GM-RCMD-CFG-007
  Design recommendation: Set the NSX Global Manager cluster in the second VMware Cloud
  Foundation instance as standby for the workload domain.
  Justification: Enables recoverability of NSX Global Manager in the second VMware Cloud
  Foundation instance if a failure in the first instance occurs.
  Implication: Must be done manually.

VCF-NSX-GM-RCMD-SEC-001
  Design recommendation: Establish an operational practice to capture and update the thumbprint
  of the NSX Local Manager certificate on NSX Global Manager every time the certificate is
  updated by using SDDC Manager.
  Justification: Ensures secured connectivity between the NSX Manager instances. Each certificate
  has its own unique thumbprint. NSX Global Manager stores the unique thumbprint of the NSX
  Local Manager instances for enhanced security. If an authentication failure between NSX Global
  Manager and NSX Local Manager occurs, objects that are created from NSX Global Manager will
  not be propagated on to the SDN.
  Implication: The administrator must establish and follow an operational practice by using a
  runbook or automated process to ensure that the thumbprint is up-to-date.


Table 8-9. NSX Global Manager Design Recommendations for Stretched Clusters in VMware
Cloud Foundation

VCF-NSX-GM-RCMD-CFG-008
  Design recommendation: Add the NSX Global Manager appliances to the virtual machine group
  for the first availability zone.
  Justification: Ensures that, by default, the NSX Global Manager appliances are powered on a
  host in the primary availability zone.
  Implication: Done automatically by VMware Cloud Foundation when stretching a cluster.

NSX Edge Node Design for VMware Cloud Foundation


Following the principles of this design and of each product, you deploy, configure, and connect
the NSX Edge nodes to support networks in the NSX instances in your VMware Cloud Foundation
deployment.

Deployment Model for the NSX Edge Nodes for VMware Cloud Foundation
For NSX Edge nodes, you determine the form factor, number of nodes and placement according
to the requirements for network services in a VMware Cloud Foundation workload domain.

An NSX Edge node is an appliance that provides centralized networking services which cannot
be distributed to hypervisors, such as load balancing, NAT, VPN, and physical network uplinks.
Some services, such as Tier-0 gateways, are limited to a single instance per NSX Edge node.
However, most services can coexist in these nodes.

NSX Edge nodes are grouped in one or more edge clusters, representing a pool of capacity for
NSX services.

An NSX Edge node can be deployed as a virtual appliance, or installed on bare-metal hardware.
The edge node on bare-metal hardware can have better performance capabilities at the expense
of more difficult deployment and limited deployment topology use cases. For details on the
trade-offs of using virtual or bare-metal NSX Edges, see the NSX documentation.


Table 8-10. NSX Edge Deployment Model Considerations

NSX Edge virtual appliance deployed by using SDDC Manager
  Benefits: Deployment and life cycle management by using SDDC Manager workflows that call
  NSX Manager. Automated password management by using SDDC Manager. Benefits from
  vSphere HA recovery. Can be used across availability zones. Easy to scale up by modifying the
  specification of the virtual appliance.
  Drawbacks: Might not provide best performance in individual customer scenarios.

NSX Edge virtual appliance deployed by using NSX Manager
  Benefits: Benefits from vSphere HA recovery. Can be used across availability zones. Easy to
  scale up by modifying the specification of the virtual appliance.
  Drawbacks: Might not provide best performance in individual customer scenarios. Manually
  deployed by using NSX Manager. Manual password management by using NSX Manager. Cannot
  be used to support Application Virtual Networks (AVNs) in the management domain.

Bare-metal NSX Edge appliance
  Benefits: Might provide better performance in individual customer scenarios.
  Drawbacks: Has hardware compatibility requirements. Requires individual hardware life cycle
  management and monitoring of failures, firmware, and drivers. Manual password management.
  Must be manually deployed and connected to the environment. Requires manual recovery after
  hardware failure. Requires deploying a bare-metal NSX Edge appliance in each availability zone
  for network failover. Deploying a bare-metal edge in each availability zone requires considering
  asymmetric routing. Requires edge fault domains if more than one edge is deployed in each
  availability zone for Active/Standby Tier-0/Tier-1 gateways. Requires redeployment to a new
  host to achieve scale-up. Cannot be used to support AVNs in the management domain.

Sizing Considerations for NSX Edges for VMware Cloud Foundation


When you deploy NSX Edge appliances, you select a size according to the scale of your
environment. The option that you select determines the number of CPUs and the amount of
memory of the appliance.

For detailed sizing according to the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.

Table 8-11. Sizing Considerations for NSX Edges

NSX Edge Appliance Size Scale

Small Proof of concept

Medium Suitable when only Layer 2 through Layer 4 features such


as NAT, routing, Layer 4 firewall, Layer 4 load balancer
are required and the total throughput requirement is less
than 2 Gbps.

Large Suitable when only Layer 2 through Layer 4 features such


as NAT, routing, Layer 4 firewall, Layer 4 load balancer
are required and the total throughput is 2 ~ 10 Gbps. It
is also suitable when Layer 7 load balancer, for example,
SSL offload is required.

Extra Large Suitable when the total throughput required is multiple


Gbps for Layer 7 load balancer and VPN.

Network Design for the NSX Edge Nodes for VMware Cloud Foundation
In each VMware Cloud Foundation instance, you implement an NSX Edge configuration with a
single N-VDS. You connect the uplink network interfaces of the edge appliance to VLAN trunk
port groups that are connected to particular physical NICs on the host.

NSX Edge Network Configuration


The NSX Edge node contains a virtual switch, called an N-VDS, that is managed by NSX. This
internal N-VDS is used to define traffic flow through the interfaces of the edge node. An N-VDS
can be connected to one or more interfaces. Interfaces cannot be shared between N-VDS
instances.


If you plan to deploy multiple VMware Cloud Foundation instances, apply the same network
design to the NSX Edge cluster in the second and other additional VMware Cloud Foundation
instances.

Figure 8-5. NSX Edge Network Configuration
(Figure: the NSX Edge node virtual appliance on an ESXi host, with eth0 connected to the
management network and fp-eth0 and fp-eth1 connected to the two edge uplink VLAN trunk
networks on the vSphere Distributed Switch with NSX; fp-eth2 is unused. The internal edge
N-VDS carries the two uplink VLAN interfaces, the TEP and RTEP interfaces, and the overlay
segments, and the host physical NICs vmnic0 and vmnic1 connect to the top of rack switches.)

Uplink Policy Design for the NSX Edge Nodes for VMware Cloud Foundation
A transport node can participate in an overlay and VLAN network. Uplink profiles define policies
for the links from the NSX Edge transport nodes to top of rack switches. Uplink profiles are
containers for the properties or capabilities for the network adapters. Uplink profiles are applied
to the N-VDS of the edge node.


Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.

Teaming can be configured by using the default teaming policy or a user-defined named teaming
policy. You can use named teaming policies to pin traffic segments to designated edge uplinks.

NSX Edge Node Requirements and Recommendations for VMware Cloud Foundation
Consider the network, N-VDS configuration and uplink policy requirements for using NSX
Edge nodes in VMware Cloud Foundation, and the best practices for having NSX Edge nodes
operate in an optimal way, such as number and size of the nodes, high availability, and N-VDS
architecture, on a standard or stretched cluster.

NSX Edge Design Requirements


You must meet the following design requirements for standard and stretched clusters in your
NSX Edge design for VMware Cloud Foundation.

Table 8-12. NSX Edge Design Requirements for VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-001
  Design requirement: Connect the management interface of each NSX Edge node to the
  management VLAN.
  Justification: Provides connection from the NSX Manager cluster to the NSX Edge.
  Implication: None.

VCF-NSX-EDGE-REQD-CFG-002
  Design requirement: Connect the fp-eth0 interface of each NSX Edge appliance to a VLAN trunk
  port group pinned to physical NIC 0 of the host, with the ability to fail over to physical NIC 1.
  Connect the fp-eth1 interface of each NSX Edge appliance to a VLAN trunk port group pinned to
  physical NIC 1 of the host, with the ability to fail over to physical NIC 0. Leave the fp-eth2
  interface of each NSX Edge appliance unused.
  Justification: Because VLAN trunk port groups pass traffic for all VLANs, VLAN tagging can
  occur in the NSX Edge node itself for easy post-deployment configuration. By using two
  separate VLAN trunk port groups, you can direct traffic from the edge node to a particular host
  network interface and top of rack switch as needed. In the event of failure of the top of rack
  switch, the VLAN trunk port group fails over to the other physical NIC, ensuring that both
  fp-eth0 and fp-eth1 remain available.
  Implication: None.

VCF-NSX-EDGE-REQD-CFG-003
  Design requirement: Use a dedicated VLAN for edge overlay that is different from the host
  overlay VLAN.
  Justification: A dedicated edge overlay network provides support for edge mobility in support of
  advanced deployments such as multiple availability zones or multi-rack clusters.
  Implication: You must have routing between the VLANs for edge overlay and host overlay. You
  must allocate another VLAN in the data center infrastructure for edge overlay.

VCF-NSX-EDGE-REQD-CFG-004
  Design requirement: Create one uplink profile for the edge nodes with three teaming policies
  (see the sketch after this table): a default teaming policy of load balance source with both
  active uplinks uplink1 and uplink2, a named teaming policy of failover order with a single active
  uplink uplink1 without standby uplinks, and a named teaming policy of failover order with a
  single active uplink uplink2 without standby uplinks.
  Justification: An NSX Edge node that uses a single N-VDS can have only one uplink profile. For
  increased resiliency and performance, supports the concurrent use of both edge uplinks through
  both physical NICs on the ESXi hosts. The default teaming policy increases overlay performance
  and availability by using multiple TEPs and balancing of overlay traffic. By using named teaming
  policies, you can connect an edge uplink to a specific host uplink and from there to a specific
  top of rack switch in the data center. Enables ECMP because the NSX Edge nodes can uplink to
  the physical network over two different VLANs.
  Implication: None.
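
To make VCF-NSX-EDGE-REQD-CFG-004 more concrete, the following sketch outlines an uplink
profile payload with the default load balance source policy and two named failover-order
policies, posted to the NSX Manager API. The profile name, transport VLAN, and uplink names
are assumptions, NSX and AUTH are the connection settings from the earlier NSX Manager
sketch, and the payload should be checked against the NSX API reference for your version.

    import requests

    profile = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "edge-uplink-profile",          # hypothetical name
        "transport_vlan": 2713,                         # hypothetical edge overlay VLAN
        "teaming": {                                    # default policy: load balance source
            "policy": "LOADBALANCE_SRCID",
            "active_list": [
                {"uplink_name": "uplink1", "uplink_type": "PNIC"},
                {"uplink_name": "uplink2", "uplink_type": "PNIC"},
            ],
        },
        "named_teamings": [
            {"name": "uplink1-only", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink1", "uplink_type": "PNIC"}]},
            {"name": "uplink2-only", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink2", "uplink_type": "PNIC"}]},
        ],
    }

    requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=profile, auth=AUTH, verify=False).raise_for_status()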


Table 8-13. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NSX-EDGE-REQD-CFG-005
  Design requirement: Allocate a separate VLAN for edge RTEP overlay that is different from the
  edge overlay VLAN.
  Justification: The RTEP network must be on a VLAN that is different from the edge overlay
  VLAN. This is an NSX requirement that provides support for configuring different MTU size per
  network.
  Implication: You must allocate another VLAN in the data center infrastructure.

NSX Edge Design Recommendations


In your NSX Edge design for VMware Cloud Foundation, you can apply certain best practices for
standard and stretched clusters.

Table 8-14. NSX Edge Design Recommendations for VMware Cloud Foundation

VCF-NSX-EDGE-RCMD-CFG-001
  Design recommendation: Use appropriately sized NSX Edge virtual appliances.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: You must provide sufficient compute resources to support the chosen appliance
  size.

VCF-NSX-EDGE-RCMD-CFG-002
  Design recommendation: Deploy the NSX Edge virtual appliances to the default vSphere cluster
  of the workload domain, sharing the cluster between the workloads and the edge appliances.
  Justification: Simplifies the configuration and minimizes the number of ESXi hosts required for
  initial deployment. Keeps the management components located in the same domain and cluster,
  isolated from customer workloads.
  Implication: Workloads and NSX Edges share the same compute resources.

VCF-NSX-EDGE-RCMD-CFG-003
  Design recommendation: Deploy two NSX Edge appliances in an edge cluster in the default
  vSphere cluster of the workload domain.
  Justification: Creates the NSX Edge cluster for satisfying the requirements for availability and
  scale.
  Implication: None.

VCF-NSX-EDGE-RCMD-CFG-004
  Design recommendation: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual
  machines of the NSX Edge cluster.
  Justification: Keeps the NSX Edge nodes running on different ESXi hosts for high availability.
  Implication: None.

VCF-NSX-EDGE-RCMD-CFG-005
  Design recommendation: In vSphere HA, set the restart priority policy for each NSX Edge
  appliance to high.
  Justification: The NSX Edge nodes are part of the north-south data path for overlay segments.
  vSphere HA restarts the NSX Edge appliances first so that other virtual machines that are being
  powered on or migrated by using vSphere vMotion while the edge nodes are offline lose
  connectivity only for a short time. Setting the restart priority to high reserves the highest priority
  for future needs.
  Implication: If the restart priority for another management appliance is set to highest, the
  connectivity delays for management appliances will be longer.

VCF-NSX-EDGE-RCMD-CFG-006
  Design recommendation: Configure all edge nodes as transport nodes.
  Justification: Enables the participation of edge nodes in the overlay network for delivery of
  services to the SDDC management components such as routing and load balancing.
  Implication: None.

VCF-NSX-EDGE-RCMD-CFG-007
  Design recommendation: Create an NSX Edge cluster with the default Bidirectional Forwarding
  Detection (BFD) configuration between the NSX Edge nodes in the cluster.
  Justification: Satisfies the availability requirements by default. Edge nodes must remain available
  to create services such as NAT, routing to physical networks, and load balancing.
  Implication: None.

VCF-NSX-EDGE-RCMD-CFG-008
  Design recommendation: Use a single N-VDS in the NSX Edge nodes.
  Justification: Simplifies deployment of the edge nodes. The same N-VDS switch design can be
  used regardless of edge form factor. Supports multiple TEP interfaces in the edge node. vSphere
  Distributed Switch is not supported in the edge node.
  Implication: None.


Table 8-15. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud
Foundation

VCF-NSX-EDGE-RCMD-CFG-009
  Design recommendation: Add the NSX Edge appliances to the virtual machine group for the first
  availability zone.
  Justification: Ensures that, by default, the NSX Edge appliances are powered on a host in the
  primary availability zone.
  Implication: None.

Routing Design for VMware Cloud Foundation


NSX Edge clusters in VMware Cloud Foundation provide pools of capacity for service router
functions in NSX.

Routing Options for VMware Cloud Foundation


BGP routing is the routing option used for VMware Cloud Foundation.

BGP Routing Design for VMware Cloud Foundation


Determine the number, networking, and high-availability configuration of the Tier-0 and
Tier-1 gateways in NSX for VMware Cloud Foundation workload domains. Identify the BGP
configuration for a single availability zone and two availability zones in the environment.

Table 8-16. Routing Direction Definitions

Routing Direction Description

North-south Traffic leaving or entering the NSX domain, for example,


a virtual machine on an overlay network communicating
with an end-user device on the corporate network.

East-west Traffic that remains in the NSX domain, for example,


two virtual machines on the same or different segments
communicating with each other.

North-South Routing
The routing design considers different levels of routing in the environment, such as number and
type of gateways in NSX, dynamic routing protocol, and others.

The following models for north-south traffic exist:


Table 8-17. Considerations for the Operating Model for North-South Service Routers

Active-Active
Description:
n Bandwidth independent of the Tier-0 gateway failover model.
n Configured in active-active equal-cost multi-path (ECMP) mode.
n Failover takes approximately 2 seconds for virtual edges and is sub-second for bare-metal edges.
Benefits:
n The active-active mode can support up to 8 NSX Edge nodes per northbound service router (SR).
n Availability can be as high as N+7, with up to 8 active-active NSX Edge nodes.
n Supports ECMP north-south routing on all nodes in the NSX Edge cluster.
Drawbacks:
n Cannot provide stateful services, such as NAT.

Active-Standby
Description:
n Bandwidth independent of the Tier-0 gateway failover model.
n Failover takes approximately 2 seconds for virtual edges and is sub-second for bare-metal edges.
Benefits:
n Can provide stateful services such as NAT.
Drawbacks:
n The active-standby mode is limited to a single node.
n Availability limited to N+1.
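
This design uses the active-active model for the Tier-0 gateway (see VCF-NSX-BGP-RCMD-CFG-001 later in this chapter). As an illustration only, the following Python sketch defines an active-active Tier-0 gateway through the NSX Policy API; the NSX Manager FQDN, credentials, gateway name, and payload fields are assumptions to verify against the NSX API reference for your version, and the gateway must still be associated with an NSX Edge cluster and uplink interfaces through its locale services.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                  # replace with real credentials

# Minimal active-active Tier-0 definition; stateful services such as NAT would
# require ACTIVE_STANDBY instead.
tier0_payload = {
    "display_name": "vcf-t0-gw01",            # example name, not mandated by this design
    "ha_mode": "ACTIVE_ACTIVE",
    "failover_mode": "NON_PREEMPTIVE",
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/vcf-t0-gw01",
    json=tier0_payload,
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()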

BGP North-South Routing for a Single or Multiple Availability Zones


For multiple availability zones, plan for failover of the NSX Edge nodes by configuring BGP so
that traffic from the top of rack switches is directed to the first availability zone unless a failure in
this zone occurs.


Figure 8-6. BGP North-South Routing for VMware Cloud Foundation Instances with a Single Availability Zone
[Figure: In VCF Instance A, the Tier-0 gateway SRs on NSX Edge Node 1 and Node 2 peer over eBGP with ECMP (BFD optional) with the ToR switches in the Data Center A BGP ASN across Uplink VLAN 1 and Uplink VLAN 2 and receive the default route. Within the SDDC BGP ASN, the Tier-0 and Tier-1 gateway DR components span the edge nodes and the ESXi transport nodes (Hosts 1-4).]


Figure 8-7. BGP North-South Routing for VMware Cloud Foundation Instances with Multiple Availability Zones
[Figure: The NSX Edge nodes in Availability Zone 1 host the Tier-0 gateway SRs and peer over eBGP with ECMP (BFD optional) with the ToR switches in both availability zones across the stretched Uplink VLAN 1 and Uplink VLAN 2. The peering with the Data Center A - AZ 2 BGP ASN applies local preference and AS-path prepending, so the default route learned from that zone has a lower local preference. The Tier-0 and Tier-1 gateway DR components span the edge nodes and the ESXi transport nodes in both availability zones within the SDDC BGP ASN.]

BGP North-South Routing Design for NSX Federation


In a routing design for an environment with VMware Cloud Foundation instances that use NSX
Federation, you identify the instances that an SDN network must span and at which physical
location ingress and egress traffic should occur.

Local egress allows traffic to leave the SDN at any location that the network spans. Using local egress would also require controlling local ingress to prevent asymmetrical routing. This design does not use local egress. Instead, it uses a preferred and a failover VMware Cloud Foundation instance for all networks.


Figure 8-8. BGP North-South Routing for VMware Cloud Foundation Instances with NSX Federation
[Figure: The NSX Global Manager pair and the NSX Local Manager in each instance form the management and control plane. In the data plane, a Tier-0 gateway (active-active, primary/primary) connects three active-standby Tier-1 gateways: one primary in VCF Instance A for segments local to that instance, one primary/secondary for multi-instance segments with egress through a single location, and one primary in VCF Instance B for segments local to that instance.]

Tier-0 Gateways with NSX Federation


In NSX Federation, a Tier-0 gateway can span multiple VMware Cloud Foundation instances.

Each VMware Cloud Foundation instance that is in the scope of a Tier-0 gateway can be configured as primary or secondary. A primary instance passes north-south traffic for the SDN services connected to the gateway, such as Tier-0 logical segments or Tier-1 gateways. A secondary instance routes traffic locally but does not egress traffic outside the SDN or advertise networks to the data center.


When deploying an additional VMware Cloud Foundation instance, the Tier-0 gateway in the first
instance is extended to the new instance.

In this design, the Tier-0 gateway in each VMware Cloud Foundation instance is configured as
primary. Although the Tier-0 gateway technically supports local-egress, the design does not
recommend the use of local-egress. Ingress and egress traffic is controlled at the Tier-1 gateway
level.

Each VMware Cloud Foundation instance has its own NSX Edge cluster with associated uplink
VLANs for north-south traffic flow for that instance. The Tier-0 gateway in each instance peers
with the top of rack switches over eBGP.

Figure 8-9. BGP Peering to Top of Rack Switches for VMware Cloud Foundation Instances with NSX Federation
[Figure: In each data center (BGP ASN A and BGP ASN B), the local NSX Edge cluster hosts the active-active Tier-0 SRs, which use inter-SR iBGP between edge nodes and peer over eBGP with the local ToR switches. Inbound, each location learns a default gateway and its local networks; outbound, no segments are attached to the Tier-0, and the Tier-1 gateways advertise the global and local segments that are primary at that location. The edge clusters are connected over an RTEP tunnel across the data center network, and the SDN (SDDC BGP ASN) contains the stretched Tier-0 gateway (primary/primary) and the local and multi-instance Tier-1 gateways with their segments.]


Tier-1 Gateways with NSX Federation


A Tier-1 gateway can span several VMware Cloud Foundation instances. As with a Tier-0
gateway, you can configure an instance's location as primary or secondary for the Tier-1
gateway. The gateway then passes ingress and egress traffic for the logical segments connected
to it.

Any logical segments connected to the Tier-1 gateway follow the span of the Tier-1 gateway. If
the Tier-1 gateway spans several VMware Cloud Foundation instances, any segments connected
to that gateway become available in both instances.

Using Tier-1 gateways enables more granular control of logical segments in the first and second VMware Cloud Foundation instances. You use three Tier-1 gateways: one in each VMware Cloud Foundation instance for segments that are local to that instance, and one for segments that span the two instances.

Table 8-18. Location Configuration of the Tier-1 Gateways for Multiple VMware Cloud Foundation Instances

Tier-1 gateway connected to both VMware Cloud Foundation instances
First VMware Cloud Foundation instance: Primary
Second VMware Cloud Foundation instance: Secondary
Ingress and egress traffic: First VMware Cloud Foundation instance, second VMware Cloud Foundation instance

Tier-1 gateway local to the first VMware Cloud Foundation instance
First VMware Cloud Foundation instance: Primary
Second VMware Cloud Foundation instance: -
Ingress and egress traffic: First VMware Cloud Foundation instance only

Tier-1 gateway local to the second VMware Cloud Foundation instance
First VMware Cloud Foundation instance: -
Second VMware Cloud Foundation instance: Primary
Ingress and egress traffic: Second VMware Cloud Foundation instance only

The Tier-1 gateway advertises its networks to the connected local-instance unit of the Tier-0
gateway. In the case of primary-secondary location configuration, the Tier-1 gateway advertises
its networks only to the Tier-0 gateway unit in the location where the Tier-1 gateway is primary.
The Tier-0 gateway unit then re-advertises those networks to the data center in the sites
where that Tier-1 gateway is primary. During failover of the components in the first VMware
Cloud Foundation instance, an administrator must manually set the Tier-1 gateway in the second
VMware Cloud Foundation instance as primary. Then, networks become advertised through the
Tier-1 gateway unit in the second instance.
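
This manual failover step can be scripted against the NSX Global Manager. The following Python sketch only illustrates the general idea of changing the primary location of the stretched Tier-1 gateway; the Global Manager URL, gateway ID, site path, and the intersite_config field names are assumptions that must be verified against the NSX Federation API reference for your version.

import requests

GM_URL = "https://nsx-gm.example.com"          # hypothetical active NSX Global Manager
AUTH = ("admin", "password")                   # replace with real credentials
TIER1_ID = "cross-instance-t1"                 # example ID of the stretched Tier-1 gateway

# Assumed payload: point the gateway's primary location at the second instance's site.
payload = {
    "intersite_config": {
        "primary_site_path": "/global-infra/sites/vcf-instance-b"   # hypothetical site path
    }
}

resp = requests.patch(
    f"{GM_URL}/global-manager/api/v1/global-infra/tier-1s/{TIER1_ID}",
    json=payload,
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()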

In a Multiple Instance-Multiple Availability Zone topology, the same Tier-0 and Tier-1 gateway
architecture applies. The ESXi transport nodes from the second availability zone are also
attached to the Tier-1 gateway as per the Figure 8-7. BGP North-South Routing for VMware
Cloud Foundation Instances with Multiple Availability Zones design.


BGP Routing Design Requirements and Recommendations for


VMware Cloud Foundation
Consider the requirements for the configuration of Tier-0 and Tier-1 gateways for implementing BGP routing in VMware Cloud Foundation, and the best practices for optimal traffic routing on a standard or stretched cluster in an environment with a single or multiple VMware Cloud Foundation instances.

BGP Routing
The BGP routing design has the following characteristics:

n Enables dynamic routing by using NSX.

n Offers increased scale and flexibility.

n Is a proven protocol that is designed for peering between networks under independent
administrative control - data center networks and the NSX SDN.

Note These design recommendations do not include BFD. However, if faster convergence than
BGP timers is required, you must enable BFD on the physical network and also on the NSX Tier-0
gateway.
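
If BFD is required, it is enabled per BGP neighbor on the Tier-0 gateway in addition to the physical network. The following Python sketch shows how this might look through the NSX Policy API; the NSX Manager FQDN, object IDs, and timer values are illustrative assumptions and should be validated against the NSX API reference and the capabilities of your physical network.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")                   # replace with real credentials

# Assumed object IDs for the Tier-0 gateway, its locale services, and an existing BGP neighbor.
T0_ID, LOCALE_ID, NEIGHBOR_ID = "vcf-t0-gw01", "default", "tor-a"

# Enable BFD on the neighbor; interval is in milliseconds, multiple is the dead multiplier.
bfd_payload = {
    "bfd": {
        "enabled": True,
        "interval": 500,
        "multiple": 3,
    }
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/{T0_ID}"
    f"/locale-services/{LOCALE_ID}/bgp/neighbors/{NEIGHBOR_ID}",
    json=bfd_payload,
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()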

BGP Routing Design Requirements


You must meet the following design requirements for standard and stretched clusters in your
routing design for a single VMware Cloud Foundation instance. For NSX Federation, additional
requirements exist.

Table 8-19. BGP Routing Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- To enable ECMP between Supports multiple equal- Additional VLANs are
CFG-001 the Tier-0 gateway and cost routes on the Tier-0 required.
the Layer 3 devices gateway and provides
(ToR switches or upstream more resiliency and better
devices), create two bandwidth use in the
VLANs . network.
The ToR switches or
upstream Layer 3 devices
have an SVI on one of the
two VLANS and each Edge
node in the cluster has an
interface on each VLAN.

VCF-NSX-BGP-REQD- Assign a named teaming Pins the VLAN traffic on None.


CFG-002 policy to the VLAN each segment to its target
segments to the Layer 3 edge node interface. From
device pair. there the traffic is directed
to the host physical NIC
that is connected to the
target top of rack switch.



VCF-NSX-BGP-REQD- Create a VLAN transport Enables the configuration Additional VLAN transport
CFG-003 zone for edge uplink traffic. of VLAN segments on the zones are required if
N-VDS in the edge nodes. the edge nodes are not
connected to the same top
of rack switch pair.

VCF-NSX-BGP-REQD- Deploy a Tier-1 gateway Creates a two-tier routing A Tier-1 gateway can only
CFG-004 and connect it to the Tier-0 architecture. be connected to a single
gateway. Abstracts the NSX logical Tier-0 gateway.
components which interact In cases where multiple
with the physical data Tier-0 gateways are
center from the logical required, you must create
components which provide multiple Tier-1 gateways.
SDN services.

VCF-NSX-BGP-REQD- Deploy a Tier-1 gateway to Enables stateful services, None.


CFG-005 the NSX Edge cluster. such as load balancers
and NAT, for SDDC
management components.
Because a Tier-1 gateway
always works in active-
standby mode, the
gateway supports stateful
services.


Table 8-20. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Extend the uplink VLANs Because the NSX Edge You must configure a
CFG-006 to the top of rack switches nodes will fail over stretched Layer 2 network
so that the VLANs are between the availability between the availability
stretched between both zones, ensures uplink zones by using physical
availability zones. connectivity to the top network infrastructure.
of rack switches in
both availability zones
regardless of the zone
the NSX Edge nodes are
presently in.

VCF-NSX-BGP-REQD- Provide this SVI Enables the communication You must configure a
CFG-007 configuration on the top of of the NSX Edge nodes to stretched Layer 2 network
the rack switches. the top of rack switches in between the availability
n In the second both availability zones over zones by using the physical
availability zone, the same uplink VLANs. network infrastructure.
configure the top
of rack switches or
upstream Layer 3
devices with an SVI on
each of the two uplink
VLANs.
n Make the top of
rack switch SVI in
both availability zones
part of a common
stretched Layer 2
network between the
availability zones.

VCF-NSX-BGP-REQD- Provide this VLAN Supports multiple equal- n Extra VLANs are
CFG-008 configuration: cost routes on the Tier-0 required.
n Use two VLANs to gateway, and provides n Requires stretching
enable ECMP between more resiliency and better uplink VLANs between
the Tier-0 gateway and bandwidth use in the availability zones
the Layer 3 devices network.
(top of rack switches or
Leaf switches).
n The ToR switches
or upstream Layer 3
devices have an SVI to
one of the two VLANS
and each NSX Edge
node has an interface
to each VLAN.



VCF-NSX-BGP-REQD- Create an IP prefix list Used in a route map to You must manually create
CFG-009 that permits access to prepend a path to one or an IP prefix list that is
route advertisement by any more autonomous system identical to the default one.
network instead of using (AS-path prepend) for BGP
the default IP prefix list. neighbors in the second
availability zone.

VCF-NSX-BGP-REQD- Create a route map-out n Used for configuring You must manually create
CFG-010 that contains the custom neighbor relationships the route map.
IP prefix list and an AS- with the Layer 3 The two NSX Edge nodes
path prepend value set to devices in the second will route north-south
the Tier-0 local AS added availability zone. traffic through the second
twice. n Ensures that all ingress availability zone only if
traffic passes through the connection to their
the first availability BGP neighbors in the first
zone. availability zone is lost, for
example, if a failure of the
top of the rack switch pair
or in the availability zone
occurs.

VCF-NSX-BGP-REQD- Create an IP prefix list that Used in a route map to You must manually create
CFG-011 permits access to route configure local-reference an IP prefix list that is
advertisement by network on learned default-route identical to the default one.
0.0.0.0/0 instead of using for BGP neighbors in the
the default IP prefix list. second availability zone.



VCF-NSX-BGP-REQD- Apply a route map-in that n Used for configuring You must manually create
CFG-012 contains the IP prefix list for neighbor relationships the route map.
the default route 0.0.0.0/0 with the Layer 3 The two NSX Edge nodes
and assign a lower local- devices in the second will route north-south
preference , for example, availability zone. traffic through the second
80, to the learned default n Ensures that all egress availability zone only if
route and a lower local- traffic passes through the connection to their
preference, for example, 90, the first availability BGP neighbors in the first
to any learned routes. zone. availability zone is lost, for
example, if a failure of the
top of the rack switch pair
or in the availability zone
occurs.

VCF-NSX-BGP-REQD- Configure the neighbors of Makes the path in and The two NSX Edge nodes
CFG-013 the second availability zone out of the second will route north-south
to use the route maps as In availability zone less traffic through the second
and Out filters respectively. preferred because the AS availability zone only if
path is longer and the the connection to their
local preference is lower. BGP neighbors in the first
As a result, all traffic passes availability zone is lost, for
through the first zone. example, if a failure of the
top of the rack switch pair
or in the availability zone
occurs.
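
The prefix lists and route maps described in VCF-NSX-BGP-REQD-CFG-009 through VCF-NSX-BGP-REQD-CFG-013 can be created through the NSX Policy API. The following Python sketch outlines the outbound route map with AS-path prepending for the BGP neighbors in the second availability zone; the object IDs, the local ASN, and the exact payload field names are illustrative assumptions to verify against the NSX API reference.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")
T0 = "vcf-t0-gw01"                             # example Tier-0 gateway ID
BASE = f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/{T0}"
LOCAL_ASN = "65005"                            # example SDDC BGP ASN

session = requests.Session()
session.auth = AUTH
session.verify = False                         # lab only; validate certificates in production

# 1. Prefix list that permits any network (equivalent to the default list, created manually).
session.patch(f"{BASE}/prefix-lists/any-network", json={
    "prefixes": [{"network": "ANY", "action": "PERMIT"}],
}).raise_for_status()

# 2. Outbound route map that prepends the local AS twice, per VCF-NSX-BGP-REQD-CFG-010.
session.patch(f"{BASE}/route-maps/az2-out-prepend", json={
    "entries": [{
        "action": "PERMIT",
        "prefix_list_matches": [f"/infra/tier-0s/{T0}/prefix-lists/any-network"],
        "set": {"as_path_prepend": f"{LOCAL_ASN} {LOCAL_ASN}"},
    }],
}).raise_for_status()

# 3. Attach the route map as the Out filter on a second-availability-zone neighbor
#    (a matching In filter lowers local preference on learned routes, per REQD-CFG-012).
session.patch(f"{BASE}/locale-services/default/bgp/neighbors/az2-tor-a", json={
    "route_filtering": [{
        "address_family": "IPV4",
        "out_route_filters": [f"/infra/tier-0s/{T0}/route-maps/az2-out-prepend"],
    }],
}).raise_for_status()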


Table 8-21. BGP Routing Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Extend the Tier-0 gateway n Supports ECMP north- The Tier-0 gateway
CFG-014 to the second VMware south routing on all deployed in the second
Cloud Foundation instance. nodes in the NSX Edge instance is removed.
cluster.
n Enables support for
cross-instance Tier-1
gateways and cross-
instance network
segments.

VCF-NSX-BGP-REQD- Set the Tier-0 gateway n In NSX Federation, None.


CFG-015 as primary for all a Tier-0 gateway
VMware Cloud Foundation lets egress traffic
instances. from connected Tier-1
gateways only in its
primary locations.
n Local ingress and
egress traffic
is controlled
independently at
the Tier-1 level.
No segments are
provisioned directly to
the Tier-0 gateway.
n A mixture of network
spans (local to
a VMware Cloud
Foundation instance
or spanning multiple
instances) is enabled
without requiring
additional Tier-0
gateways and hence
edge nodes.
n If a failure in a VMware
Cloud Foundation
instance occurs,
the local-instance
networking in the
other instance remains
available without
manual intervention.



VCF-NSX-BGP-REQD- From the global Tier-0 n Enables the learning None.


CFG-016 gateway, establish BGP and advertising of
neighbor peering to the routes in the
ToR switches connected to second VMware Cloud
the second VMware Cloud Foundation instance.
Foundation instance. n Facilitates a potential
automated failover
of networks from
the first to the
second VMware Cloud
Foundation instance.

VCF-NSX-BGP-REQD- Use a stretched Tier-1 n Enables network span None.


CFG-017 gateway and connect it between the VMware
to the Tier-0 gateway for Cloud Foundation
cross-instance networking. instances because NSX
network segments
follow the span of
the gateway they are
attached to.
n Creates a two-tier
routing architecture.



VCF-NSX-BGP-REQD- Assign the NSX Edge n Enables cross-instance You must manually fail over
CFG-018 cluster in each VMware network span between and fail back the cross-
Cloud Foundation instance the first and instance network from
to the stretched second VMware Cloud the standby NSX Global
Tier-1 gateway. Set Foundation instances. Manager.
the first VMware Cloud n Enables deterministic
Foundation instance as ingress and egress
primary and the second traffic for the cross-
instance as secondary. instance network.
n If a VMware Cloud
Foundation instance
failure occurs, enables
deterministic failover of
the Tier-1 traffic flow.
n During the
recovery of the
inaccessible VMware
Cloud Foundation
instance, enables
deterministic failback of
the Tier-1 traffic flow,
preventing unintended
asymmetrical routing.
n Eliminates the need
to use BGP attributes
in the first and
second VMware Cloud
Foundation instances
to influence location
preference and failover.

VCF-NSX-BGP-REQD- Assign the NSX Edge n Enables instance- You can use the
CFG-019 cluster in each VMware specific networks to be service router that is
Cloud Foundation instance isolated to their specific created for the Tier-1
to the local Tier-1 gateway instances. gateway for networking
for that VMware Cloud n Enables deterministic services. However, such
Foundation instance. flow of ingress and configuration is not
egress traffic for required for network
the instance-specific connectivity.
networks.

VCF-NSX-BGP-REQD- Set each local Tier-1 Prevents the need to use None.
CFG-020 gateway only as primary in BGP attributes in primary
that instance. Avoid setting and secondary instances
the gateway as secondary to influence the instance
in the other instances. ingress-egress preference.


BGP Routing Design Recommendations


In your routing design for a single VMware Cloud Foundation instance, you can apply certain best
practices for standard and stretched clusters. For NSX Federation, additional recommendations
are available.

Table 8-22. BGP Routing Design Recommendations for VMware Cloud Foundation

VCF-NSX-BGP-RCMD-CFG-001
Design Recommendation: Deploy an active-active Tier-0 gateway.
Justification: Supports ECMP north-south routing on all Edge nodes in the NSX Edge cluster.
Implication: Active-active Tier-0 gateways cannot provide stateful services such as NAT.

VCF-NSX-BGP-RCMD-CFG-002
Design Recommendation: Configure the BGP Keep Alive Timer to 4 and Hold Down Timer to 12 or lower between the top of rack switches and the Tier-0 gateway.
Justification: Provides a balance between failure detection between the top of rack switches and the Tier-0 gateway, and overburdening the top of rack switches with keep-alive traffic.
Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down. These timers must be aligned with the data center fabric design of your organization.

VCF-NSX-BGP-RCMD-CFG-003
Design Recommendation: Do not enable Graceful Restart between BGP neighbors.
Justification: Avoids loss of traffic. On the Tier-0 gateway, BGP peers from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate Tier-0 gateway. As a result, BFD-based convergence is delayed.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-004
Design Recommendation: Enable helper mode for Graceful Restart mode between BGP neighbors.
Justification: Avoids loss of traffic. During a router restart, helper mode works with the graceful restart capability of upstream routers to maintain the forwarding table, which in turn continues to forward packets to the restarting neighbor even after the BGP timers have expired, avoiding loss of traffic.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-005
Design Recommendation: Enable Inter-SR iBGP routing.
Justification: In the event that an edge node has all of its northbound eBGP sessions down, north-south traffic will continue to flow by routing traffic to a different edge node.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-006
Design Recommendation: Deploy a Tier-1 gateway in non-preemptive failover mode.
Justification: Ensures that after a failed NSX Edge transport node is back online, it does not take over the gateway services, thus preventing a short service outage.
Implication: None.

VCF-NSX-BGP-RCMD-CFG-007
Design Recommendation: Enable standby relocation of the Tier-1 gateway.
Justification: Ensures that if an edge failure occurs, a standby Tier-1 gateway is created on another edge node.
Implication: None.
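
Several of these recommendations map directly to settings on the Tier-0 BGP configuration and its neighbors. The following Python sketch shows how the timers, graceful restart helper mode, and inter-SR iBGP routing might be applied through the NSX Policy API; the manager FQDN, object IDs, ASNs, and neighbor address are illustrative assumptions, and the field names should be verified against the NSX API reference for your version.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")
T0, LOCALE = "vcf-t0-gw01", "default"          # example Tier-0 gateway and locale services IDs
BGP_URL = (f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/{T0}"
           f"/locale-services/{LOCALE}/bgp")

session = requests.Session()
session.auth = AUTH
session.verify = False                         # lab only; validate certificates in production

# Tier-0 BGP configuration: ECMP, inter-SR iBGP, and graceful restart in helper-only mode.
session.patch(BGP_URL, json={
    "enabled": True,
    "ecmp": True,
    "inter_sr_ibgp": True,
    "local_as_num": "65005",                   # example SDDC BGP ASN
    "graceful_restart_config": {"mode": "HELPER_ONLY"},
}).raise_for_status()

# Per-neighbor timers aligned with VCF-NSX-BGP-RCMD-CFG-002.
session.patch(f"{BGP_URL}/neighbors/tor-a", json={
    "neighbor_address": "172.16.11.1",         # example ToR SVI address on uplink VLAN 1
    "remote_as_num": "65001",                  # example data center ASN
    "keep_alive_time": 4,
    "hold_down_time": 12,
}).raise_for_status()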

Table 8-23. BGP Routing Design Recommendations for NSX Federation in VMware Cloud Foundation

VCF-NSX-BGP-RCMD-CFG-008
Design Recommendation: Use Tier-1 gateways to control the span of networks and ingress and egress traffic in the VMware Cloud Foundation instances.
Justification: Enables a mixture of network spans (isolated to a VMware Cloud Foundation instance or spanning multiple instances) without requiring additional Tier-0 gateways and hence edge nodes.
Implication: To control location span, a Tier-1 gateway must be assigned to an edge cluster and hence has the Tier-1 SR component. East-west traffic between Tier-1 gateways with SRs needs to physically traverse an edge node.

VCF-NSX-BGP-RCMD-CFG-009
Design Recommendation: Allocate a Tier-1 gateway in each instance for instance-specific networks and connect it to the stretched Tier-0 gateway.
Justification:
n Creates a two-tier routing architecture.
n Enables local-instance networks that do not span between the VMware Cloud Foundation instances.
n Guarantees that local-instance networks remain available if a failure occurs in another VMware Cloud Foundation instance.
Implication: None.


Overlay Design for VMware Cloud Foundation


As part of the overlay design, you determine the NSX configuration for handling traffic between
workloads, management or customer, in VMware Cloud Foundation. You determine the NSX
segments and the transport zones.

Logical Overlay Design for VMware Cloud Foundation


This conceptual design provides the network virtualization design of the logical components
that handle the data to and from the workloads in the environment. For an environment with
multiple VMware Cloud Foundation instances, you replicate the design of the first VMware Cloud
Foundation instance to the additional VMware Cloud Foundation instances.

ESXi Host Transport Nodes


A transport node in NSX is a node that is capable of participating in an NSX data plane. The
workload domains contain multiple ESXi hosts in a vSphere cluster to support management or
customer workloads. You register these ESXi hosts as transport nodes so that networks and
workloads on that host can use the capabilities of NSX. During the preparation process, the
native vSphere Distributed Switch for the workload domain is extended with NSX capabilities.

Virtual Segments
Geneve provides the overlay capability to create isolated, multi-tenant broadcast domains in NSX
across data center fabrics, and enables customers to create elastic, logical networks that span
physical network boundaries, and physical locations.

Transport Zones
A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed
Switch name. You can configure one or more VLAN transport zones and a single overlay
transport zone per virtual switch. A transport zone does not represent a security boundary.
VMware Cloud Foundation supports a single overlay transport zone per NSX Instance. All
vSphere clusters, within and across workload domains that share the same NSX instance
subsequently share the same overlay transport zone.


Figure 8-10. Transport Zone Design
[Figure: The NSX Edge nodes (Edge Node 1 and Edge Node 2, each with an N-VDS) and the ESXi hosts (with the vSphere Distributed Switch with NSX) share one overlay transport zone. A VLAN transport zone for the edge uplinks to the ToR switches applies to the edge nodes, and an optional VLAN transport zone for workload VLANs applies to the ESXi hosts.]

Uplink Policy for ESXi Host Transport Nodes


Uplink profiles define policies for the links from ESXi hosts to NSX segments or from NSX
Edge appliances to top of rack switches. By using uplink profiles, you can apply consistent
configuration of capabilities for network adapters across multiple ESXi hosts or NSX Edge nodes.

Uplink profiles can use either load balance source or failover order teaming. If using load balance
source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
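
As an illustration, an uplink profile for ESXi host transport nodes with a load balance source teaming policy and two active uplinks might be defined through the NSX API as in the following Python sketch; the manager FQDN, profile name, uplink names, and transport VLAN are assumptions, and the payload should be validated against the NSX API reference.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")

uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "vcf-esxi-uplink-profile",  # example name
    "teaming": {
        "policy": "LOADBALANCE_SRCID",          # load balance source teaming
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 1614,                     # example host overlay (TEP) VLAN ID
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=uplink_profile,
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()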

Replication Mode of Segments


The control plane decouples NSX from the physical network. The control plane handles the
broadcast, unknown unicast, and multicast (BUM) traffic in the virtual segments.

The following options are available for BUM replication on segments.


Table 8-24. BUM Replication Modes of NSX Segments

BUM Replication Mode Description

Hierarchical Two-Tier The ESXi host transport nodes are grouped according
to their TEP IP subnet. One ESXi host in each subnet
is responsible for replication to an ESXi host in another
subnet. The receiving ESXi host replicates the traffic to
the ESXi hosts in its local subnet.
The source ESXi host transport node knows about the
groups based on information it has received from the
control plane. The system can select an arbitrary ESXi
host transport node as the mediator for the source subnet
if the remote mediator ESXi host node is available.

Head-End The ESXi host transport node at the origin of the frame
to be flooded on a segment sends a copy to every
other ESXi host transport node that is connected to this
segment.
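
For example, an overlay segment that uses hierarchical two-tier replication (per recommendation VCF-NSX-OVERLAY-RCMD-CFG-002 later in this chapter) might be created through the NSX Policy API as in the following Python sketch; the manager FQDN, segment name, transport zone path, gateway path, and subnet are illustrative assumptions.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")

segment = {
    "display_name": "vcf-overlay-segment-01",   # example segment name
    "transport_zone_path": (
        "/infra/sites/default/enforcement-points/default"
        "/transport-zones/overlay-tz"            # example overlay transport zone path
    ),
    "connectivity_path": "/infra/tier-1s/vcf-t1-gw01",     # example Tier-1 gateway
    "subnets": [{"gateway_address": "192.168.31.1/24"}],   # example gateway CIDR
    "replication_mode": "MTEP",                 # MTEP = hierarchical two-tier replication
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/vcf-overlay-segment-01",
    json=segment,
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()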

Overlay Design Requirements and Recommendations for VMware


Cloud Foundation
Consider the requirements for the configuration of the ESXi hosts in a workload domain as NSX
transport nodes, transport zone layout, uplink teaming policies, and the best practices for IP
allocation and BUM replication mode in a VMware Cloud Foundation deployment.

Overlay Design Requirements


You must meet the following design requirements in your overlay design.


Table 8-25. Overlay Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-OVERLAY-REQD- Configure all ESXi hosts in Enables distributed routing, None.


CFG-001 the workload domain as logical segments, and
transport nodes in NSX. distributed firewall.

VCF-NSX-OVERLAY-REQD- Configure each ESXi host n Enables the You must configure each
CFG-002 as a transport node participation of ESXi transport node with an
without using transport hosts and the virtual uplink profile individually.
node profiles. machines on them in
NSX overlay and VLAN
networks.
n Transport node profiles
can only be applied
at the cluster
level. Because in
an environment with
multiple availability
zones each availability
zone is connected to a
different set of VLANs,
you cannot use a
transport node profile.

VCF-NSX-OVERLAY-REQD- To provide virtualized n Creates isolated, multi- Requires configuring


CFG-003 network capabilities to tenant broadcast transport networks with an
workloads, use overlay domains across data MTU size of at least 1,600
networks with NSX Edge center fabrics to deploy bytes.
nodes and distributed elastic, logical networks
routing. that span physical
network boundaries.
n Enables advanced
deployment topologies
by introducing Layer
2 abstraction from the
data center networks.

VCF-NSX-OVERLAY-REQD- Create a single overlay n Ensures that overlay All clusters in all workload
CFG-004 transport zone for all segments are domains that share the
overlay traffic across the connected to an same NSX Manager share
workload domain and NSX NSX Edge node for the same transport zone.
Edge nodes. services and north-
south routing.
n Ensures that all
segments are available
to all ESXi hosts
and NSX Edge nodes
configured as transport
nodes.


VCF-NSX-OVERLAY-REQD- Create a VLAN transport Ensures that uplink VLAN If VLAN segments are
CFG-005 zone for uplink VLAN traffic segments are configured required on hosts, you
that is applied only to NSX on the NSX Edge transport must create another VLAN
Edge nodes. nodes. transport zone for the host
transport nodes only.

VCF-NSX-OVERLAY-REQD- Create an uplink profile For increased resiliency None.


CFG-006 with a load balance source and performance, supports
teaming policy with two the concurrent use of both
active uplinks for ESXi physical NICs on the ESXi
hosts. hosts that are configured
as transport nodes.

Overlay Design Recommendations


In your overlay design for VMware Cloud Foundation, you can apply certain best practices.

Table 8-26. Overlay Design Recommendations for VMware Cloud Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-OVERLAY-RCMD- Use DHCP to assign IP Required for deployments DHCP server is required for
CFG-001 addresses to the host TEP where a cluster spans the host overlay VLANs.
interfaces. Layer 3 network domains
such as multiple availability
zones and management
clusters that span Layer 3
domains.

VCF-NSX-OVERLAY-RCMD- Use hierarchical two-tier Hierarchical two-tier None.


CFG-002 replication on all overlay replication is more efficient
segments. because it reduces the
number of ESXi hosts
the source ESXi host
must replicate traffic to
if hosts have different
TEP subnets. This is
typically the case with
more than one cluster and
will improve performance in
that scenario.

Application Virtual Network Design for VMware Cloud


Foundation
In VMware Cloud Foundation, you place vRealize Suite components on a pre-defined
configuration of NSX segments (known as application virtual networks or AVNs) for dynamic
routing and load balancing.


Logical Application Virtual Network Design for VMware Cloud


Foundation
NSX segments provide flexibility for workload placement by removing the dependence on
traditional physical data center networks. This approach also improves security and mobility of
the management applications, and reduces the integration effort with existing customer networks.

Table 8-27. Comparing Application Virtual Network Types

Overlay-Backed NSX Segments
Benefits:
n Supports IP mobility with dynamic routing.
n Limits the number of VLANs needed in the data center fabric.
n In an environment with multiple availability zones, limits the number of VLANs needed to expand from an architecture with one availability zone to an architecture with two availability zones.
Requirement: Requires routing between the data center fabric and the NSX Edge nodes.

VLAN-Backed NSX Segments
Benefits: Uses the data center fabric for the network segment and the next-hop gateway.


Figure 8-11. Application Virtual Networks in VMware Cloud Foundation
[Figure: The ToR switches, which also connect the Internet or enterprise network, the VI workload domain, vCenter Server, and SDDC Manager, peer over ECMP with an active-active Tier-0 gateway on the NSX Edge cluster. A cross-instance Tier-1 gateway serves the cross-instance NSX segment and IP subnet hosting vRealize Suite Lifecycle Manager, the clustered Workspace ONE Access, and vRealize Suite components with cross-instance mobility. A local-instance Tier-1 gateway serves the local-instance NSX segment and IP subnet hosting local-instance vRealize Suite components.]

For the design for specific vRealize Suite components, see this design and VMware Validated
Solutions. For identity and access management design for NSX, see Identity and Access
Management for VMware Cloud Foundation.

Important If you plan to use NSX Federation in the management domain, create the AVNs
before you enable the federation. Creating AVNs in an environment where NSX Federation is
already active is not supported.

With NSX Federation, an NSX segment can span multiple instances of NSX and VMware Cloud
Foundation. A single network segment can be available in different physical locations over
the NSX SDN. In an environment with multiple VMware Cloud Foundation instances, the cross-
instance NSX network in the management domain is extended between the first two instances.
This configuration provides IP mobility for management components which fail over from the first
to the second instance.


Application Virtual Network Design Requirements and


Recommendations for VMware Cloud Foundation
Consider the requirements and best practices for the configuration of the NSX segments for
using the Application Virtual Networks in VMware Cloud Foundation for a single VMware Cloud
Foundation or multiple VMware Cloud Foundation instances.

Application Virtual Network Design Requirements


You must meet the following design requirements in your Application Virtual Network design
for a single VMware Cloud Foundation instance and for multiple VMware Cloud Foundation
instances.

Table 8-28. Application Virtual Network Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-AVN-REQD- Create one cross-instance Prepares the environment Each NSX segment requires
CFG-001 NSX segment for the for the deployment of a unique IP address space.
components of a vRealize solutions on top of VMware
Suite application or Cloud Foundation, such as
another solution that vRealize Suite, without a
requires mobility between complex physical network
VMware Cloud Foundation configuration.
instances. The components of
the vRealize Suite
application must be
easily portable between
VMware Cloud Foundation
instances without requiring
reconfiguration.

VCF-NSX-AVN-REQD- Create one or more local- Prepares the environment Each NSX segment requires
CFG-002 instance NSX segments for the deployment of a unique IP address space.
for the components of a solutions on top of VMware
vRealize Suite application Cloud Foundation, such as
or another solution that vRealize Suite, without a
are assigned to a specific complex physical network
VMware Cloud Foundation configuration.
instance.


Table 8-29. Application Virtual Network Design Requirements for NSX Federation in VMware
Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-AVN-REQD- Extend the cross-instance Enables workload mobility Each NSX segment requires
CFG-003 NSX segment to the without a complex physical a unique IP address space.
second VMware Cloud network configuration.
Foundation instance. The components of
a vRealize Suite
application must be
easily portable between
VMware Cloud Foundation
instances without requiring
reconfiguration.

VCF-NSX-AVN-REQD- In each VMware Cloud Enables workload mobility Each NSX segment requires
CFG-004 Foundation instance, create within a VMware a unique IP address space.
additional local-instance Cloud Foundation instance
NSX segments. without complex physical
network configuration.
Each VMware Cloud
Foundation instance should
have network segments to
support workloads which
are isolated to that VMware
Cloud Foundation instance.

VCF-NSX-AVN-REQD- In each VMware Configures local-instance Requires an individual Tier-1


CFG-005 Cloud Foundation NSX segments at required gateway for local-instance
instance, connect or sites only. segments.
migrate the local-instance
NSX segments to
the corresponding local-
instance Tier-1 gateway.

Application Virtual Network Design Recommendations


In your Application Virtual Network design for VMware Cloud Foundation, you can apply certain
best practices.

Table 8-30. Application Virtual Network Design Recommendations for VMware Cloud
Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-AVN-RCMD- Use overlay-backed NSX n Supports expansion to Using overlay-backed NSX


CFG-001 segments. deployment topologies segments requires routing,
for multiple VMware eBGP recommended,
Cloud Foundation between the data center
instances. fabric and edge nodes.
n Limits the number of
VLANs required for the
data center fabric.


Load Balancing Design for VMware Cloud Foundation


Following the principles of this design and of each product, you deploy and configure NSX load
balancing services to support vRealize Suite and Workspace ONE Access components.

Logical Load Balancing Design for VMware Cloud Foundation


The logical load balancer capability in NSX offers a high-availability service for applications in
VMware Cloud Foundation and distributes the network traffic load among multiple servers.

A standalone Tier-1 gateway is created to provide load balancing services with a service interface
on the cross-instance application virtual network.


Figure 8-12. NSX Logical Load Balancing Design for VMware Cloud Foundation
[Figure: On the NSX Edge cluster, a standalone NSX Tier-1 gateway runs the NSX load balancer service and attaches to the cross-instance NSX segment, alongside the NSX Tier-1 and Tier-0 gateways. The three-node NSX Manager cluster behind an internal VIP load balancer, with DNS and NTP as supporting infrastructure, manages the ESXi transport nodes in the management cluster.]


Load Balancing Design Requirements for VMware Cloud Foundation


Consider the requirements for running a load balancing service including creating a standalone
Tier-1 gateway and connecting it to the client applications. Separate requirements exist for a
single VMware Cloud Foundation instance and for multiple VMware Cloud Foundation instances.

Table 8-31. Load Balancing Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-LB-REQD- Deploy a standalone Provides independence You must add a separate


CFG-001 Tier-1 gateway to support between north-south Tier-1 Tier-1 gateway.
advanced stateful services gateways to support
such as load balancing advanced deployment
for other management scenarios.
components.

VCF-NSX-LB-REQD- When creating load Provides load balancing to You must connect
CFG-002 balancing services for applications connected to the gateway to each
Application Virtual the cross-instance network. network that requires load
Networks, connect the balancing.
standalone Tier-1 gateway
to the cross-instance NSX
segments.

VCF-NSX-LB-REQD- Configure a default static Because the Tier-1 gateway None.


CFG-003 route on the standalone is standalone, it does not
Tier-1 gateway with a next auto-configure its routes.
hop the Tier-1 gateway for
the segment to provide
connectivity to the load
balancer.
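
Because the standalone Tier-1 gateway does not run a routing protocol, the default static route in VCF-NSX-LB-REQD-CFG-003 is added explicitly. A Python sketch of this step against the NSX Policy API follows; the manager FQDN, gateway IDs, and next-hop address are illustrative assumptions.

import requests

NSX_MANAGER = "https://nsx-mgr.example.com"    # hypothetical NSX Manager FQDN
AUTH = ("admin", "password")
LB_T1 = "vcf-lb-t1-gw01"                       # example standalone (load balancer) Tier-1 ID

# Default route pointing at the Tier-1 gateway that owns the cross-instance segment.
static_route = {
    "network": "0.0.0.0/0",
    "next_hops": [{"ip_address": "192.168.11.1", "admin_distance": 1}],  # example gateway IP
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-1s/{LB_T1}/static-routes/default-route",
    json=static_route,
    auth=AUTH,
    verify=False,   # lab only; validate certificates in production
)
resp.raise_for_status()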


Table 8-32. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-LB-REQD- Deploy a standalone Tier-1 Provides a cold-standby n You must add


CFG-004 gateway in the second non-global service router a separate Tier-1
VMware Cloud Foundation instance for the second gateway.
instance. VMware Cloud Foundation n You must manually
instance to support configure any services
services on the cross- and synchronize them
instance network which between the non-global
require advanced services service router instances
not currently supported as in the first and
NSX global objects. second VMware Cloud
Foundation instances.
n To avoid a network
conflict between the
two VMware Cloud
Foundation instances,
make sure that the
primary and standby
networking services are
not both active at the
same time.

VCF-NSX-LB-REQD- Connect the standalone Provides load balancing to You must connect
CFG-005 Tier-1 gateway in the applications connected to the gateway to each
second VMware Cloud the cross-instance network network that requires load
Foundation instance to in the second VMware balancing.
the cross-instance NSX Cloud Foundation instance.
segment.



VCF-NSX-LB-REQD- Configure a default static Because the Tier-1 gateway None.


CFG-006 route on the standalone is standalone, it does not
Tier-1 gateway in the autoconfigure its routes.
second VMware Cloud
Foundation instance with
a next hop as the Tier-1
gateway for the segment
it connects with to provide
connectivity to the load
balancers.

VCF-NSX-LB-REQD- Establish a process to Keeps the network service n Because of incorrect


CFG-007 ensure any changes made in the failover load configuration between
to the load balancer balancer instance ready the VMware Cloud
instance in the first VMware for activation if a failure Foundation instances,
Cloud Foundation instance in the first VMware the load balancer
are manually applied to the Cloud Foundation instance service in the second
disconnected load balancer occurs. instance might come
in the second instance. Because network services online with an
are not supported as global invalid or incomplete
objects, you must configure configuration.
them manually in each n If both VMware Cloud
VMware Cloud Foundation Foundation instances
instance. The load balancer are online and active
service in one instance at the same time,
must be connected and a conflict between
active, while the service in services could occur
the other instance must be resulting in a potential
disconnected and inactive. outage.
n The administrator
must establish and
follow an operational
practice by using a
runbook or automated
process to ensure that
configuration changes
are reproduced in
each VMware Cloud
Foundation instance.



SDDC Manager Design for
VMware Cloud Foundation 9
In VMware Cloud Foundation, operational day-to-day efficiencies are delivered through SDDC
Manager. These efficiencies include full life cycle management tasks such as deployment,
configuration, patching and upgrades.

Read the following topics next:

n Logical Design for SDDC Manager

n SDDC Manager Design Requirements and Recommendations for VMware Cloud Foundation

Logical Design for SDDC Manager


You deploy an SDDC Manager appliance in the management domain for creating VI workload
domains, provisioning additional virtual infrastructure, and life cycle management of the SDDC
management components.


Figure 9-1. Logical Design of SDDC Manager
[Figure: In each VCF instance, SDDC Manager is accessed through its user interface and API, uses the external My VMware and VMware Depot services, and performs infrastructure provisioning and configuration and life cycle management of vCenter Server, NSX, ESXi, and vRealize Suite Lifecycle Manager. It depends on supporting infrastructure (shared storage, DNS, NTP, certificate authority) and on the vCenter Single Sign-On domain for solution and user authentication.]

You use SDDC Manager to perform the following operations:

n Commissioning or decommissioning ESXi hosts

n Deployment of VI workload domains

n Deployment of vRealize Suite Lifecycle Manager

n Deployment of NSX Edge clusters in workload domains

n Adding and extending clusters in workload domains

n Life cycle management of the virtual infrastructure components in all workload domains and
of vRealize Suite Lifecycle Manager

n Storage management for vVOL VASA providers

n Identity provider management

n Composable infrastructure management

n Creation of network pools for host configuration in workload domains

n Product licenses storage


n Certificate management

n Password management and rotation

n Backup configuration

Table 9-1. SDDC Manager Logical Components

VMware Cloud Foundation Instances with a Single Availability Zone
n A single SDDC Manager appliance is deployed on the management network.
n vSphere HA protects the SDDC Manager appliance.

VMware Cloud Foundation Instances with Multiple Availability Zones
n A single SDDC Manager appliance is deployed on the management network.
n vSphere HA protects the SDDC Manager appliance.
n A vSphere DRS rule specifies that the SDDC Manager appliance should run on an ESXi host in the first availability zone.

SDDC Manager Design Requirements and


Recommendations for VMware Cloud Foundation
Consider the placement and network design requirements for SDDC Manager, and the best practices for configuring access to install and upgrade software bundles.

SDDC Manager Design Requirements


You must meet the following design requirements in your SDDC Manager design.

Table 9-2. SDDC Manager Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-SDDCMGR-REQD- Deploy an SDDC Manager SDDC Manager is required None.


CFG-001 system in the first to perform VMware Cloud
availability zone of the Foundation capabilities,
management domain. such as provisioning of
VI workload domains,
deployment of solutions,
patching and upgrade, and
others.

VCF-SDDCMGR-REQD- Deploy SDDC Manager with The configuration of None.


CFG-002 its default configuration. SDDC Manager is not
configurable and should
not be changed from its
defaults.

VCF-SDDCMGR-REQD- Place the SDDC Reduces the number None.


CFG-003 Manager appliance on of VLANs. You allocate
the management VLAN a single VLAN to
network segment. vCenter Server, NSX, SDDC
Manager, and other SDDC
management components.


SDDC Manager Design Recommendations


In your SDDC Manager design, you can apply certain best practices.

Table 9-3. SDDC Manager Design Recommendations for VMware Cloud Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-SDDCMGR-RCMD- Connect SDDC Manager SDDC Manager must be The rules of your
CFG-001 to the Internet for able to download install organization might not
downloading software and upgrade software permit direct access to the
bundles. bundles for deployment of Internet. In this case, you
VI workload domains and must download software
solutions, and for upgrade bundles for SDDC Manager
from a repository. manually.

VCF-SDDCMGR-RCMD- Configure a network proxy To protect SDDC Manager The proxy must not
CFG-002 to connect SDDC Manager against external attacks use authentication because
to the Internet. from the Internet. SDDC Manager does
not support proxy with
authentication.

VCF-SDDCMGR-RCMD- To check for and Software bundles for Requires the use of a
CFG-003 download software VMware Cloud Foundation VMware Customer Connect
bundles, configure SDDC are stored in a repository user account with access to
Manager with a VMware that is secured with access VMware Cloud Foundation
Customer Connect account controls. licensing.
with VMware Cloud Sites without internet
Foundation entitlement. connection can use local
upload option instead.

VCF-SDDCMGR-RCMD- Configure SDDC Manager Provides increased security An external certificate


CFG-004 with an external certificate by implementing signed authority, such as Microsoft
authority that is responsible certificate generation and CA, must be locally
for providing signed replacement across the available.
certificates. management components.
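
In connection with the recommendations on downloading software bundles, bundle availability and download status can also be checked programmatically through the SDDC Manager (VMware Cloud Foundation) public API. The following Python sketch outlines the general pattern; the SDDC Manager FQDN and credentials are placeholders, and the token and bundle endpoints and response field names should be verified against the VMware Cloud Foundation API reference for your release.

import requests

SDDC_MANAGER = "https://sddc-manager.example.com"   # hypothetical SDDC Manager FQDN
CREDS = {"username": "administrator@vsphere.local", "password": "password"}  # placeholders

# Obtain an API access token (assumed POST /v1/tokens endpoint).
token_resp = requests.post(f"{SDDC_MANAGER}/v1/tokens", json=CREDS, verify=False)
token_resp.raise_for_status()
access_token = token_resp.json()["accessToken"]     # assumed response field name

# List the software bundles known to SDDC Manager (assumed GET /v1/bundles endpoint).
bundles_resp = requests.get(
    f"{SDDC_MANAGER}/v1/bundles",
    headers={"Authorization": f"Bearer {access_token}"},
    verify=False,   # lab only; validate certificates in production
)
bundles_resp.raise_for_status()

for bundle in bundles_resp.json().get("elements", []):
    # Field names such as description and downloadStatus are assumptions to verify.
    print(bundle.get("description"), "-", bundle.get("downloadStatus"))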



vRealize Suite Lifecycle Manager
Design for VMware Cloud
Foundation
10
In VMware Cloud Foundation, vRealize Suite Lifecycle Manager provides life cycle management
capabilities for vRealize Suite components and Workspace ONE Access, including automated
deployment, configuration, patching, and upgrade, and content management across vRealize
Suite products.

You deploy vRealize Suite Lifecycle Manager by using SDDC Manager. SDDC Manager deploys
vRealize Suite Lifecycle Manager in VMware Cloud Foundation mode. In this mode, vRealize Suite
Lifecycle Manager is integrated with SDDC Manager, providing the following benefits:

n Integration with the SDDC Manager inventory to retrieve infrastructure details when creating
environments for Workspace ONE Access and vRealize Suite components, such as NSX
segments and vCenter Server details.

n Automation of the load balancer configuration when deploying Workspace ONE Access,
vRealize Operations, and vRealize Automation.

n Deployment details for vRealize Suite Lifecycle Manager environments are populated in the
SDDC Manager inventory and can be queried using the SDDC Manager API.

n Day-two workflows in SDDC Manager to connect vRealize Log Insight and vRealize
Operations to workload domains.

n The ability to manage password life cycle for Workspace ONE Access and vRealize Suite
components.

For information about deploying vRealize Suite components, see VMware Validated Solutions.

Read the following topics next:

n Logical Design for vRealize Suite Lifecycle Manager for VMware Cloud Foundation

n Network Design for vRealize Suite Lifecycle Manager

n Data Center and Environment Design for vRealize Suite Lifecycle Manager

n Locker Design for vRealize Suite Lifecycle Manager

n vRealize Suite Lifecycle Manager Design Requirements and Recommendations for VMware
Cloud Foundation


Logical Design for vRealize Suite Lifecycle Manager for


VMware Cloud Foundation
You deploy vRealize Suite Lifecycle Manager to provide life cycle management capabilities for
vRealize Suite components and a Workspace ONE Access cluster.

Logical Design
In a VMware Cloud Foundation environment, you use vRealize Suite Lifecycle Manager in VMware
Cloud Foundation mode. In this mode, vRealize Suite Lifecycle Manager is integrated with
VMware Cloud Foundation in the following way:

n SDDC Manager deploys the vRealize Suite Lifecycle Manager appliance. Then, you deploy the
vRealize Suite products that are supported by VMware Cloud Foundation by using vRealize
Suite Lifecycle Manager.

n Supported versions are controlled by the vRealize Suite Lifecycle Manager appliance and
Product Support Packs. See the VMware Interoperability Matrix.

n To orchestrate the deployment, patching, and upgrade of Workspace ONE Access and the
vRealize Suite products, vRealize Suite Lifecycle Manager communicates with SDDC Manager
and the management domain vCenter Server in the environment.

n SDDC Manager configures the load balancer for Workspace ONE Access, vRealize
Operations, and vRealize Automation.


Figure 10-1. Logical Design of vRealize Suite Lifecycle Manager
[Figure: In VCF Instance A, vRealize Suite Lifecycle Manager in VMware Cloud Foundation mode is accessed through its user interface and REST API, integrates with SDDC Manager and with Workspace ONE Access for identity management, and manages the life cycle of the vRealize Suite components and Workspace ONE Access through the vCenter Server endpoint and shared storage. In VCF Instance B, a second vRealize Suite Lifecycle Manager instance in VMware Cloud Foundation mode integrates with the local SDDC Manager and manages the life cycle of vRealize Log Insight through the local vCenter Server endpoint and shared storage.]

According to the VMware Cloud Foundation topology deployed, vRealize Suite Lifecycle
Manager is deployed in one or more locations and is responsible for the life cycle of the vRealize
Suite components in one or more VMware Cloud Foundation instances.

VMware Cloud Foundation instances might be connected for the following reasons:

n Disaster recovery of the vRealize Suite components.


n Over-arching management of those instances from the same vRealize Suite deployments.

Table 10-1. vRealize Suite Lifecycle Manager Component Layout

VMware Cloud Foundation Instances with a Single Availability Zone
n A single vRealize Suite Lifecycle Manager appliance deployed on the cross-instance NSX segment.
n vSphere HA protects the vRealize Suite Lifecycle Manager appliance.
n Life cycle management for Workspace ONE Access and vRealize Suite.

VMware Cloud Foundation Instances with Multiple Availability Zones
n A single vRealize Suite Lifecycle Manager appliance deployed on the cross-instance NSX segment.
n vSphere HA protects the vRealize Suite Lifecycle Manager appliance.
n A should-run vSphere DRS rule specifies that the vRealize Suite Lifecycle Manager appliance should run on an ESXi host in the first availability zone.
n Life cycle management for Workspace ONE Access and vRealize Suite.

Connected VMware Cloud Foundation Instances
n The vRealize Suite Lifecycle Manager instance in the first VMware Cloud Foundation instance provides life cycle management for Workspace ONE Access and vRealize Suite.
n vRealize Suite Lifecycle Manager in each additional VMware Cloud Foundation instance provides life cycle management for vRealize Log Insight.

Network Design for vRealize Suite Lifecycle Manager


For secure access to the UI and API, you place the vRealize Suite Lifecycle Manager appliance on
an overlay-backed (recommended) or VLAN-backed Application Virtual Network.

vRealize Suite Lifecycle Manager must have routed access to the management VLAN through the
Tier-0 gateway in the NSX instance for the management domain.


Figure 10-2. Network Design for vRealize Suite Lifecycle Manager
[Figure: In each VCF instance, the vRealize Suite Lifecycle Manager appliance resides on the cross-instance NSX segment behind the cross-instance NSX Tier-1 gateway and the cross-instance NSX Tier-0 gateway, which connects through the physical upstream routers to the management VLAN with the management domain and VI workload domain vCenter Server instances, and onward to the data center user directory and the Internet or enterprise network. A local-instance NSX Tier-1 gateway also exists in each instance.]

Data Center and Environment Design for vRealize Suite Lifecycle Manager

To deploy vRealize Suite products by using vRealize Suite Lifecycle Manager, you configure
product support, data centers, environment structures, and product specifications.

Product Support
vRealize Suite Lifecycle Manager provides several methods to obtain and store product binaries
for the install, patch, and upgrade of the vRealize Suite products.

Table 10-2. Methods for Obtaining and Storing Product Binaries

- Product Upload: You can upload product binaries to the vRealize Suite Lifecycle Manager appliance and discover them there.
- VMware Customer Connect: You can integrate vRealize Suite Lifecycle Manager with VMware Customer Connect to access and download vRealize Suite product entitlements from an online depot over the Internet. This method simplifies, automates, and organizes the repository.
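
Regardless of the method used, the mapped binaries can also be inspected programmatically. The following minimal sketch lists them over the vRealize Suite Lifecycle Manager REST API. The endpoint path, the basic-authentication approach, and the response schema are assumptions based on the vRealize Suite Lifecycle Manager 8.x API; verify them against the API reference for your release, and treat the host name and credentials as example values only.

# A minimal sketch, not part of the SDDC Manager automation: list the product
# binaries that vRealize Suite Lifecycle Manager has available for install,
# patch, and upgrade operations. The endpoint path and response schema are
# assumptions; verify them against the API reference for your release.
import requests

VRSLCM_FQDN = "xint-vrslcm01.rainpole.io"  # example host name
VRSLCM_USER = "vcfadmin@local"
VRSLCM_PASS = "example-password"           # retrieve from a secure store

session = requests.Session()
session.auth = (VRSLCM_USER, VRSLCM_PASS)
session.verify = "/etc/ssl/certs/ca-certificates.crt"  # point at your CA bundle

response = session.get(
    f"https://{VRSLCM_FQDN}/lcm/lcops/api/v2/settings/product-binaries",
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

for binary in response.json():
    # Print whatever metadata the API returns for each mapped binary, for
    # example product, version, and binary type (install, upgrade, or patch).
    print(binary)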


Data Centers and Environments


vRealize Suite Lifecycle Manager supports the deployment and upgrade of vRealize Suite
products in a logical environment grouping.

You create data centers and environments in vRealize Suite Lifecycle Manager to manage the life
cycle operations on the vRealize Suite products and to support the growth of the SDDC.

Table 10-3. vRealize Suite Lifecycle Manager Logical Constructs

- Datacenter: Represents a geographical or logical location for an organization. Management domain vCenter Server instances are added to specific data centers.
- Environment: Is mapped to a data center object. Each environment can contain only one instance of a vRealize Suite product.
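
The relationship between these constructs can be summarized as a simple hierarchy: environments map to data centers, and data centers reference one or more management domain vCenter Server instances. The sketch below is illustrative only; the dictionary keys and host names are descriptive examples chosen for this guide, not vRealize Suite Lifecycle Manager API field names.

# Illustrative only: how the vRealize Suite Lifecycle Manager constructs in
# this design relate to each other. The keys and host names below are
# descriptive examples, not vRealize Suite Lifecycle Manager API field names.
vrslcm_logical_inventory = {
    "datacenters": {
        "cross-instance": {
            # Management domain vCenter Server instances for the local and any
            # additional VMware Cloud Foundation instance.
            "vcenters": [
                "sfo-m01-vc01.sfo.rainpole.io",
                "lax-m01-vc01.lax.rainpole.io",
            ],
            "hosts": "cross-instance products (Workspace ONE Access, "
                     "vRealize Operations, vRealize Automation)",
        },
        "local-instance": {
            # Management domain vCenter Server for the local instance only.
            "vcenters": ["sfo-m01-vc01.sfo.rainpole.io"],
            "hosts": "local-instance products (vRealize Log Insight)",
        },
    },
    "environments": {
        # Each environment maps to one data center object and can contain only
        # one instance of a given vRealize Suite product.
        "globalenvironment": {
            "datacenter": "cross-instance",
            "products": ["Workspace ONE Access"],
        },
    },
}

print(vrslcm_logical_inventory["datacenters"]["local-instance"]["hosts"])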

Table 10-4. Logical Datacenter to vCenter Server Mappings in vRealize Suite Lifecycle Manager

Cross-instance datacenter
- vCenter Server type: Management domain vCenter Server for the local VMware Cloud Foundation instance, and management domain vCenter Server for an additional VMware Cloud Foundation instance.
- Description: Supports the deployment of cross-instance components, such as Workspace ONE Access, vRealize Operations, and vRealize Automation, including any per-instance collector components.

Local-instance datacenter
- vCenter Server type: Management domain vCenter Server for the local VMware Cloud Foundation instance.
- Description: Supports the deployment of vRealize Log Insight.


Table 10-5. vRealize Suite Lifecycle Manager Environment Types

Global Environment
- Contains the Workspace ONE Access instance that is required before you can deploy vRealize Automation.

VMware Cloud Foundation Mode
- Infrastructure details for the deployed products, including vCenter Server, networking, DNS, and NTP information, are retrieved from the SDDC Manager inventory.
- Successful deployment details are synced back to the SDDC Manager inventory.
- Limited to one instance of each vRealize Suite product.

Standalone Mode
- Infrastructure details for the deployed products are entered manually.
- Successful deployment details are not synced back to the SDDC Manager inventory.
- Supports deployment of more than one instance of a vRealize Suite product.

Note You can deploy new vRealize Suite products to the SDDC environment or import existing
product deployments.

Table 10-6. Environment Topologies

Global Environment
- VMware Cloud Foundation Mode: Enabled
- Logical Datacenter: Cross-instance
- Product Components: Workspace ONE Access

Cross-instance
- VMware Cloud Foundation Mode: Enabled
- Logical Datacenter: Cross-instance
- Product Components: vRealize Operations analytics nodes, vRealize Operations remote collectors, vRealize Automation cluster

Each instance
- VMware Cloud Foundation Mode: Enabled
- Logical Datacenter: Local-instance
- Product Components: vRealize Log Insight cluster nodes

Locker Design for vRealize Suite Lifecycle Manager


The vRealize Suite Lifecycle Manager Locker allows you to secure and manage passwords,
certificates, and licenses for vRealize Suite product solutions and integrations.

Passwords
vRealize Suite Lifecycle Manager stores passwords in the locker repository which are referenced
during life cycle operations on data centers, environments, products, and integrations.


Table 10-7. Life Cycle Operations Use of Locker Passwords in vRealize Suite Lifecycle Manager

- Datacenters: vCenter Server credentials for a vRealize Suite Lifecycle Manager-to-vSphere integration user.
- Environments: Global environment default configuration administrator, configadmin. Environment password, for example, for the product default admin or root password.
- Products: Product administrator password, for example, the admin password for an individual product. Product appliance password, for example, the root password for an individual product.

Certificates
vRealize Suite Lifecycle Manager stores certificates in the Locker repository which can be
referenced during product life cycle operations. Externally provided certificates, such as
Certificate Authority-signed certificates, can be imported or certificates can be generated by
the vRealize Suite Lifecycle Manager appliance.

Licenses
vRealize Suite Lifecycle Manager stores licenses in the Locker repository which can be
referenced during product life cycle operations. Licenses can be validated and added to the
repository directly or imported through an integration with VMware Customer Connect.
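
When you drive product life cycle operations through the API rather than the UI, the JSON request references Locker entries by ID instead of embedding the license key, certificate, or password itself. The fragment below is a hedged illustration only: the field names and the "locker:<type>:<id>:<alias>" reference format are assumptions to verify against the vRealize Suite Lifecycle Manager API reference for your release.

# Hedged illustration of how a product specification in an API request can
# reference Locker items by ID instead of embedding secrets. The field names
# and the "locker:<type>:<uuid>:<alias>" reference format are assumptions to
# verify against the vRealize Suite Lifecycle Manager API reference.
product_spec_fragment = {
    "id": "vrli",
    "version": "8.12.0",
    "properties": {
        # License reference: Locker item type, Locker ID (UUID), and alias.
        "licenseRef": "locker:license:11111111-2222-3333-4444-555555555555:vRealize-Suite-2019",
        # Certificate and default password references follow the same pattern.
        "certificate": "locker:certificate:66666666-7777-8888-9999-000000000000:xreg-vrli01",
        "defaultPassword": "locker:password:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:default-admin",
    },
}

print(product_spec_fragment["properties"]["licenseRef"])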

vRealize Suite Lifecycle Manager Design Requirements and Recommendations for VMware Cloud Foundation

Consider the placement, networking, sizing, and high availability requirements for using vRealize
Suite Lifecycle Manager for deployment and life cycle management of vRealize Suite components
in VMware Cloud Foundation. Apply similar best practices for having vRealize Suite Lifecycle
Manager operate in an optimal way.

vRealize Suite Lifecycle Manager Design Requirements


You must meet the following design requirements for standard and stretched clusters in your
vRealize Suite Lifecycle Manager design for VMware Cloud Foundation. For NSX Federation,
additional requirements exist.


Table 10-8. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud Foundation

VCF-vRSLCM-REQD-CFG-001
- Design Requirement: Deploy a vRealize Suite Lifecycle Manager instance in the management domain of each VMware Cloud Foundation instance to provide life cycle management for vRealize Suite and Workspace ONE Access.
- Justification: Provides life cycle management operations for vRealize Suite applications and Workspace ONE Access.
- Implication: You must ensure that the required resources are available.

VCF-vRSLCM-REQD-CFG-002
- Design Requirement: Deploy vRealize Suite Lifecycle Manager by using SDDC Manager.
- Justification: Deploys vRealize Suite Lifecycle Manager in VMware Cloud Foundation mode, which enables the integration with the SDDC Manager inventory for product deployment and life cycle management of vRealize Suite components. Automatically configures the standalone Tier-1 gateway required for load balancing the clustered Workspace ONE Access and vRealize Suite components.
- Implication: None.

VCF-vRSLCM-REQD-CFG-003
- Design Requirement: Allocate an extra 100 GB of storage to the vRealize Suite Lifecycle Manager appliance for vRealize Suite product binaries.
- Justification: Provides support for vRealize Suite product binaries (install, upgrade, and patch) and content management. SDDC Manager automates the creation of the storage.
- Implication: None.

VCF-vRSLCM-REQD-CFG-004
- Design Requirement: Place the vRealize Suite Lifecycle Manager appliance on an overlay-backed (recommended) or VLAN-backed NSX network segment.
- Justification: Provides a consistent deployment model for management applications.
- Implication: You must use an implementation in NSX to support this networking configuration.

VCF-vRSLCM-REQD-CFG-005
- Design Requirement: Import vRealize Suite product licenses to the Locker repository for product life cycle operations.
- Justification: You can review the validity, details, and deployment usage for the license across the vRealize Suite products. You can reference and use licenses during product life cycle operations, such as deployment and license replacement.
- Implication: When using the API, you must specify the Locker ID for the license to be used in the JSON payload.

VCF-vRSLCM-REQD-ENV-001
- Design Requirement: Configure datacenter objects in vRealize Suite Lifecycle Manager for local and cross-instance vRealize Suite deployments and assign the management domain vCenter Server instance to each data center.
- Justification: You can deploy and manage the integrated vRealize Suite components across the SDDC as a group.
- Implication: You must manage a separate datacenter object for the products that are specific to each instance.

VCF-vRSLCM-REQD-ENV-002
- Design Requirement: If deploying vRealize Log Insight, create a local-instance environment in vRealize Suite Lifecycle Manager.
- Justification: Supports the deployment of an instance of vRealize Log Insight.
- Implication: None.

VCF-vRSLCM-REQD-ENV-003
- Design Requirement: If deploying vRealize Operations or vRealize Automation, create a cross-instance environment in vRealize Suite Lifecycle Manager.
- Justification: Supports deployment and management of the integrated vRealize Suite products across VMware Cloud Foundation instances as a group. Enables the deployment of instance-specific components, such as vRealize Operations remote collectors. In vRealize Suite Lifecycle Manager, you can deploy and manage vRealize Operations remote collector objects only in an environment that contains the associated cross-instance components.
- Implication: You can manage instance-specific components, such as remote collectors, only in an environment that is cross-instance.

VCF-vRSLCM-REQD-SEC-001
- Design Requirement: Use the custom vCenter Server role for vRealize Suite Lifecycle Manager that has the minimum privileges required to support the deployment and upgrade of vRealize Suite products.
- Justification: vRealize Suite Lifecycle Manager accesses vSphere with the minimum set of permissions that are required to support the deployment and upgrade of vRealize Suite products. SDDC Manager automates the creation of the custom role.
- Implication: You must maintain the permissions required by the custom role.

VCF-vRSLCM-REQD-SEC-002
- Design Requirement: Use the service account in vCenter Server for application-to-application communication from vRealize Suite Lifecycle Manager to vSphere. Assign global permissions using the custom role.
- Justification: Provides the following access control features: vRealize Suite Lifecycle Manager accesses vSphere with the minimum set of required permissions, and you can introduce improved accountability in tracking request-response interactions between the components of the SDDC. SDDC Manager automates the creation of the service account.
- Implication: You must maintain the life cycle and availability of the service account outside of SDDC Manager password rotation.

Table 10-9. vRealize Suite Lifecycle Manager Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-vRSLCM-REQD-CFG-006
- Design Requirement: For multiple availability zones, add the vRealize Suite Lifecycle Manager appliance to the VM group for the first availability zone.
- Justification: Ensures that, by default, the vRealize Suite Lifecycle Manager appliance is powered on a host in the first availability zone.
- Implication: If vRealize Suite Lifecycle Manager is deployed after the creation of the stretched management cluster, you must add the vRealize Suite Lifecycle Manager appliance to the VM group manually.


Table 10-10. vRealize Suite Lifecycle Manager Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-vRSLCM-REQD-CFG-007
- Design Requirement: Configure the DNS settings for the vRealize Suite Lifecycle Manager appliance to use DNS servers in each instance.
- Justification: Improves resiliency in the event of an outage of external services for a VMware Cloud Foundation instance.
- Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the DNS settings of the vRealize Suite Lifecycle Manager appliance must be updated.

VCF-vRSLCM-REQD-CFG-008
- Design Requirement: Configure the NTP settings for the vRealize Suite Lifecycle Manager appliance to use NTP servers in each VMware Cloud Foundation instance.
- Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
- Implication: As you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on the vRealize Suite Lifecycle Manager appliance must be updated.

VCF-vRSLCM-REQD-ENV-004
- Design Requirement: Assign the management domain vCenter Server instance in the additional VMware Cloud Foundation instance to the cross-instance data center.
- Justification: Supports the deployment of vRealize Operations remote collectors in an additional VMware Cloud Foundation instance.
- Implication: None.

vRealize Suite Lifecycle Manager Design Recommendations


In your vRealize Suite Lifecycle Manager design for VMware Cloud Foundation, you can apply
certain best practices.


Table 10-11. vRealize Suite Lifecycle Manager Design Recommendations for VMware Cloud Foundation

VCF-vRSLCM-RCMD-CFG-001
- Design Recommendation: Protect vRealize Suite Lifecycle Manager by using vSphere HA.
- Justification: Supports the availability objectives for vRealize Suite Lifecycle Manager without requiring manual intervention during a failure event.
- Implication: None.

VCF-vRSLCM-RCMD-LCM-001
- Design Recommendation: Obtain product binaries for install, patch, and upgrade in vRealize Suite Lifecycle Manager from VMware Customer Connect.
- Justification: You can upgrade vRealize Suite products based on their general availability and endpoint interoperability rather than being listed as part of the VMware Cloud Foundation bill of materials (BOM). You can also deploy and manage binaries in an environment that does not allow access to the Internet or is a dark site.
- Implication: The site must have an Internet connection to use VMware Customer Connect. Sites without an Internet connection should use the local upload option instead.

VCF-vRSLCM-RCMD-LCM-002
- Design Recommendation: Use support packs (PSPAKs) for vRealize Suite Lifecycle Manager to enable upgrading to later versions of vRealize Suite products.
- Justification: Enables the upgrade of an existing vRealize Suite Lifecycle Manager to permit later versions of vRealize Suite products without an associated VMware Cloud Foundation upgrade. See VMware Knowledge Base article 88829.
- Implication: None.

VCF-vRSLCM-RCMD-SEC-001
- Design Recommendation: Enable integration between vRealize Suite Lifecycle Manager and your corporate identity source by using the Workspace ONE Access instance.
- Justification: Enables authentication to vRealize Suite Lifecycle Manager by using your corporate identity source. Enables authorization through the assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.
- Implication: You must deploy and configure Workspace ONE Access to establish the integration between vRealize Suite Lifecycle Manager and your corporate identity sources.

VCF-vRSLCM-RCMD-SEC-002
- Design Recommendation: Create corresponding security groups in your corporate directory services for the vRealize Suite Lifecycle Manager roles: VCF, Content Release Manager, and Content Developer.
- Justification: Streamlines the management of vRealize Suite Lifecycle Manager roles for users.
- Implication: You must create the security groups outside of the SDDC stack. You must set the desired directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.

11 Workspace ONE Access Design for VMware Cloud Foundation

Workspace ONE Access in VMware Cloud Foundation mode provides identity and access
management services to specific components in the SDDC, such as vRealize Suite.

Workspace ONE Access provides the following capabilities:

n Directory integration to authenticate users against an identity provider (IdP), such as Active
Directory or LDAP.

n Multiple authentication methods.

n Access policies that consist of rules to specify criteria that users must meet to authenticate.

The Workspace ONE Access instance that is integrated with vRealize Suite Lifecycle Manager
provides identity and access management services to vRealize Suite solutions that either run in
a VMware Cloud Foundation instance or must be available across VMware Cloud Foundation
instances.

For identity management design for a vRealize Suite product, see VMware Cloud Foundation
Validated Solutions.

For identity and access management for components other than vRealize Suite, such as NSX, you
can deploy a standalone Workspace ONE Access instance. See Identity and Access Management
for VMware Cloud Foundation.

Read the following topics next:

n Logical Design for Workspace ONE Access

n Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation

n Network Design for Workspace ONE Access

n Integration Design for Workspace ONE Access with VMware Cloud Foundation

n Deployment Model for Workspace ONE Access

n Workspace ONE Access Design Requirements and Recommendations for VMware Cloud
Foundation


Logical Design for Workspace ONE Access


To provide identity and access management services to supported SDDC components, such as
vRealize Suite components, this design uses a Workspace ONE Access instance that is deployed
on an NSX network segment.


Figure 11-1. Logical Design for Standard Workspace ONE Access

(Figure content: In VCF Instance A, a standard single-node Workspace ONE Access instance (primary node) is accessed through its user interface and REST API, connects to an identity provider such as Active Directory or LDAP, relies on the embedded Postgres database and supporting infrastructure (shared storage, DNS, NTP, SMTP), and serves cross- and local-instance solutions such as vRealize Suite Lifecycle Manager and the vRealize Suite components.)


Table 11-1. Logical Components of Standard Workspace ONE Access

VMware Cloud Foundation instances with a single availability zone:
- A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.

VMware Cloud Foundation instances with multiple availability zones:
- A single-node Workspace ONE Access instance deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with the Workspace ONE Access instance in the first VMware Cloud Foundation instance.
- A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access node runs on a management ESXi host in the first availability zone.


Figure 11-2. Logical Design for Clustered Workspace ONE Access

(Figure content: In VCF Instance A, a clustered Workspace ONE Access deployment with one primary and two secondary nodes sits behind an NSX load balancer, is accessed through its user interface and REST API, connects to an identity provider such as Active Directory or LDAP, relies on the embedded Postgres database and supporting infrastructure (shared storage, DNS, NTP, SMTP), and serves cross- and local-instance solutions such as vRealize Suite Lifecycle Manager and the vRealize Suite components.)


Table 11-2. Logical Components of Clustered Workspace ONE Access

VMware Cloud Foundation instances with a single availability zone:
- A three-node Workspace ONE Access cluster behind an NSX load balancer and deployed on an overlay-backed (recommended) or VLAN-backed NSX segment is deployed in the first VMware Cloud Foundation instance.
- All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
- Each node of the three-node cluster is configured as a connector to any relevant identity providers.
- vSphere HA protects the Workspace ONE Access nodes.
- vSphere DRS anti-affinity rules ensure that the Workspace ONE Access nodes run on different ESXi hosts.
- An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.

VMware Cloud Foundation instances with multiple availability zones:
- A three-node Workspace ONE Access cluster behind an NSX load balancer and deployed on an overlay-backed (recommended) or VLAN-backed NSX segment.
- All Workspace ONE Access services and databases are configured for high availability using a native cluster configuration. SDDC solutions that are portable across VMware Cloud Foundation instances are integrated with this Workspace ONE Access cluster.
- Each node of the three-node cluster is configured as a connector to any relevant identity providers.
- vSphere HA protects the Workspace ONE Access nodes.
- A vSphere DRS anti-affinity rule ensures that the Workspace ONE Access nodes run on different ESXi hosts.
- A should-run-on-hosts-in-group vSphere DRS rule ensures that, under normal operating conditions, the Workspace ONE Access nodes run on management ESXi hosts in the first availability zone.
- An additional single-node Workspace ONE Access instance is deployed on an overlay-backed (recommended) or VLAN-backed NSX segment in all other VMware Cloud Foundation instances.

Sizing Considerations for Workspace ONE Access for VMware Cloud Foundation

When you deploy Workspace ONE Access, you select to deploy the appliance with a size that is
suitable for the scale of your environment. The option that you select determines the number of
CPUs and the amount of memory of the appliance.

For detailed sizing based on the overall profile of the VMware Cloud Foundation instance you
plan to deploy, see VMware Cloud Foundation Planning and Preparation Workbook.

Table 11-3. Sizing Considerations for Workspace ONE Access

Workspace ONE Access appliance size and supported limits:
- Extra Small: 3,000 users, 30 groups
- Small: 5,000 users, 50 groups
- Medium: 10,000 users, 100 groups (the minimum requirement for vRealize Automation)
- Large: 25,000 users, 250 groups
- Extra Large: 50,000 users, 500 groups
- Extra Extra Large: 100,000 users, 1,000 groups

Network Design for Workspace ONE Access


For secure access to the UI and API of Workspace ONE Access, you deploy the nodes on an
overlay-backed or VLAN-backed NSX network segment.

Network Segment
This network design has the following features:

n All Workspace ONE Access components have routed access to the management VLAN
through the Tier-0 gateway in the NSX instance for the management domain.

n Routing to the management network and other external networks is dynamic and is based on
the Border Gateway Protocol (BGP).


Figure 11-3. Network Design for Standard Workspace ONE Access

(Figure content: A single Workspace ONE Access node (WSA Node 1) in VCF Instance A is connected to the cross-instance NSX segment behind the cross-instance NSX Tier-1 gateway and the cross-instance NSX Tier-0 gateway, alongside local-instance NSX Tier-1 gateways. The Tier-0 gateway provides routed access to the management VLANs, the management and VI workload domain vCenter Server instances in VCF Instances A and B, the physical upstream routers, the data center user directory, and the Internet or enterprise network.)


Figure 11-4. Network Design for Clustered Workspace ONE Access

(Figure content: The three-node Workspace ONE Access cluster (WSA Node 1, 2, and 3) in VCF Instance A is connected to the cross-instance NSX segment behind an NSX load balancer, the cross-instance NSX Tier-1 gateway, and the cross-instance NSX Tier-0 gateway, alongside local-instance NSX Tier-1 gateways. The Tier-0 gateway provides routed access to the management VLANs, the management and VI workload domain vCenter Server instances in VCF Instances A and B, the physical upstream routers, the data center user directory, and the Internet or enterprise network.)

Load Balancing
A Workspace ONE Access cluster deployment requires a load balancer to manage connections
to the Workspace ONE Access services.

Load-balancing services are provided by NSX. During the deployment of the Workspace ONE
Access cluster or scale-out of a standard deployment, vRealize Suite Lifecycle Manager and
SDDC Manager coordinate to automate the configuration of the NSX load balancer. The load
balancer is configured with the following settings:


Table 11-4. Clustered Workspace ONE Access Load Balancer Configuration

Load Balancer Element Settings

Service Monitor n Use the default intervals and timeouts:


n Monitoring interval: 3 seconds
n Idle timeout period: 10 seconds
n Rise/Fall: 3 seconds
n HTTP request:
n HTTP method: Get
n HTTP request version: 1.1
n Request URL: /SAAS/API/1.0/REST/system/health/
heartbeat.
n HTTP response:
n HTTP response code: 200
n HTTP response body: OK
n SSL configuration:
n Server SSL: Enabled
n Client certificate: Cross-instance Workspace ONE
Access cluster certificate
n SSL profile: default-balanced-server-ssl-profile

Server Pool n LEAST_CONNECTION algorithm.


n Set the SNAT translation mode to Auto Map for the
pool.
n Static members:
n Name: host name
n IP: IP address
n Port: 443
n Weight: 1
n State: Enabled
n Set the above service monitor.

HTTP Application Profile n Timeout


n 3,600 seconds (60 minutes).
n X-Forwarded-For
n Insert


Cookie Persistence Profile n Cookie name


n JSESSIONID.
n Cookie mode
n Rewrite

Virtual Server n HTTP type


n L7
n Port
n 443
n IP
n Workspace ONE Access cluster IP
n Persistence
n Above Cookie Persistence Profile.
n Application profile
n Above HTTP application profile.
n Server pool
n Above server pool
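
SDDC Manager and vRealize Suite Lifecycle Manager apply this configuration automatically, so you do not create these objects yourself. The sketch below only illustrates what the service monitor portion of Table 11-4 looks like when expressed against the NSX Policy API, for example when verifying the resulting configuration. The endpoint and field names follow the NSX-T 3.x Policy API as commonly documented, but treat them as assumptions and confirm them against the API guide for the NSX version in your VMware Cloud Foundation release.

# Illustration only: the Workspace ONE Access service monitor from Table 11-4
# expressed as an NSX Policy API call. SDDC Manager automates this
# configuration; do not apply it manually to a VMware Cloud Foundation
# deployment. Endpoint and field names are based on the NSX-T 3.x Policy API
# and should be verified against your NSX version.
import requests

NSX_FQDN = "sfo-m01-nsx01.sfo.rainpole.io"  # example host name
NSX_USER = "admin"
NSX_PASS = "example-password"

monitor_profile = {
    "resource_type": "LBHttpsMonitorProfile",
    "display_name": "xreg-wsa01-monitor",
    "monitor_port": 443,
    "interval": 3,            # monitoring interval in seconds
    "timeout": 10,            # idle timeout period in seconds
    "rise_count": 3,
    "fall_count": 3,
    "request_method": "GET",
    "request_url": "/SAAS/API/1.0/REST/system/health/heartbeat",
    "request_version": "HTTP_VERSION_1_1",
    "response_status_codes": [200],
    "response_body": "OK",
    # The server SSL profile and client certificate binding from Table 11-4
    # are omitted here for brevity.
}

response = requests.patch(
    f"https://{NSX_FQDN}/policy/api/v1/infra/lb-monitor-profiles/xreg-wsa01-monitor",
    json=monitor_profile,
    auth=(NSX_USER, NSX_PASS),
    verify="/etc/ssl/certs/ca-certificates.crt",  # use your CA bundle
    timeout=30,
)
response.raise_for_status()
print("Monitor profile applied:", response.status_code)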

Integration Design for Workspace ONE Access with VMware Cloud Foundation

You integrate supported SDDC components with the Workspace ONE Access cluster to enable
authentication through the identity and access management services.

After the integration, information security and access control configurations for the integrated
SDDC products can be configured.

Table 11-5. Workspace ONE Access SDDC Integration

vCenter Server
- Integration: Not supported.
- Considerations: For directory services, you must connect vCenter Server directly to Active Directory. See Identity and Access Management for VMware Cloud Foundation.

SDDC Manager
- Integration: Not supported.
- Considerations: SDDC Manager uses vCenter Single Sign-On. For directory services, you must connect vCenter Server directly to Active Directory.

NSX
- Integration: Supported.
- Considerations: If you intend to scale out to an environment with multiple VMware Cloud Foundation instances, for example, for disaster recovery, you must deploy an additional standard instance of Workspace ONE Access in each VMware Cloud Foundation instance. The Workspace ONE Access instance that is leveraged by components protected across VMware Cloud Foundation instances might fail over between physical locations, which will impact the authentication to NSX in the first VMware Cloud Foundation instance. See Identity and Access Management for VMware Cloud Foundation.

vRealize Suite Lifecycle Manager
- Integration: Supported.
- Considerations: None.

See VMware Cloud Foundation Validated Solutions for the design for specific vRealize Suite
components including identity management.

Deployment Model for Workspace ONE Access


Workspace ONE Access is distributed as a virtual appliance in OVA format that you can deploy
and manage from vRealize Suite Lifecycle Manager together with other vRealize Suite products.
The Workspace ONE Access appliance includes identity and access management services.

Deployment Type
You consider the deployment type, standard or cluster, according to the design objectives for
the availability and number of users that the system and integrated SDDC solutions must support.
You deploy Workspace ONE Access on the default management vSphere cluster.


Table 11-6. Topology Attributes of Workspace ONE Access

Standard (Recommended)
- Description: Single node. NSX load balancer automatically deployed.
- Benefits: Can be scaled out to a three-node cluster behind an NSX load balancer. Can leverage vSphere HA for recovery after a failure occurs. Consumes fewer resources.
- Drawbacks: Does not provide high availability for Identity Provider connectors.

Cluster
- Description: Three-node clustered deployment using the internal PostgreSQL database. NSX load balancer automatically deployed.
- Benefits: Provides high availability for Identity Provider connectors.
- Drawbacks: May require manual intervention after a failure occurs. Consumes additional resources.

Workspace ONE Access Design Requirements and Recommendations for VMware Cloud Foundation

Consider the placement, networking, sizing and high availability requirements for using
Workspace ONE Access for identity and access management of SDDC solutions on a standard
or stretched management cluster in VMware Cloud Foundation. Apply similar best practices for
having Workspace ONE Access operate in an optimal way.

Workspace ONE Access Design Requirements


You must meet the following design requirements in your Workspace ONE Access design for
VMware Cloud Foundation, considering standard or stretched clusters. For NSX Federation,
additional requirements exist.

Table 11-7. Workspace ONE Access Design Requirements for VMware Cloud Foundation

VCF-WSA-REQD-ENV-001
- Design Requirement: Create a global environment in vRealize Suite Lifecycle Manager to support the deployment of Workspace ONE Access.
- Justification: A global environment is required by vRealize Suite Lifecycle Manager to deploy Workspace ONE Access.
- Implication: None.

VCF-WSA-REQD-SEC-001
- Design Requirement: Import certificate authority-signed certificates to the Locker repository for Workspace ONE Access product life cycle operations.
- Justification: You can reference and use certificate authority-signed certificates during product life cycle operations, such as deployment and certificate replacement.
- Implication: When using the API, you must specify the Locker ID for the certificate to be used in the JSON payload.

VCF-WSA-REQD-CFG-001
- Design Requirement: Deploy an appropriately sized Workspace ONE Access instance according to the deployment model you have selected by using vRealize Suite Lifecycle Manager in VMware Cloud Foundation mode.
- Justification: The Workspace ONE Access instance is managed by vRealize Suite Lifecycle Manager and imported into the SDDC Manager inventory.
- Implication: None.

VCF-WSA-REQD-CFG-002
- Design Requirement: Place the Workspace ONE Access appliances on an overlay-backed or VLAN-backed NSX network segment.
- Justification: Provides a consistent deployment model for management applications in an environment with a single or multiple VMware Cloud Foundation instances.
- Implication: You must use an implementation in NSX to support this network configuration.

VCF-WSA-REQD-CFG-003
- Design Requirement: Use the embedded PostgreSQL database with Workspace ONE Access.
- Justification: Removes the need for external database services.
- Implication: None.

VCF-WSA-REQD-CFG-004
- Design Requirement: Add a VM group for Workspace ONE Access and set VM rules to restart the Workspace ONE Access VM group before any of the VMs that depend on it for authentication.
- Justification: You can define the startup order of virtual machines regarding the service dependency. The startup order ensures that vSphere HA powers on the Workspace ONE Access virtual machines in an order that respects product dependencies.
- Implication: None.

VCF-WSA-REQD-CFG-005
- Design Requirement: Connect the Workspace ONE Access instance to a supported upstream Identity Provider.
- Justification: You can integrate your enterprise directory with Workspace ONE Access to synchronize users and groups to the Workspace ONE Access identity and access management services.
- Implication: None.

VCF-WSA-REQD-CFG-006
- Design Requirement: If using clustered Workspace ONE Access, configure second and third native connectors that correspond to the second and third Workspace ONE Access cluster nodes to support the high availability of directory services access.
- Justification: Adding the additional native connectors provides redundancy and improves performance by load-balancing authentication requests.
- Implication: Each of the Workspace ONE Access cluster nodes must be joined to the Active Directory domain to use Active Directory with Integrated Windows Authentication with the native connector.

VCF-WSA-REQD-CFG-007
- Design Requirement: If using clustered Workspace ONE Access, use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway.
- Justification: During the deployment of Workspace ONE Access by using vRealize Suite Lifecycle Manager, SDDC Manager automates the configuration of an NSX load balancer for Workspace ONE Access to facilitate scale-out.
- Implication: You must use the load balancer that is configured by SDDC Manager and the integration with vRealize Suite Lifecycle Manager.

Table 11-8. Workspace ONE Access Design Requirements for Stretched Clusters in VMware Cloud Foundation

VCF-WSA-REQD-CFG-008
- Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
- Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
- Implication: If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually. Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.


Table 11-9. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-WSA-REQD-CFG-009
- Design Requirement: Configure the DNS settings for Workspace ONE Access to use DNS servers in each VMware Cloud Foundation instance.
- Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
- Implication: None.

VCF-WSA-REQD-CFG-010
- Design Requirement: Configure the NTP settings on Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
- Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
- Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.

Workspace ONE Access Design Recommendations


In your Workspace ONE Access design for VMware Cloud Foundation, you can apply certain
best practices.


Table 11-10. Workspace ONE Access Design Recommendations for VMware Cloud Foundation

VCF-WSA-RCMD-CFG-001
- Design Recommendation: Protect all Workspace ONE Access nodes using vSphere HA.
- Justification: Supports high availability for Workspace ONE Access.
- Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.

VCF-WSA-RCMD-CFG-002
- Design Recommendation: When using Active Directory as an Identity Provider, use Active Directory over LDAP as the Directory Service connection option.
- Justification: The native (embedded) Workspace ONE Access connector binds to Active Directory over LDAP using a standard bind authentication.
- Implication: In a multi-domain forest, where the Workspace ONE Access instance connects to a child domain, Active Directory security groups must have global scope. Therefore, members added to the Active Directory global security group must reside within the same Active Directory domain. If authentication to more than one Active Directory domain is required, additional Workspace ONE Access directories are required.

VCF-WSA-RCMD-CFG-003
- Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with a minimum of read-only access to Base DNs for users and groups as the service account for the Active Directory bind.
- Justification: Provides the following access control features: Workspace ONE Access connects to Active Directory with the minimum set of required permissions to bind and query the directory, and you can introduce improved accountability in tracking request-response interactions between Workspace ONE Access and Active Directory.
- Implication: You must manage the password life cycle of this account. If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector bind to each Active Directory domain over LDAP.

VCF-WSA-RCMD-CFG-004
- Design Recommendation: Configure the directory synchronization to synchronize only groups required for the integrated SDDC solutions.
- Justification: Limits the number of replicated groups required for each product. Reduces the replication interval for group information.
- Implication: You must manage the groups from your enterprise directory selected for synchronization to Workspace ONE Access.

VCF-WSA-RCMD-CFG-005
- Design Recommendation: Activate the synchronization of enterprise directory group members when a group is added to the Workspace ONE Access directory.
- Justification: When activated, members of the enterprise directory groups are synchronized to the Workspace ONE Access directory when groups are added. When deactivated, group names are synchronized to the directory, but members of the group are not synchronized until the group is entitled to an application or the group name is added to an access policy.
- Implication: None.

VCF-WSA-RCMD-CFG-006
- Design Recommendation: Enable Workspace ONE Access to synchronize nested group members by default.
- Justification: Allows Workspace ONE Access to update and cache the membership of groups without querying your enterprise directory.
- Implication: Changes to group membership are not reflected until the next synchronization event.

VCF-WSA-RCMD-CFG-007
- Design Recommendation: Add a filter to the Workspace ONE Access directory settings to exclude users from the directory replication.
- Justification: Limits the number of replicated users for Workspace ONE Access within the maximum scale.
- Implication: To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.

VCF-WSA-RCMD-CFG-008
- Design Recommendation: Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.
- Justification: You can configure the minimum required and extended user attributes to synchronize directory user accounts for Workspace ONE Access to be used as an authentication source for cross-instance vRealize Suite solutions.
- Implication: User accounts in your organization's enterprise directory must have the following required attributes mapped: firstName (for example, givenName for Active Directory), lastName (for example, sn for Active Directory), email (for example, mail for Active Directory), and userName (for example, sAMAccountName for Active Directory). If you require users to sign in with an alternate unique identifier, for example, userPrincipalName, you must map the attribute and update the identity and access management preferences.

VCF-WSA-RCMD-CFG-009
- Design Recommendation: Configure the Workspace ONE Access directory synchronization frequency to a reoccurring schedule, for example, 15 minutes.
- Justification: Ensures that any changes to group memberships in the corporate directory are available for integrated solutions in a timely manner.
- Implication: Schedule the synchronization interval to be longer than the time to synchronize from the enterprise directory. If users and groups are being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.

VCF-WSA-RCMD-SEC-001
- Design Recommendation: Create corresponding security groups in your corporate directory services for these Workspace ONE Access roles: Super Admin, Directory Admins, and ReadOnly Admin.
- Justification: Streamlines the management of Workspace ONE Access roles to users.
- Implication: You must set the appropriate directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period. You must create the security group outside of the SDDC stack.

VCF-WSA-RCMD-SEC-002
- Design Recommendation: Configure a password policy for Workspace ONE Access local directory users, admin and configadmin.
- Justification: You can set a policy for Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards. The password policy is applicable only to the local directory users and does not impact your organization directory.
- Implication: You must set the policy in accordance with your organization policies and regulatory standards, as applicable. You must apply the password policy on the Workspace ONE Access cluster nodes.

12 Life Cycle Management Design for VMware Cloud Foundation

In a VMware Cloud Foundation instance, you use SDDC Manager for life cycle management of
the management components in the entire instance except for NSX Global Manager and vRealize
Suite Lifecycle Manager.

Life cycle management of a VMware Cloud Foundation instance is the process of performing
patch updates or upgrades to the underlying management components.

Table 12-1. Life Cycle Management for VMware Cloud Foundation

SDDC Manager
- Management domain: SDDC Manager performs its own life cycle management.
- VI workload domain: Not applicable.

NSX Local Manager
- Management and VI workload domains: SDDC Manager uses the NSX upgrade coordinator service in the NSX Local Manager.

NSX Edges
- Management and VI workload domains: SDDC Manager uses the NSX upgrade coordinator service in NSX Manager.

NSX Global Manager
- Management and VI workload domains: You manually use the NSX upgrade coordinator service in the NSX Global Manager.

vCenter Server
- Management and VI workload domains: You use SDDC Manager for life cycle management of all vCenter Server instances.

ESXi
- Management domain: SDDC Manager uses vSphere Lifecycle Manager baselines and baseline groups to update and upgrade the ESXi hosts in the management domain. vSphere Lifecycle Manager images are not supported in the management domain. Custom vendor ISOs are supported and might be required depending on the ESXi hardware in use.
- VI workload domain: SDDC Manager uses either vSphere Lifecycle Manager baselines and baseline groups or vSphere Lifecycle Manager images to update and upgrade the ESXi hosts. Custom vendor ISOs are supported and might be required depending on the ESXi hardware in use.

vRealize Suite Lifecycle Manager
- Management domain: vRealize Suite Lifecycle Manager performs its own life cycle management.
- VI workload domain: Not applicable.
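
The life cycle state that SDDC Manager tracks can also be inspected over its public API, for example to list the update bundles it has downloaded before planning an upgrade. In the sketch below, the /v1/tokens and /v1/bundles endpoints are part of the VMware Cloud Foundation API, but the response field names vary between releases, so treat the printed attributes as assumptions and check them against the API reference for your version.

# A minimal sketch: authenticate to the SDDC Manager API and list the update
# bundles it knows about before planning an upgrade. Response field names
# ("elements", "id", "description") are assumptions to verify against the
# VMware Cloud Foundation API reference for your release.
import requests

SDDC_MANAGER_FQDN = "sfo-vcf01.sfo.rainpole.io"  # example host name
API_USER = "administrator@vsphere.local"
API_PASS = "example-password"
CA_BUNDLE = "/etc/ssl/certs/ca-certificates.crt"

# Exchange user credentials for an access token.
token_response = requests.post(
    f"https://{SDDC_MANAGER_FQDN}/v1/tokens",
    json={"username": API_USER, "password": API_PASS},
    verify=CA_BUNDLE,
    timeout=30,
)
token_response.raise_for_status()
access_token = token_response.json()["accessToken"]

# List the bundles available to SDDC Manager.
bundles_response = requests.get(
    f"https://{SDDC_MANAGER_FQDN}/v1/bundles",
    headers={"Authorization": f"Bearer {access_token}"},
    verify=CA_BUNDLE,
    timeout=30,
)
bundles_response.raise_for_status()

for bundle in bundles_response.json().get("elements", []):
    print(bundle.get("id"), "-", bundle.get("description"))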


VMware Cloud Foundation Life Cycle Management Requirements

Consider the design requirements for automated and centralized life cycle management in the
context of the entire VMware Cloud Foundation environment.

Table 12-2. Life Cycle Management Design Requirements for VMware Cloud Foundation

VCF-LCM-REQD-001
- Design Requirement: Use SDDC Manager to perform the life cycle management of the following components: SDDC Manager, NSX Manager, NSX Edges, vCenter Server, and ESXi.
- Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
- Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

VCF-LCM-REQD-002
- Design Requirement: Use vRealize Suite Lifecycle Manager to manage the life cycle of the following components: vRealize Suite Lifecycle Manager and Workspace ONE Access.
- Justification: vRealize Suite Lifecycle Manager automates the life cycle of vRealize Suite Lifecycle Manager and Workspace ONE Access.
- Implication: You must deploy vRealize Suite Lifecycle Manager by using SDDC Manager. You must manually apply patches, updates, and hot fixes for Workspace ONE Access. Patches, updates, and hotfixes for Workspace ONE Access are not generally managed by vRealize Suite Lifecycle Manager.


Table 12-3. Lifecycle Management Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-LCM-REQD-003
- Design Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
- Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
- Implication: You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure prior to the upgrade of the NSX Global Manager nodes. You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.

VCF-LCM-REQD-004
- Design Requirement: Establish an operations practice to ensure that prior to the upgrade of any workload domain, the impact of any version upgrades is evaluated in relation to the need to upgrade NSX Global Manager.
- Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
- Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.

VCF-LCM-REQD-005
- Design Requirement: Establish an operations practice to ensure that prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
- Justification: The versions of NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, upgrade to an unsupported version cannot be prevented.
- Implication: The administrator must establish and follow an operations practice by using a runbook or automated process to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
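
Part of the runbook behind VCF-LCM-REQD-004 and VCF-LCM-REQD-005 can be automated. The sketch below compares the version reported by the NSX Global Manager with the versions reported by each NSX Local Manager before an upgrade is approved. GET /api/v1/node is an NSX Manager API call; the "product_version" attribute name and the host names are assumptions to confirm for your NSX release and environment.

# A minimal sketch for the runbook step behind VCF-LCM-REQD-004/-005: compare
# the version reported by the NSX Global Manager with each NSX Local Manager
# before approving an upgrade. The "product_version" attribute name is an
# assumption to confirm for your NSX release.
import requests

NSX_USER = "admin"
NSX_PASS = "example-password"
CA_BUNDLE = "/etc/ssl/certs/ca-certificates.crt"

GLOBAL_MANAGER = "xreg-nsx-gm01.rainpole.io"          # example host names
LOCAL_MANAGERS = ["sfo-m01-nsx01.sfo.rainpole.io",
                  "sfo-w01-nsx01.sfo.rainpole.io"]


def nsx_version(fqdn: str) -> str:
    """Return the product version reported by an NSX Manager node."""
    response = requests.get(
        f"https://{fqdn}/api/v1/node",
        auth=(NSX_USER, NSX_PASS),
        verify=CA_BUNDLE,
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("product_version", "unknown")


gm_version = nsx_version(GLOBAL_MANAGER)
print(f"NSX Global Manager {GLOBAL_MANAGER}: {gm_version}")

for manager in LOCAL_MANAGERS:
    lm_version = nsx_version(manager)
    flag = "" if lm_version == gm_version else "  <-- verify interoperability"
    print(f"NSX Local Manager {manager}: {lm_version}{flag}")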

13 Logging and Monitoring Design for VMware Cloud Foundation

By using VMware or third-party components, collect log data from all SDDC management
components in your VMware Cloud Foundation environment in a central place. You can use
vRealize Log Insight as the central platform because of its native integration with vRealize Suite
Lifecycle Manager.

After you deploy vRealize Log Insight by using vRealize Suite Lifecycle Manager in VMware
Cloud Foundation mode, SDDC Manager configures vRealize Suite Lifecycle Manager logging to
vRealize Log Insight over the log ingestion API. For information about on-premises vRealize Log
Insight logging in VMware Cloud Foundation, see Intelligent Logging and Analytics for VMware
Cloud Foundation.
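
SDDC Manager performs this log configuration automatically for vRealize Suite Lifecycle Manager. For components that you integrate yourself, the ingestion API can be exercised directly; the sketch below sends a single test event. The port (9543) and the /api/v1/events/ingest/<agent-id> path follow the vRealize Log Insight ingestion (CFAPI) interface as I understand it, so confirm them against the vRealize Log Insight documentation for your version.

# A minimal sketch: send one test event to the vRealize Log Insight ingestion
# API. The port (9543) and the /api/v1/events/ingest/<agent-id> path follow
# the Log Insight ingestion (CFAPI) interface; confirm them against the
# documentation for your vRealize Log Insight version.
import requests

VRLI_FQDN = "sfo-vrli01.sfo.rainpole.io"            # example host name
AGENT_ID = "00000000-0000-0000-0000-000000000000"   # any UUID identifying the sender

event = {
    "events": [
        {
            "text": "Test event from a custom SDDC integration",
            "fields": [
                {"name": "appname", "content": "custom-integration"},
                {"name": "vcf_instance", "content": "sfo"},
            ],
        }
    ]
}

response = requests.post(
    f"https://{VRLI_FQDN}:9543/api/v1/events/ingest/{AGENT_ID}",
    json=event,
    verify="/etc/ssl/certs/ca-certificates.crt",  # use your CA bundle
    timeout=30,
)
response.raise_for_status()
print("Ingestion response:", response.status_code)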

14 Information Security Design for VMware Cloud Foundation

You design management of access controls, certificates and accounts for VMware Cloud
Foundation according to the requirements of your organization.

Read the following topics next:

n Access Management for VMware Cloud Foundation

n Account Management Design for VMware Cloud Foundation

n Certificate Management for VMware Cloud Foundation

Access Management for VMware Cloud Foundation


You design access management for VMware Cloud Foundation according to industry standards
and the requirements of your organization.

SDDC Manager
- Access methods: UI, API, SSH
- Additional information: SSH is active by default. root user access is deactivated.

NSX Local Manager
- Access methods: UI, API, SSH
- Additional information: SSH is deactivated by default.

NSX Edges
- Access methods: API, SSH
- Additional information: SSH is deactivated by default.

NSX Global Manager
- Access methods: UI, API, SSH
- Additional information: The SSH setting is defined during deployment.

vCenter Server
- Access methods: UI, API, SSH, VAMI
- Additional information: SSH is active by default.

ESXi
- Access methods: Direct Console User Interface (DCUI), ESXi Shell, SSH, VMware Host Client
- Additional information: SSH and the ESXi Shell are deactivated by default.

vRealize Suite Lifecycle Manager
- Access methods: UI, API, SSH
- Additional information: SSH is active by default.

Workspace ONE Access
- Access methods: UI, API, SSH
- Additional information: SSH is active by default.

Account Management Design for VMware Cloud Foundation


You design account management for VMware Cloud Foundation according to industry standards
and the requirements of your organization.

Password Management Methods


SDDC Manager manages the life cycle of passwords for the components that are part of the
VMware Cloud Foundation instance. Multiple methods for managing password life cycle are
supported.

Table 14-1. Password Management Methods in VMware Cloud Foundation

- Rotate: Update one or more accounts with an auto-generated password.
- Update: Update the password for a single account with a manually entered password.
- Remediate: Reconcile a single account with a password that has been set manually at the component.
- Schedule: Schedule auto-rotation for one or more selected accounts.
- Manual: Update a password manually, directly in the component.
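
Rotation and the other methods in Table 14-1 are also exposed through the SDDC Manager credentials API, which is useful when scripting an operational runbook. The sketch below requests rotation of a single account. The /v1/tokens and /v1/credentials endpoints are part of the VMware Cloud Foundation API, but the exact structure of the operation payload is an assumption to verify against the API reference for your release.

# A minimal sketch: ask SDDC Manager to rotate one account through its
# credentials API. The payload structure below is my best understanding of
# the credential operation spec and should be verified against the VMware
# Cloud Foundation API reference for your release.
import requests

SDDC_MANAGER_FQDN = "sfo-vcf01.sfo.rainpole.io"  # example host name
CA_BUNDLE = "/etc/ssl/certs/ca-certificates.crt"

# Obtain an API token first (see the VMware Cloud Foundation API reference).
token = requests.post(
    f"https://{SDDC_MANAGER_FQDN}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "example-password"},
    verify=CA_BUNDLE,
    timeout=30,
).json()["accessToken"]

rotation_request = {
    "operationType": "ROTATE",
    "elements": [
        {
            "resourceName": "sfo-m01-vc01.sfo.rainpole.io",  # example vCenter Server FQDN
            "resourceType": "VCENTER",
            "credentials": [{"credentialType": "SSH", "username": "root"}],
        }
    ],
}

response = requests.patch(
    f"https://{SDDC_MANAGER_FQDN}/v1/credentials",
    json=rotation_request,
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_BUNDLE,
    timeout=30,
)
response.raise_for_status()
print("Rotation task accepted:", response.status_code)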

Account and Password Management


VMware Cloud Foundation comprises multiple types of interactive, local, and service accounts.
Each account has different attributes and is managed as described in Table 14-2.

For more information on password complexity, account lockout, or integration with additional
Identity Providers, refer to Identity and Access Management for VMware Cloud Foundation.


Table 14-2. Account and Password Management in VMware Cloud Foundation

SDDC Manager
- admin@local: Manual by using the SDDC Manager API. Default expiry: never. Local appliance account; API access (break-glass account).
- vcf: Manual by using the OS. Default expiry: 365 days. Local appliance account; OS-level access.
- root: Manual by using the OS. Default expiry: 90 days. Local appliance account; OS-level access.
- backup: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 365 days. Local appliance account; OS-level access.
- administrator@vsphere.local: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. vCenter Single Sign-On account; application and API access. An additional VMware Cloud Foundation Admin account is required to perform manual password rotation.

NSX Local Manager
- admin: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level, API, and application access.
- root: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level access.
- audit: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level access; read-only application-level access.

NSX Edges
- admin: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level, API, and application access.
- root: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level access.
- audit: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level access; read-only application-level access.

NSX Global Manager
- admin: Manual by using the NSX Global Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level, API, and application access.
- root: Manual by using each NSX Global Manager appliance. Default expiry: 90 days. Local appliance account; OS-level access.
- audit: Manual by using the NSX Global Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level access; read-only application-level access.

vCenter Server
- root: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. Local appliance account; OS-level access; VAMI access.
- administrator@isolatedsso.local: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 90 days. vCenter Single Sign-On account; application and API access. Relevant to isolated workload domains.
- svc-nsx-manager-hostname-vcenter-server-hostname: System managed; automatically rotated every 30 days by default. Default expiry: none. Service account between NSX Manager and vCenter Server.
- svc-vrslcm-hostname-vcenter-server-hostname: System managed; automatically rotated every 30 days by default. Default expiry: none. Service account between vRealize Suite Lifecycle Manager and vCenter Server.

ESXi
- root: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 99999 (never). Manual.
- svc-vcf-esxi-hostname: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 99999 (never). Service account between SDDC Manager and the ESXi host.

vRealize Suite Lifecycle Manager
- vcfadmin@local: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: never. API and application access.
- root: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 365 days. Local appliance account; OS-level access.

Workspace ONE Access
- root: Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: 60 days. Local appliance account; OS-level access.
- sshuser: Managed by vRealize Suite Lifecycle Manager. Default expiry: 60 days. Local appliance account; OS-level access.
- admin (port 8443): Managed by vRealize Suite Lifecycle Manager. System Admin.
- admin (port 443): Rotate, update, remediate, or schedule by using the SDDC Manager UI or API. Default expiry: never. Default application administrator.
- configadmin: You must use both Workspace ONE Access and vRealize Suite Lifecycle Manager to manage the password rotation schedule of the configadmin user. Default expiry: never. Application configuration administrator.

Account Management Design Recommendations


In your account management design, you can apply certain best practices.

Table 14-3. Design Recommendations for Account and Password Management for VMware Cloud Foundation

VCF-ACTMGT-REQD-SEC-001: Enable scheduled password rotation in SDDC Manager for all accounts that support scheduled rotation.
  Justification: Increases the security posture of your SDDC and simplifies password management across your SDDC management components.
  Implication: You must retrieve new passwords by using the API if you must use accounts interactively. (An API rotation sketch follows this table.)

VCF-ACTMGT-REQD-SEC-002: Establish an operational practice to rotate passwords by using SDDC Manager on components that do not support scheduled rotation in SDDC Manager.
  Justification: Rotates passwords and automatically remediates the SDDC Manager database for those user accounts.
  Implication: None.

VCF-ACTMGT-REQD-SEC-003: Establish an operational practice to manually rotate passwords on components that cannot be rotated by SDDC Manager.
  Justification: Maintains password policies across components that are not handled by SDDC Manager password management.
  Implication: None.
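The scheduled and manual rotation practices above can also be driven programmatically. The following Python sketch shows one possible way to request a rotation through the SDDC Manager API; the endpoint paths (/v1/tokens, /v1/credentials), the operationType value, and the payload layout are assumptions drawn from the public SDDC Manager API and must be verified against the VMware Cloud Foundation API reference for your release. The host name and credentials are placeholders.

# Minimal sketch: trigger a password rotation through the SDDC Manager API.
# Assumptions (verify against the API reference for your VCF release):
#   POST /v1/tokens issues a bearer token, GET /v1/credentials lists managed
#   accounts, and PATCH /v1/credentials with operationType "ROTATE" starts a
#   rotation task. Host name and credentials below are placeholders.
import requests

SDDC_MANAGER = "https://sddc-manager.rainpole.io"   # placeholder FQDN
API_USER = "administrator@vsphere.local"            # placeholder account
API_PASSWORD = "VMware123!"                         # placeholder password

def get_token(session: requests.Session) -> str:
    """Request a bearer token from SDDC Manager (assumed /v1/tokens endpoint)."""
    response = session.post(
        f"{SDDC_MANAGER}/v1/tokens",
        json={"username": API_USER, "password": API_PASSWORD},
        verify=False,  # replace with the CA bundle used for SDDC Manager
    )
    response.raise_for_status()
    return response.json()["accessToken"]

def rotate_esxi_root_passwords() -> None:
    with requests.Session() as session:
        session.headers["Authorization"] = f"Bearer {get_token(session)}"
        credentials = session.get(f"{SDDC_MANAGER}/v1/credentials", verify=False).json()
        # Select the ESXi root accounts that SDDC Manager manages (field names
        # are assumptions; inspect the response payload in your environment).
        targets = [
            {
                "resourceName": c["resource"]["resourceName"],
                "resourceType": c["resource"]["resourceType"],
                "credentials": [{"credentialType": "SSH", "username": "root"}],
            }
            for c in credentials.get("elements", [])
            if c["resource"]["resourceType"] == "ESXI" and c.get("username") == "root"
        ]
        task = session.patch(
            f"{SDDC_MANAGER}/v1/credentials",
            json={"operationType": "ROTATE", "elements": targets},
            verify=False,
        )
        task.raise_for_status()
        print("Rotation task accepted:", task.json())

if __name__ == "__main__":
    rotate_esxi_root_passwords()

Because the rotation runs as an asynchronous task, an operational script would normally also poll the returned task until it completes before reporting success.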


Certificate Management for VMware Cloud Foundation


You design certificate management for VMware Cloud Foundation according to industry
standards and the requirements of your organization.

Access to all management component interfaces must be over a Secure Sockets Layer (SSL)
connection. During deployment, each component is assigned a certificate from a default signing
CA. To provide secure access to each component, replace the default certificate with a trusted
enterprise CA-signed certificate.

Table 14-4. Certificate Management in VMware Cloud Foundation
(Each entry lists the component, its default signing CA, and the life cycle method for enterprise CA-signed certificates.)

- SDDC Manager: Management domain VMCA; life cycle managed by using SDDC Manager.
- NSX Local Manager: Management domain VMCA; life cycle managed by using SDDC Manager.
- NSX Edges: Not applicable; not applicable.
- NSX Global Manager: Self-signed; manual life cycle management.
- vCenter Server: Local workload domain VMCA; life cycle managed by using SDDC Manager.
- ESXi: Local workload domain VMCA; manual life cycle management*.
- vRealize Suite Lifecycle Manager: Management domain VMCA; life cycle managed by using SDDC Manager.

Note * To use enterprise CA-signed certificates with ESXi, the initial deployment of VMware
Cloud Foundation must be done by using the API, providing the trusted root certificate.

Table 14-5. Certificate Management Design Recommendations for VMware Cloud Foundation

VCF-SDDC-RCMD-SEC-001: Replace the default VMCA-signed certificate on all management virtual appliances with a certificate that is signed by an internal certificate authority.
  Justification: Ensures that the communication to all management components is secure.
  Implication: Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

VCF-SDDC-RCMD-SEC-002: Use a SHA-2 algorithm or higher for signed certificates.
  Justification: The SHA-1 algorithm is considered less secure and has been deprecated.
  Implication: Not all certificate authorities support SHA-2 or higher. (A certificate check sketch follows this table.)

VCF-SDDC-RCMD-SEC-003: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
  Justification: SDDC Manager supports automated SSL certificate life cycle management rather than requiring a series of manual steps.
  Implication: Certificate management for NSX Global Manager instances must be done manually.
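To confirm that the replaced certificates are enterprise CA-signed, use SHA-2 or higher, and are not close to expiry, you can inspect what each management endpoint actually presents. The following Python sketch does this with the standard library ssl module and the third-party cryptography package; the FQDNs are placeholders for your environment.

# Minimal sketch: report issuer, signature algorithm, and expiry for the
# certificate presented by each management endpoint. FQDNs are placeholders;
# requires the third-party "cryptography" package (pip install cryptography).
import ssl
from cryptography import x509

MANAGEMENT_ENDPOINTS = [
    "sddc-manager.rainpole.io",
    "vcenter-mgmt.rainpole.io",
    "nsx-mgmt.rainpole.io",
]

def describe_certificate(fqdn: str, port: int = 443) -> None:
    pem = ssl.get_server_certificate((fqdn, port))          # fetch the leaf certificate
    cert = x509.load_pem_x509_certificate(pem.encode())
    algorithm = cert.signature_hash_algorithm.name if cert.signature_hash_algorithm else "unknown"
    print(f"{fqdn}:")
    print(f"  issuer:    {cert.issuer.rfc4514_string()}")
    print(f"  signature: {algorithm}  (flag anything weaker than sha256)")
    print(f"  not after: {cert.not_valid_after} UTC")

if __name__ == "__main__":
    for endpoint in MANAGEMENT_ENDPOINTS:
        describe_certificate(endpoint)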

Appendix: Design Elements for VMware Cloud Foundation

This appendix aggregates all design requirements and recommendations in the design guidance
for VMware Cloud Foundation. You can use the list as a reference for the end state of your
platform, to track your adherence to the design, and to record any justification for deviations.

Read the following topics next:

- Architecture Design Elements for VMware Cloud Foundation
- Workload Domain Design Elements for VMware Cloud Foundation
- External Services Design Elements for VMware Cloud Foundation
- Physical Network Design Elements for VMware Cloud Foundation
- vSAN Design Elements for VMware Cloud Foundation
- ESXi Design Elements for VMware Cloud Foundation
- vCenter Server Design Elements for VMware Cloud Foundation
- vSphere Cluster Design Elements for VMware Cloud Foundation
- vSphere Networking Design Elements for VMware Cloud Foundation
- NSX Design Elements for VMware Cloud Foundation
- SDDC Manager Design Elements for VMware Cloud Foundation
- vRealize Suite Lifecycle Manager Design Elements for VMware Cloud Foundation
- Workspace ONE Access Design Elements for VMware Cloud Foundation
- Life Cycle Management Design Elements for VMware Cloud Foundation
- Information Security Design Elements for VMware Cloud Foundation

Architecture Design Elements for VMware Cloud Foundation


Use this list of recommendations for reference related to choosing the standard or consolidated
architecture model of VMware Cloud Foundation.

For full design details, see Architecture Models and Workload Domain Types in VMware Cloud
Foundation.


Table 15-1. Architecture Model Recommendations for VMware Cloud Foundation

VCF-ARCH-RCMD-CFG-001: Use the standard architecture model of VMware Cloud Foundation.
  Justification: Aligns with the VMware best practice of separating management workloads from customer workloads; provides better long-term flexibility and expansion options.
  Implication: Requires additional hardware.
Workload Domain Design Elements for VMware Cloud


Foundation
Use this list of requirements and recommendations for reference related to the types of virtual
infrastructure (VI) workload domains in a VMware Cloud Foundation environment.

For full design details, see Workload Domain Types.

Table 15-2. Workload Domain Recommendations for VMware Cloud Foundation

VCF-WLD-RCMD-CFG-001: Use VI workload domains or isolated VI workload domains for customer workloads.
  Justification: Aligns with the VMware best practice of separating management workloads from customer workloads; provides better long-term flexibility and expansion options.
  Implication: Requires additional hardware.
External Services Design Elements for VMware Cloud


Foundation
Use this list of requirements for reference related to the configuration of external infrastructure
services in an environment with a single or multiple VMware Cloud Foundation instances. The
requirements define IP address allocation, name resolution, and time synchronization.

For full design details, see Chapter 4 External Services Design for VMware Cloud Foundation.


Table 15-3. External Services Design Requirements for VMware Cloud Foundation

VCF-EXT-REQD-NET-001: Allocate statically assigned IP addresses and host names for all workload domain components.
  Justification: Ensures stability across the VMware Cloud Foundation instance and makes it simpler to maintain, track, and implement a DNS configuration.
  Implication: You must provide precise IP address management.

VCF-EXT-REQD-NET-002: Configure forward and reverse DNS records for all workload domain components.
  Justification: Ensures that all components are accessible by using a fully qualified domain name instead of by using IP addresses only. It is easier to remember and connect to components across the VMware Cloud Foundation instance. (A DNS validation sketch follows this table.)
  Implication: You must provide DNS records for each component.

VCF-EXT-REQD-NET-003: Configure time synchronization by using an internal NTP time source for all workload domain components.
  Justification: Ensures that all components are synchronized with a valid time source.
  Implication: An operational NTP service must be available in the environment.

VCF-EXT-REQD-NET-004: Set the NTP service for all workload domain components to start automatically.
  Justification: Ensures that the NTP service remains synchronized after you restart a component.
  Implication: None.
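Because bring-up fails late when name resolution is wrong, it is worth checking the forward and reverse records before deployment. The following Python sketch performs that check with the standard library only; the host names are placeholders for your environment.

# Minimal sketch: verify forward and reverse DNS records for workload domain
# components before deployment. Host names are placeholders.
import socket

COMPONENT_FQDNS = [
    "sddc-manager.rainpole.io",
    "vcenter-mgmt.rainpole.io",
    "nsx-mgmt.rainpole.io",
    "esxi-01.rainpole.io",
]

def check_dns(fqdn: str) -> bool:
    try:
        ip_address = socket.gethostbyname(fqdn)                # forward lookup (A record)
        reverse_name, _, _ = socket.gethostbyaddr(ip_address)  # reverse lookup (PTR record)
    except socket.gaierror as error:
        print(f"{fqdn}: forward lookup failed ({error})")
        return False
    except socket.herror as error:
        print(f"{fqdn}: no PTR record ({error})")
        return False
    matches = reverse_name.lower().rstrip(".") == fqdn.lower()
    print(f"{fqdn} -> {ip_address} -> {reverse_name} ({'OK' if matches else 'MISMATCH'})")
    return matches

if __name__ == "__main__":
    results = [check_dns(fqdn) for fqdn in COMPONENT_FQDNS]
    print("All records consistent." if all(results) else "Fix DNS before bring-up.")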

Physical Network Design Elements for VMware Cloud


Foundation
Use this design decision list for reference related to the configuration of the physical network in
an environment with a single or multiple VMware Cloud Foundation instances. The design also
considers if an instance contains a single or multiple availability zones.

For full design details, see Chapter 5 Physical Network Infrastructure Design for VMware Cloud
Foundation.


Table 15-4. Leaf-Spine Physical Network Design Requirements for VMware Cloud Foundation

VCF-NET-REQD-CFG-001: Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks.
  Justification: Simplifies configuration of top of rack switches; teaming options available with vSphere Distributed Switch provide load balancing and failover; EtherChannel implementations might have vendor-specific limitations.
  Implication: None.

VCF-NET-REQD-CFG-002: Use VLANs to separate physical network functions.
  Justification: Supports physical network connectivity without requiring many NICs; isolates the different network functions in the SDDC so that you can have differentiated services and prioritized traffic as needed.
  Implication: Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts.

VCF-NET-REQD-CFG-003: Configure the VLANs as members of an 802.1Q trunk.
  Justification: All VLANs become available on the same physical network adapters on the ESXi hosts.
  Implication: Optionally, the management VLAN can act as the native VLAN.

VCF-NET-REQD-CFG-004: Set the MTU size to at least 1,700 bytes (9,000 bytes recommended for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types: overlay (Geneve), vSAN, and vSphere vMotion.
  Justification: Improves traffic throughput; supports Geneve by increasing the MTU size to a minimum of 1,600 bytes. Geneve is an extensible protocol and the MTU size might increase with future capabilities. While 1,600 bytes is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.
  Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size. In an environment with multiple availability zones, the MTU must be configured on the entire network path between the zones. (An MTU validation sketch follows this table.)

Table 15-5. Leaf-Spine Physical Network Design Requirements for NSX Federation in VMware Cloud Foundation

VCF-NET-REQD-CFG-005: Set the MTU size to at least 1,500 bytes (1,700 bytes preferred; 9,000 bytes recommended for jumbo frames) on the components of the physical network between the VMware Cloud Foundation instances for NSX Edge RTEP traffic.
  Justification: Jumbo frames are not required between VMware Cloud Foundation instances; however, increased MTU improves traffic throughput. Increasing the RTEP MTU to 1,700 bytes minimizes fragmentation for standard-size workload packets between VMware Cloud Foundation instances.
  Implication: When adjusting the MTU packet size, you must also configure the entire network path, that is, virtual interfaces, virtual switches, physical switches, and routers, to support the same MTU packet size.

VCF-NET-REQD-CFG-006: Ensure that the latency between VMware Cloud Foundation instances that are connected in an NSX Federation is less than 150 ms.
  Justification: A latency lower than 150 ms is required for cross vCenter Server vMotion and NSX Federation. (A latency measurement sketch follows this table.)
  Implication: None.

VCF-NET-REQD-CFG-007: Provide a routed connection between the NSX Manager clusters in VMware Cloud Foundation instances that are connected in an NSX Federation.
  Justification: Configuring NSX Federation requires connectivity between the NSX Global Manager instances, NSX Local Manager instances, and NSX Edge clusters.
  Implication: You must assign unique routable IP addresses for each fault domain.
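The 150 ms budget can be checked from a jump host in one instance before you enable NSX Federation. The following Python sketch estimates round-trip latency by timing TCP connections to a management endpoint in the other instance; the remote FQDN and port are placeholders, and the measurement includes the TCP handshake, so treat it as an approximation rather than a precise ICMP round-trip time.

# Minimal sketch: estimate round-trip latency between VMware Cloud Foundation
# instances and compare it against the 150 ms requirement for NSX Federation
# and cross vCenter Server vMotion.
import socket
import statistics
import time

REMOTE_ENDPOINT = ("nsx-gm01b.rainpole.io", 443)  # placeholder endpoint in the second instance
SAMPLES = 10
BUDGET_MS = 150.0

def measure_rtt_ms() -> float:
    start = time.perf_counter()
    with socket.create_connection(REMOTE_ENDPOINT, timeout=2):
        pass  # connection established; only the handshake time matters here
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    samples = [measure_rtt_ms() for _ in range(SAMPLES)]
    median_rtt = statistics.median(samples)
    print(f"median RTT to {REMOTE_ENDPOINT[0]}: {median_rtt:.1f} ms")
    if median_rtt >= BUDGET_MS:
        raise SystemExit("Latency exceeds the 150 ms requirement for NSX Federation.")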


Table 15-6. Leaf-Spine Physical Network Design Recommendations for VMware Cloud Foundation

VCF-NET-RCMD-CFG-001: Use two ToR switches for each rack.
  Justification: Supports the use of two 10-GbE (25-GbE or greater recommended) links to each server, provides redundancy, and reduces the overall design complexity.
  Implication: Requires two ToR switches per rack, which might increase costs.

VCF-NET-RCMD-CFG-002: Implement the following physical network architecture: one 25-GbE (10-GbE minimum) port on each ToR switch for ESXi host uplinks (host to ToR), and a Layer 3 device that supports BGP.
  Justification: Provides availability during a switch failure; provides support for the BGP dynamic routing protocol.
  Implication: Might limit the hardware choices; requires dynamic routing protocol configuration in the physical network.

VCF-NET-RCMD-CFG-003: Use a physical network that is configured for BGP routing adjacency.
  Justification: Supports design flexibility for routing multi-site and multi-tenancy workloads; BGP is the only dynamic routing protocol that is supported for NSX Federation; supports failover between ECMP Edge uplinks.
  Implication: Requires BGP configuration in the physical network.

VCF-NET-RCMD-CFG-004: Assign persistent IP configurations for NSX tunnel endpoints (TEPs) that use dynamic IP allocation rather than static IP pool addressing.
  Justification: Ensures that endpoints have a persistent TEP IP address. In VMware Cloud Foundation, TEP IP assignment over DHCP is required for advanced deployment topologies such as a management domain with multiple availability zones. Static pools in NSX Manager support only a single availability zone.
  Implication: Requires a DHCP server to be available on the TEP VLAN.

VCF-NET-RCMD-CFG-005: Set the lease duration for the DHCP scope for the host overlay network to at least 7 days.
  Justification: The IP addresses of the host overlay VMkernel ports are assigned by using a DHCP server. Host overlay VMkernel ports do not have an administrative endpoint, so they can use DHCP for automatic IP address assignment. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts. DHCP also simplifies the configuration of a default gateway for host overlay VMkernel ports if hosts within the same cluster are on separate Layer 2 domains.
  Implication: Requires configuration and management of a DHCP server.

VCF-NET-RCMD-CFG-006: Designate the trunk ports connected to ESXi NICs as trunk PortFast.
  Justification: Reduces the time to transition ports over to the forwarding state.
  Implication: Although this design does not use STP, switches usually have STP configured by default.

VCF-NET-RCMD-CFG-007: Configure VRRP, HSRP, or another Layer 3 gateway availability method for the management and edge overlay networks.
  Justification: Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway. Otherwise, a failure in the Layer 3 gateway causes disruption in the traffic in the SDN setup.
  Implication: Requires configuration of a high availability technology for the Layer 3 gateways in the data center.


Table 15-7. Leaf-Spine Physical Network Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-NET-RCMD-CFG-008: Provide high availability for the DHCP server or DHCP Helper used for VLANs requiring DHCP services that are stretched between availability zones.
  Justification: Ensures that a failover operation of a single availability zone will not impact DHCP availability.
  Implication: Requires configuration of a highly available DHCP server.

Table 15-8. Leaf-Spine Physical Network Design Recommendations for NSX Federation in VMware Cloud Foundation

VCF-NET-RCMD-CFG-009: Provide BGP routing between all VMware Cloud Foundation instances that are connected in an NSX Federation setup.
  Justification: BGP is the supported routing protocol for NSX Federation.
  Implication: None.

vSAN Design Elements for VMware Cloud Foundation


Use this list of requirements and recommendations for reference related to shared storage, vSAN
principal storage, and NFS supplemental storage in an environment with a single or multiple
VMware Cloud Foundation instances. The design also considers whether an instance contains a
single or multiple availability zones.

After you set up the physical storage infrastructure, the configuration tasks for most design
decisions are automated in VMware Cloud Foundation. You must perform the configuration
manually only for a limited number of design elements as noted in the design implication.

For full design details, see Chapter 6 vSAN Design for VMware Cloud Foundation.

Table 15-9. vSAN Design Requirements for VMware Cloud Foundation

VCF-VSAN-REQD-CFG-001: Provide sufficient raw capacity to meet the initial needs of the workload domain cluster.
  Justification: Ensures that sufficient resources are present to create the workload domain cluster.
  Implication: None.

VCF-VSAN-REQD-CFG-002: Provide at least the required minimum number of hosts according to the cluster type.
  Justification: Satisfies the requirements for storage availability.
  Implication: None.


Table 15-10. vSAN Design Requirements for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-REQD-CFG-003: Add the following setting to the default vSAN storage policy: Site disaster tolerance = Site mirroring - stretched cluster.
  Justification: Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.
  Implication: You might need additional policies if third-party virtual machines are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-REQD-CFG-004: Configure two fault domains, one for each availability zone. Assign each host to its respective availability zone fault domain.
  Justification: Fault domains are mapped to availability zones to provide logical host separation and ensure that a copy of vSAN data is always available even when an availability zone goes offline.
  Implication: You must provide additional raw storage when the Site mirroring - stretched cluster option is selected and fault domains are enabled.

VCF-VSAN-WTN-REQD-CFG-001: Deploy a vSAN witness appliance in a location that is not local to the ESXi hosts in any of the availability zones.
  Justification: Ensures availability of vSAN witness components in the event of a failure of one of the availability zones.
  Implication: You must provide a third physically separate location that runs a vSphere environment. You might use a VMware Cloud Foundation instance in a separate physical location.

VCF-VSAN-WTN-REQD-CFG-002: Deploy a witness appliance of a size that corresponds to the required capacity of the cluster.
  Justification: Ensures the witness appliance is sized to support the projected workload storage consumption.
  Implication: The vSphere environment at the witness location must satisfy the resource requirements of the witness appliance.

VCF-VSAN-WTN-REQD-CFG-003: Connect the first VMkernel adapter of the vSAN witness appliance to the management network in the witness site.
  Justification: Enables connecting the witness appliance to the workload domain vCenter Server.
  Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-REQD-CFG-004: Allocate a statically assigned IP address and host name to the management adapter of the vSAN witness appliance.
  Justification: Simplifies maintenance and tracking, and implements a DNS configuration.
  Implication: Requires precise IP address management.

VCF-VSAN-WTN-REQD-CFG-005: Configure forward and reverse DNS records for the vSAN witness appliance for the VMware Cloud Foundation instance.
  Justification: Enables connecting the vSAN witness appliance to the workload domain vCenter Server by FQDN instead of IP address.
  Implication: You must provide DNS records for the vSAN witness appliance.

VCF-VSAN-WTN-REQD-CFG-006: Configure time synchronization by using an internal NTP time source for the vSAN witness appliance.
  Justification: Prevents failures in the stretched cluster configuration that are caused by time mismatch between the vSAN witness appliance and the ESXi hosts in both availability zones and the workload domain vCenter Server.
  Implication: An operational NTP service must be available in the environment. All firewalls between the vSAN witness appliance and the NTP servers must allow NTP traffic on the required network ports.

VCF-VSAN-WTN-REQD-CFG-007: Configure an individual vSAN storage policy for each stretched cluster.
  Justification: The vSAN storage policy of a stretched cluster cannot be shared with other clusters.
  Implication: You must configure additional vSAN storage policies.

Table 15-11. vSAN Design Recommendations for VMware Cloud Foundation

VCF-VSAN-RCMD-CFG-001: Ensure that the storage I/O controller that is running the vSAN disk groups is capable and has a minimum queue depth of 256 set.
  Justification: Storage controllers with lower queue depths can cause performance and stability problems when running vSAN. vSAN ReadyNode servers are configured with the right queue depths for vSAN.
  Implication: Limits the number of compatible I/O controllers that can be used for storage.

VCF-VSAN-RCMD-CFG-002: Do not use the storage I/O controllers that are running vSAN disk groups for another purpose.
  Justification: Running non-vSAN disks, for example VMFS, on a storage I/O controller that is running a vSAN disk group can impact vSAN performance.
  Implication: If non-vSAN disks are required in ESXi hosts, you must have an additional storage I/O controller in the host.

VCF-VSAN-RCMD-CFG-003: Configure vSAN in all-flash configuration in the default workload domain cluster.
  Justification: Meets the performance needs of the default workload domain cluster.
  Implication: All vSAN disks must be flash disks, which might cost more than magnetic disks.

VCF-VSAN-RCMD-CFG-004: Provide sufficient raw capacity to meet the planned needs of the workload domain cluster.
  Justification: Ensures that sufficient resources are present in the workload domain cluster, preventing the need to expand the vSAN datastore in the future.
  Implication: None.

VCF-VSAN-RCMD-CFG-005: On the vSAN datastore, ensure that at least 30% of free space is always available.
  Justification: This free space, called slack space, is set aside for host maintenance mode data evacuation, component rebuilds, rebalancing operations, and VM snapshots.
  Implication: Increases the amount of raw storage needed. (A capacity sizing sketch follows this table.)

VCF-VSAN-RCMD-CFG-006: Configure vSAN with a minimum of two disk groups per ESXi host.
  Justification: Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.
  Implication: Using multiple disk groups requires more disks in each ESXi host.

VCF-VSAN-RCMD-CFG-007: For the cache tier in each disk group, use a flash-based drive that is at least 600 GB large.
  Justification: Provides enough cache for both hybrid and all-flash vSAN configurations to buffer I/O and ensure disk group performance. Additional space in the cache tier does not increase performance.
  Implication: Using larger flash disks can increase the initial host cost.

VCF-VSAN-RCMD-CFG-008: Use the default VMware vSAN storage policy.
  Justification: Provides the level of redundancy that is needed in the workload domain cluster; provides a level of performance that is enough for the individual workloads.
  Implication: You might need additional policies for third-party virtual machines hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

VCF-VSAN-RCMD-CFG-009: Leave the default virtual machine swap file as a sparse object on vSAN.
  Justification: Sparse virtual swap files consume capacity on vSAN only as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment, which would require the use of the virtual swap file.
  Implication: None.

VCF-VSAN-RCMD-CFG-010: Use the existing vSphere Distributed Switch instance in the workload domain cluster.
  Justification: Provides guaranteed performance for vSAN traffic in a connection-free network by using existing networking components.
  Implication: All traffic paths are shared over common uplinks.

VCF-VSAN-RCMD-CFG-011: Configure jumbo frames on the VLAN for vSAN traffic.
  Justification: Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic; reduces the CPU overhead resulting from high network usage.
  Implication: Every device in the network must support jumbo frames.
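A rough way to translate the capacity and slack-space recommendations above into a raw capacity target is a simple multiplier calculation. The following Python sketch applies a mirroring multiplier for the storage policy and reserves 30% slack; the math is deliberately simplified and ignores deduplication, compression, and per-disk overheads, so treat the output as a planning estimate only, and rely on the VMware Cloud Foundation Planning and Preparation Workbook for authoritative sizing.

# Minimal sketch: estimate the raw vSAN capacity needed for a cluster while
# keeping 30% slack space free. Simplified: RAID-1 FTT=1 doubles the footprint,
# and site mirroring in a stretched cluster doubles it again.
def required_raw_capacity_tb(
    usable_needed_tb: float,
    ftt_multiplier: float = 2.0,     # RAID-1 mirroring with FTT=1
    stretched: bool = False,         # site mirroring doubles the footprint again
    slack_fraction: float = 0.30,    # keep at least 30% free (slack space)
) -> float:
    replicated = usable_needed_tb * ftt_multiplier * (2.0 if stretched else 1.0)
    return replicated / (1.0 - slack_fraction)

if __name__ == "__main__":
    # Example: 40 TB of workload data.
    print(f"standard cluster:  {required_raw_capacity_tb(40):.1f} TB raw")
    print(f"stretched cluster: {required_raw_capacity_tb(40, stretched=True):.1f} TB raw")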

Table 15-12. vSAN Design Recommendations for Stretched Clusters with VMware Cloud Foundation

VCF-VSAN-WTN-RCMD-CFG-001: Configure the vSAN witness appliance to use the first VMkernel adapter, that is, the management interface, for vSAN witness traffic.
  Justification: Removes the requirement to have static routes on the witness appliance because witness traffic is routed over the management network.
  Implication: The management networks in both availability zones must be routed to the management network in the witness site.

VCF-VSAN-WTN-RCMD-CFG-002: Place witness traffic on the management VMkernel adapter of all the ESXi hosts in the workload domain.
  Justification: Separates the witness traffic from the vSAN data traffic. Witness traffic separation removes the requirement to have static routes from the vSAN networks in both availability zones to the witness site, and removes the requirement to have jumbo frames enabled on the path between each availability zone and the witness site because witness traffic can use a regular MTU size of 1500 bytes.
  Implication: The management networks in both availability zones must be routed to the management network in the witness site.


ESXi Design Elements for VMware Cloud Foundation


Use this list of requirements and recommendations for reference related to the ESXi host
configuration in an environment with a single or multiple VMware Cloud Foundation instances.
The design elements determine the ESXi hardware configuration, networking, life cycle
management and remote access.

For full design details, see ESXi Design for VMware Cloud Foundation.

Table 15-13. Design Requirements for ESXi Server Hardware

VCF-ESX-REQD-CFG-001: Install no less than the minimum number of ESXi hosts required for the cluster type being deployed.
  Justification: Ensures availability requirements are met. If one of the hosts is not available because of a failure or maintenance event, the CPU overcommitment ratio becomes 2:1. (A worked overcommitment example follows this table.)
  Implication: None.

VCF-ESX-REQD-CFG-002: Ensure each ESXi host matches the required CPU, memory, and storage specification.
  Justification: Ensures that workloads run without contention even during failure and maintenance conditions.
  Implication: Assemble the server specification and number according to the sizing in the VMware Cloud Foundation Planning and Preparation Workbook, which is based on the projected deployment size.

VCF-ESX-REQD-NET-001: Place the ESXi hosts in each management domain cluster on the VLAN-backed management network segment for vCenter Server and management components for NSX.
  Justification: Reduces the number of VLANs needed because a single VLAN can be allocated to the ESXi hosts, vCenter Server, and management components for NSX.
  Implication: Separation of the physical VLAN between ESXi hosts and other management components for security reasons is missing.

VCF-ESX-REQD-NET-002: Place the ESXi hosts in each VI workload domain cluster on a VLAN-backed management network segment other than that used by the management domain.
  Justification: Achieves physical VLAN security separation between VI workload domain ESXi hosts and other management components in the management domain.
  Implication: A new VLAN and a new subnet are required for the VI workload domain management network.

VCF-ESX-REQD-SEC-001: Regenerate the certificate of each ESXi host after assigning the host an FQDN.
  Justification: Establishes a secure connection with VMware Cloud Builder during the deployment of a workload domain and prevents man-in-the-middle (MiTM) attacks.
  Implication: You must manually regenerate the certificates of the ESXi hosts before the deployment of a workload domain.
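The 2:1 figure in VCF-ESX-REQD-CFG-001 comes from spreading the same allocated vCPUs across one fewer host. The following Python sketch makes that arithmetic explicit; the host, core, and vCPU counts are illustrative only and not sizing guidance.

# Minimal sketch: show how the CPU overcommitment ratio changes when one host
# in a cluster is unavailable. Host counts and core counts are illustrative.
def overcommit_ratio(total_vcpus: int, hosts: int, cores_per_host: int, hosts_down: int = 0) -> float:
    available_cores = (hosts - hosts_down) * cores_per_host
    return total_vcpus / available_cores

if __name__ == "__main__":
    HOSTS, CORES, VCPUS = 4, 32, 192  # 4 hosts x 32 cores, 192 allocated vCPUs
    print(f"all hosts up:  {overcommit_ratio(VCPUS, HOSTS, CORES):.2f}:1")
    print(f"one host down: {overcommit_ratio(VCPUS, HOSTS, CORES, hosts_down=1):.2f}:1")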


Table 15-14. Design Recommendations for ESXi Server Hardware

VCF-ESX-RCMD-CFG-001: Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.
  Justification: Your management domain is fully compatible with vSAN at deployment. For information about the models of physical servers that are vSAN-ready, see the vSAN Compatibility Guide for vSAN ReadyNodes.
  Implication: Hardware choices might be limited. If you plan to use a server configuration that is not a vSAN ReadyNode, your CPU, disks, and I/O modules must be listed on the VMware Compatibility Guide under CPU Series and vSAN Compatibility List aligned to the ESXi version specified in the VMware Cloud Foundation 5.0 Release Notes.

VCF-ESX-RCMD-CFG-002: Allocate hosts with uniform configuration across the default management vSphere cluster.
  Justification: A balanced cluster provides predictable performance even during hardware failures and minimizes the impact of resynchronization or rebuild operations on performance.
  Implication: You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes on a per-cluster basis.

VCF-ESX-RCMD-CFG-003: When sizing CPU, do not consider multithreading technology and associated performance gains.
  Justification: Although multithreading technologies increase CPU performance, the performance gain depends on the running workloads and differs from one case to another.
  Implication: Because you must provide more physical CPU cores, costs increase and hardware choices become limited.

VCF-ESX-RCMD-CFG-004: Install and configure all ESXi hosts in the default management cluster to boot using a 128 GB device or greater.
  Justification: Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.
  Implication: None.

VCF-ESX-RCMD-CFG-005: Use the default configuration for the scratch partition on all ESXi hosts in the default management cluster.
  Justification: If a failure in the vSAN cluster occurs, the ESXi hosts remain responsive and log information is still accessible. It is not possible to use the vSAN datastore for the scratch partition.
  Implication: None.

VCF-ESX-RCMD-CFG-006: For workloads running in the default management cluster, save the virtual machine swap file at the default location.
  Justification: Simplifies the configuration process.
  Implication: Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

VCF-ESX-RCMD-NET-001: Place the ESXi hosts for each VI workload domain on separate VLAN-backed management network segments.
  Justification: Achieves physical VLAN security separation between ESXi hosts in different VI workload domains.
  Implication: A new workload domain management VLAN and subnet are required for each new VI workload domain.

VCF-ESX-RCMD-SEC-001: Deactivate SSH access on all ESXi hosts in the management domain by having the SSH service stopped and using the default SSH service policy "Start and stop manually".
  Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices. Disabling SSH access reduces the risk of security attacks on the ESXi hosts through the SSH interface.
  Implication: You must activate SSH access manually for troubleshooting or support activities because VMware Cloud Foundation deactivates SSH on ESXi hosts after workload domain deployment.

VCF-ESX-RCMD-SEC-002: Set the advanced setting UserVars.SuppressShellWarning to 0 across all ESXi hosts in the management domain.
  Justification: Ensures compliance with the vSphere Security Configuration Guide and with security best practices; keeps the warning message that appears in the vSphere Client whenever SSH access is activated on an ESXi host.
  Implication: You must suppress SSH enablement warning messages manually when performing troubleshooting or support activities.

vCenter Server Design Elements for VMware Cloud


Foundation
Use this list of requirements and recommendations for reference related to the vCenter Server
configuration in an environment with a single or multiple VMware Cloud Foundation instances.
The design elements also consider if an instance contains a single or multiple availability zones.
The vCenter Server design also includes the configuration of the default management cluster.

The configuration tasks for most design requirements and recommendations are automated
in VMware Cloud Foundation. You must perform the configuration manually only for a limited
number of decisions as noted in the design implication.

For full design details, see vCenter Server Design for VMware Cloud Foundation.


vCenter Server Design Elements


Table 15-15. vCenter Server Design Requirements for VMware Cloud Foundation

VCF-VCS-REQD-CFG-001: Deploy a dedicated vCenter Server appliance for the management domain of the VMware Cloud Foundation instance.
  Justification:
  - Isolates vCenter Server failures to management or customer workloads.
  - Isolates vCenter Server operations between management and customers.
  - Supports a scalable cluster design where you can reuse the management components as more customer workloads are added to the SDDC.
  - Simplifies capacity planning for customer workloads because you do not consider management workloads for the VI workload domain vCenter Server.
  - Improves the ability to upgrade the vSphere environment and related components by enabling explicit separation of maintenance windows: management workloads remain available while you are upgrading the tenant workloads, and customer workloads remain available while you are upgrading the management nodes.
  - Supports clear separation of roles and responsibilities to ensure that only administrators with granted authorization can control the management workloads.
  - Facilitates quicker troubleshooting and problem resolution.
  - Simplifies disaster recovery operations by supporting a clear separation between recovery of the management components and tenant workloads.
  - Provides isolation of potential network issues by introducing network separation of the clusters in the SDDC.
  Implication: Requires a separate license for the vCenter Server instance in the management domain.

VCF-VCS-REQD-NET-001: Place all workload domain vCenter Server appliances on the management VLAN network segment of the management domain.
  Justification: Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX management components.
  Implication: None.

Table 15-16. vCenter Server Design Recommendations for VMware Cloud Foundation

VCF-VCS-RCMD-CFG-001: Deploy an appropriately sized vCenter Server appliance for each workload domain.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the Cloud Builder API and the SDDC Manager API.

VCF-VCS-RCMD-CFG-002: Deploy a vCenter Server appliance with the appropriate storage size.
  Justification: Ensures resource availability and usage efficiency per workload domain.
  Implication: The default size for a management domain is Small and for VI workload domains is Medium. To override these values, you must use the API.

VCF-VCS-RCMD-CFG-003: Protect workload domain vCenter Server appliances by using vSphere HA.
  Justification: vSphere HA is the only supported method to protect vCenter Server availability in VMware Cloud Foundation.
  Implication: vCenter Server becomes unavailable during a vSphere HA failover.

VCF-VCS-RCMD-CFG-004: In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.
  Justification: vCenter Server is the management and control plane for physical and virtual infrastructure. In a vSphere HA event, to ensure that the rest of the SDDC management stack comes up faultlessly, the workload domain vCenter Server must be available first, before the other management components come online.
  Implication: If the restart priority for another virtual machine is set to highest, the connectivity delay for the management components will be longer.


Table 15-17. vCenter Server Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-VCS-RCMD-CFG-005: Add the vCenter Server appliance to the virtual machine group for the first availability zone.
  Justification: Ensures that, by default, the vCenter Server appliance is powered on a host in the first availability zone.
  Implication: None.

vCenter Single Sign-On Design Elements


Table 15-18. Design Requirements for the Multiple vCenter Server Instance - Single vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-STD-001: Join all vCenter Server instances within a VMware Cloud Foundation instance to a single vCenter Single Sign-On domain.
  Justification: When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components.
  Implication: Only one vCenter Single Sign-On domain exists. The number of linked vCenter Server instances in the same vCenter Single Sign-On domain is limited to 15 instances. Because each workload domain uses a dedicated vCenter Server instance, you can deploy up to 15 domains within each VMware Cloud Foundation instance.

VCF-VCS-REQD-SSO-STD-002: Create a ring topology between the vCenter Server instances within the VMware Cloud Foundation instance.
  Justification: By default, one vCenter Server instance replicates only with another vCenter Server instance. This setup creates a single point of failure for replication. A ring topology ensures that each vCenter Server instance has two replication partners and removes any single point of failure.
  Implication: None.


Table 15-19. Design Requirements for the Multiple vCenter Server Instance - Multiple vCenter Single Sign-On Domain Topology for VMware Cloud Foundation

VCF-VCS-REQD-SSO-ISO-001: Create all vCenter Server instances within a VMware Cloud Foundation instance in their own unique vCenter Single Sign-On domains.
  Justification: Enables isolation at the vCenter Single Sign-On domain layer for increased security separation; supports up to 25 workload domains.
  Implication: Each vCenter Server instance is managed through its own pane of glass using a different set of administrative credentials. You must manage password rotation for each vCenter Single Sign-On domain separately.

vSphere Cluster Design Elements for VMware Cloud


Foundation
Use this list of requirements and recommendations for reference related to the vSphere cluster
configuration in an environment with a single or multiple VMware Cloud Foundation instances.
The design elements also consider if an instance contains a single or multiple availability zones.

For full design details, see Logical vSphere Cluster Design for VMware Cloud Foundation.

Table 15-20. vSphere Cluster Design Requirements for VMware Cloud Foundation

VCF-CLS-REQD-CFG-001: Create a cluster in each workload domain for the initial set of ESXi hosts.
  Justification: Simplifies configuration by isolating management from customer workloads; ensures that customer workloads have no impact on the management stack.
  Implication: Management of multiple clusters and vCenter Server instances increases operational overhead.

VCF-CLS-REQD-CFG-002: Allocate a minimum number of ESXi hosts according to the cluster type being deployed.
  Justification: Ensures the correct level of redundancy to protect against host failure in the cluster.
  Implication: To support redundancy, you must allocate additional ESXi host resources.

VCF-CLS-REQD-CFG-003: If using a consolidated workload domain, configure the following vSphere resource pools to control resource usage by management and customer workloads: cluster-name-rp-sddc-mgmt, cluster-name-rp-sddc-edge, cluster-name-rp-user-edge, and cluster-name-rp-user-vm.
  Justification: Ensures sufficient resources for the management components.
  Implication: You must manage the vSphere resource pool settings over time.

VCF-CLS-REQD-CFG-004: Configure the vSAN network gateway IP address as the isolation address for the cluster.
  Justification: Allows vSphere HA to validate whether a host is isolated from the vSAN network.
  Implication: None.

VCF-CLS-REQD-CFG-005: Set the advanced cluster setting das.usedefaultisolationaddress to false.
  Justification: Ensures that vSphere HA uses the manual isolation addresses instead of the default management network gateway address.
  Implication: None.

VCF-CLS-REQD-LCM-001: Use baselines as the life cycle management method for the management domain.
  Justification: vSphere Lifecycle Manager images are not supported for the management domain.
  Implication: You must manage the life cycle of firmware and vendor add-ons manually.

Table 15-21. vSphere Cluster Design Requirements for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-CLS-REQD-CFG-006: Configure the vSAN network gateway IP addresses for the second availability zone as additional isolation addresses for the cluster.
  Justification: Allows vSphere HA to validate whether a host is isolated from the vSAN network for hosts in both availability zones.
  Implication: None.

VCF-CLS-REQD-CFG-007: Enable the Override default gateway for this adapter setting on the vSAN VMkernel adapters on all ESXi hosts.
  Justification: Enables routing the vSAN data traffic through the vSAN network gateway rather than through the management gateway.
  Implication: vSAN networks across availability zones must have a route to each other.

VCF-CLS-REQD-CFG-008: Create a host group for each availability zone and add the ESXi hosts in the zone to the respective group.
  Justification: Makes it easier to manage which virtual machines run in which availability zone.
  Implication: You must create and maintain VM-Host DRS group rules.

VCF-CLS-REQD-LCM-002: Create workload domains selecting baselines as the life cycle management method.
  Justification: vSphere Lifecycle Manager images are not supported for vSAN stretched clusters.
  Implication: You must manage the life cycle of firmware and vendor add-ons manually.

Table 15-22. vSphere Cluster Design Recommendations for VMware Cloud Foundation

VCF-CLS-RCMD-CFG-001: Use vSphere HA to protect all virtual machines against failures.
  Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
  Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be restarted on those hosts in the event of a host outage.

VCF-CLS-RCMD-CFG-002: Set host isolation response to Power Off and restart VMs in vSphere HA.
  Justification: vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.
  Implication: If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.

VCF-CLS-RCMD-CFG-003: Configure admission control for 1 ESXi host failure and percentage-based failover capacity.
  Justification: Using the percentage-based reservation works well in situations where virtual machines have varying and sometimes significant CPU or memory reservations. vSphere automatically calculates the reserved percentage according to the number of ESXi host failures to tolerate and the number of ESXi hosts in the cluster. (A worked example follows this table.)
  Implication: In a cluster of 4 ESXi hosts, the resources of only 3 ESXi hosts are available for use.

VCF-CLS-RCMD-CFG-004: Enable VM Monitoring for each cluster.
  Justification: VM Monitoring provides in-guest protection for most VM workloads. The application or service running on the virtual machine must be capable of restarting successfully after a reboot, or the virtual machine restart is not sufficient.
  Implication: None.

VCF-CLS-RCMD-CFG-005: Set the advanced cluster setting das.iostatsinterval to 0 to deactivate monitoring the storage and network I/O activities of the management appliances.
  Justification: Enables triggering a restart of a management appliance when an OS failure occurs and heartbeats are not received from VMware Tools, instead of waiting additionally for the I/O check to complete.
  Implication: If you want to specifically enable I/O monitoring, configure the das.iostatsinterval advanced setting.

VCF-CLS-RCMD-CFG-006: Enable vSphere DRS on all clusters, using the default fully automated mode with medium threshold.
  Justification: Provides the best trade-off between load balancing and unnecessary migrations with vSphere vMotion.
  Implication: If a vCenter Server outage occurs, the mapping from virtual machines to ESXi hosts might be difficult to determine.

VCF-CLS-RCMD-CFG-007: Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.
  Justification: Supports cluster upgrades without virtual machine downtime.
  Implication: You can enable EVC only if the clusters contain hosts with CPUs from the same vendor. You must enable EVC on the default management domain cluster during bring-up.

VCF-CLS-RCMD-CFG-008: Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.
  Justification: Supports cluster upgrades without virtual machine downtime.
  Implication: None.

VCF-CLS-RCMD-LCM-001: Use images as the life cycle management method for VI workload domains that do not require vSAN stretched clusters.
  Justification: vSphere Lifecycle Manager images simplify the management of firmware and vendor add-ons.
  Implication: You cannot add any stretched cluster to a VI workload domain that uses vSphere Lifecycle Manager images.
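The percentage-based failover capacity mentioned in VCF-CLS-RCMD-CFG-003, and the stretched-cluster variant in the next table, reduce to simple arithmetic. The following Python sketch shows the reserved percentage for the two cases described in this guidance; the host counts are the examples used in the implications above.

# Minimal sketch: calculate the percentage-based failover capacity that vSphere
# HA admission control reserves for a given number of tolerated host failures.
import math

def reserved_capacity_percent(total_hosts: int, host_failures_to_tolerate: int) -> int:
    return math.ceil(host_failures_to_tolerate / total_hosts * 100)

if __name__ == "__main__":
    print(f"4-host cluster, tolerate 1 host failure:     {reserved_capacity_percent(4, 1)}% reserved")
    print(f"8-host stretched cluster, tolerate one zone: {reserved_capacity_percent(8, 4)}% reserved")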


Table 15-23. vSphere Cluster Design Recommendations for vSAN Stretched Clusters with VMware Cloud Foundation

VCF-CLS-RCMD-CFG-009: Increase the admission control percentage to half of the ESXi hosts in the cluster.
  Justification: Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs.
  Implication: In a cluster of 8 ESXi hosts, the resources of only 4 ESXi hosts are available for use. If you add more ESXi hosts to the default management cluster, add them in pairs, one per availability zone.

VCF-CLS-RCMD-CFG-010: Create a virtual machine group for each availability zone and add the VMs in the zone to the respective group.
  Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
  Implication: You must add virtual machines to the allocated group manually.

VCF-CLS-RCMD-CFG-011: Create a should-run-on-hosts-in-group VM-Host affinity rule to run each group of virtual machines on the respective group of hosts in the same availability zone.
  Justification: Ensures that virtual machines are located only in the assigned availability zone to avoid unnecessary vSphere vMotion migrations.
  Implication: You must manually create the rules.

vSphere Networking Design Elements for VMware Cloud


Foundation
Use this list of recommendations for reference related to the configuration of the vSphere
Distributed Switch instances and VMkernel adapters in a VMware Cloud Foundation environment.

For full design details, see vSphere Networking Design for VMware Cloud Foundation.

Table 15-24. vSphere Networking Design Recommendations for VMware Cloud Foundation

VCF-VDS-RCMD-CFG-001: Use a single vSphere Distributed Switch per cluster.
  Justification: Reduces the complexity of the network design; reduces the size of the fault domain.
  Implication: Increases the number of vSphere Distributed Switches that must be managed.

VCF-VDS-RCMD-CFG-002: Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.
  Justification: Supports the MTU size required by system traffic types; improves traffic throughput.
  Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

VCF-VDS-RCMD-DPG-001: Use ephemeral port binding for the management port group.
  Justification: Ephemeral port binding provides the option to recover the vCenter Server instance that is managing the distributed switch.
  Implication: Port-level permissions and controls are lost across power cycles, and no historical context is saved.

VCF-VDS-RCMD-DPG-002: Use static port binding for all non-management port groups.
  Justification: Static binding ensures that a virtual machine connects to the same port on the vSphere Distributed Switch, which allows for historical data and port-level monitoring.
  Implication: None.

VCF-VDS-RCMD-DPG-003: Use the Route based on physical NIC load teaming algorithm for the management port group.
  Justification: Reduces the complexity of the network design and increases resiliency and performance.
  Implication: None.

VCF-VDS-RCMD-DPG-004: Use the Route based on physical NIC load teaming algorithm for the vSphere vMotion port group.
  Justification: Reduces the complexity of the network design and increases resiliency and performance.
  Implication: None.

VCF-VDS-RCMD-NIO-001: Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.
  Justification: Increases resiliency and performance of the network. (A share-to-bandwidth example follows this table.)
  Implication: If configured incorrectly, Network I/O Control might impact network performance for critical traffic types.

VCF-VDS-RCMD-NIO-002: Set the share value for management traffic to Normal.
  Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
  Implication: None.

VCF-VDS-RCMD-NIO-003: Set the share value for vSphere vMotion traffic to Low.
  Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
  Implication: During times of network contention, vMotion takes longer than usual to complete.

VCF-VDS-RCMD-NIO-004: Set the share value for virtual machines to High.
  Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
  Implication: None.

VCF-VDS-RCMD-NIO-005: Set the share value for vSAN traffic to High.
  Justification: During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance.
  Implication: None.

VCF-VDS-RCMD-NIO-006: Set the share value for other traffic types to Low.
  Justification: By default, VMware Cloud Foundation does not use other traffic types, like vSphere FT traffic. Hence, these traffic types can be set to the lowest priority.
  Implication: None.
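Under contention, Network I/O Control divides uplink bandwidth in proportion to the configured shares. The following Python sketch translates the share values recommended above into a minimum bandwidth per traffic type on a 25-GbE uplink; the share numbers (High = 100, Normal = 50, Low = 25) are the standard values behind those settings, and the uplink speed and traffic mix are illustrative only.

# Minimal sketch: convert Network I/O Control shares into the minimum bandwidth
# each traffic type receives when a 25-GbE uplink is saturated.
UPLINK_GBPS = 25
SHARES = {
    "management": 50,                 # Normal
    "vSphere vMotion": 25,            # Low
    "virtual machines": 100,          # High
    "vSAN": 100,                      # High
    "other (e.g., vSphere FT)": 25,   # Low
}

if __name__ == "__main__":
    total_shares = sum(SHARES.values())
    for traffic_type, share in SHARES.items():
        guaranteed = UPLINK_GBPS * share / total_shares
        print(f"{traffic_type:<26} {share:>3} shares -> {guaranteed:.1f} Gbit/s minimum under contention")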

NSX Design Elements for VMware Cloud Foundation


Use this list of requirements and recommendations for reference related to the configuration of
NSX in an environment with a single or multiple VMware Cloud Foundation instances. The design
also considers if an instance contains a single or multiple availability zones.

For full design details, see Chapter 8 NSX Design for VMware Cloud Foundation.


NSX Manager Design Elements


Table 15-25. NSX Manager Design Requirements for VMware Cloud Foundation

VCF-NSX-LM-REQD-CFG-001: Place the appliances of the NSX Manager cluster on the management VLAN network in the management domain.
  Justification: Provides a direct secure connection to the ESXi hosts and vCenter Server for edge node management and distributed network services; reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX.
  Implication: None.

VCF-NSX-LM-REQD-CFG-002: Deploy three NSX Manager nodes in the default vSphere cluster in the management domain for configuring and managing the network services for the workload domain.
  Justification: Supports high availability of the NSX Manager cluster.
  Implication: You must have sufficient resources in the default cluster of the management domain to run three NSX Manager nodes.

Table 15-26. NSX Manager Design Recommendations for VMware Cloud Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-LM-RCMD- Deploy appropriately sized Ensures resource The default size for
CFG-001 nodes in the NSX Manager availability and usage a management domain
cluster for the workload efficiency per workload is Medium, and for VI
domain. domain. workload domains is Large.

VCF-NSX-LM-RCMD- Create a virtual IP (VIP) Provides high availability of n The VIP address
CFG-002 address for the NSX the user interface and API feature provides high
Manager cluster for the of NSX Manager. availability only. It
workload domain. does not load-balance
requests across the
cluster.
n When using the VIP
address feature, all NSX
Manager nodes must
be deployed on the
same Layer 2 network.

VMware, Inc. 211


VMware Cloud Foundation Design Guide

Table 15-26. NSX Manager Design Recommendations for VMware Cloud Foundation (continued)

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-LM-RCMD- Apply VM-VM anti-affinity Keeps the NSX Manager You must allocate at
CFG-003 rules in vSphere Distributed appliances running on least four physical hosts
Resource Scheduler different ESXi hosts for so that the three
(vSphere DRS) to the NSX high availability. NSX Manager appliances
Manager appliances. continue running if an ESXi
host failure occurs.

VCF-NSX-LM-RCMD- In vSphere HA, set the n NSX Manager If the restart priority
CFG-004 restart priority policy implements the control for another management
for each NSX Manager plane for virtual appliance is set to highest,
appliance to high. network segments. the connectivity delay for
vSphere HA restarts management appliances
the NSX Manager will be longer.
appliances first so that
other virtual machines
that are being powered
on or migrated by using
vSphere vMotion while
the control plane is
offline lose connectivity
only until the control
plane quorum is re-
established.
n Setting the restart
priority to high reserves
the highest priority for
flexibility for adding
services that must be
started before NSX
Manager.
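
For VCF-NSX-LM-RCMD-CFG-002, the cluster virtual IP can also be set or verified directly against the NSX Manager API. The following Python sketch is illustrative only; the manager FQDN, credentials, and VIP address are placeholder assumptions, and in a VMware Cloud Foundation deployment this configuration is normally performed by SDDC Manager.

    # Minimal sketch: set and read the NSX Manager cluster virtual IP (placeholder values).
    import requests

    NSX = "https://sfo-m01-nsx01a.example.local"      # any one NSX Manager node
    AUTH = ("admin", "********")

    # Assign the cluster VIP; the VIP and all manager nodes must share the same Layer 2 network.
    r = requests.post(f"{NSX}/api/v1/cluster/api-virtual-ip",
                      params={"action": "set_virtual_ip", "ip_address": "172.16.11.65"},
                      auth=AUTH, verify=False)        # lab only; validate certificates in production
    r.raise_for_status()

    # Confirm the VIP assignment.
    print(requests.get(f"{NSX}/api/v1/cluster/api-virtual-ip", auth=AUTH, verify=False).json())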

Table 15-27. NSX Manager Design Recommendations for Stretched Clusters in VMware Cloud
Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-LM-RCMD- Add the NSX Manager Ensures that, by default, Done automatically by
CFG-006 appliances to the virtual the NSX Manager VMware Cloud Foundation
machine group for the first appliances are powered when configuring a vSAN
availability zone. on a host in the primary stretched cluster.
availability zone.
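
The VM group placement in VCF-NSX-LM-RCMD-CFG-006 is handled automatically by VMware Cloud Foundation when the cluster is stretched, but group membership can be verified or corrected with pyVmomi. The sketch below is a rough illustration under stated assumptions; the cluster name, VM group name, and appliance names are placeholders.

    # Minimal pyVmomi sketch: add appliances to an existing DRS VM group (placeholder names).
    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                      pwd="********", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find(vimtype, name):
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.Destroy()

    cluster = find(vim.ClusterComputeResource, "sfo-m01-cl01")
    vms = [find(vim.VirtualMachine, n) for n in
           ("sfo-m01-nsx01a", "sfo-m01-nsx01b", "sfo-m01-nsx01c")]

    # Locate the VM group for the first availability zone and append the NSX Manager appliances.
    group = next(g for g in cluster.configurationEx.group
                 if isinstance(g, vim.cluster.VmGroup) and g.name == "az1-vm-group")
    group.vm.extend(v for v in vms if v not in group.vm)

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[vim.cluster.GroupSpec(operation="edit", info=group)])
    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
    print("DRS group update task:", task.info.key)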

NSX Global Manager Design Elements


You must manually perform the configuration tasks for the NSX Global Manager design
requirements and recommendations.


Table 15-28. NSX Global Manager Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-GM-REQD- Place the appliances of Reduces the number of None.


CFG-001 the NSX Global Manager required VLANs because
cluster on the management a single VLAN can be
VLAN network in each allocated to both vCenter
VMware Cloud Foundation Server and NSX.
instance.

Table 15-29. NSX Global Manager Design Recommendations for VMware Cloud Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-GM-RCMD- Deploy three NSX Global Provides high availability You must have sufficient
CFG-001 Manager nodes for the for the NSX Global resources in the default
workload domain to Manager cluster. cluster of the management
support NSX Federation domain to run three NSX
across VMware Cloud Global Manager nodes.
Foundation instances.

VCF-NSX-GM-RCMD- Deploy appropriately sized Ensures resource The recommended size


CFG-002 nodes in the NSX Global availability and usage for a management domain
Manager cluster for the efficiency per workload is Medium and for VI
workload domain. domain. workload domains is Large.

VCF-NSX-GM-RCMD- Create a virtual IP (VIP) Provides high availability of n The VIP address
CFG-003 address for the NSX Global the user interface and API feature provides high
Manager cluster for the of NSX Global Manager. availability only. It
workload domain. does not load-balance
requests across the
cluster.
n When using the VIP
address feature, all NSX
Global Manager nodes
must be deployed on
the same Layer 2
network.

VCF-NSX-GM-RCMD- Apply VM-VM anti-affinity Keeps the NSX Global You must allocate at
CFG-004 rules in vSphere DRS to Manager appliances least four physical hosts
the NSX Global Manager running on different ESXi so that the three
appliances. hosts for high availability. NSX Manager appliances
continue running if an ESXi
host failure occurs.


Table 15-29. NSX Global Manager Design Recommendations for VMware Cloud Foundation
(continued)

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-GM-RCMD- In vSphere HA, set the n NSX Global Manager n Management of NSX
CFG-005 restart priority policy for implements the global components will
each NSX Global Manager management plane for be unavailable until the
appliance to medium. global segments and NSX Global Manager
firewalls. virtual machines restart.
n The NSX Global
NSX Global Manager is
Manager cluster is
not required for control
deployed in the
plane and data plane
management domain,
connectivity.
where the total number
n Setting the restart
of virtual machines
priority to medium
is limited and where
reserves the high
it competes with
priority for services that
other management
impact the NSX control
components for restart
or data planes.
priority.

VCF-NSX-GM-RCMD- Deploy an additional NSX Enables recoverability of Requires additional NSX


CFG-006 Global Manager Cluster in NSX Global Manager in Global Manager nodes in
the second VMware Cloud the second VMware Cloud the second VMware Cloud
Foundation instance. Foundation instance if a Foundation instance.
failure in the first VMware
Cloud Foundation instance
occurs.

VCF-NSX-GM-RCMD- Set the NSX Global Enables recoverability of Must be done manually.
CFG-007 Manager cluster in the NSX Global Manager in
second VMware Cloud the second VMware Cloud
Foundation instance as Foundation instance if a
standby for the workload failure in the first instance
domain. occurs.

VCF-NSX-GM-RCMD- Establish an operational Ensures secured The administrator must


SEC-001 practice to capture and connectivity between the establish and follow an
update the thumbprint of NSX Manager instances. operational practice by
the NSX Local Manager Each certificate has its using a runbook or
certificate on NSX Global own unique thumbprint. automated process to
Manager every time the NSX Global Manager stores ensure that the thumbprint
certificate is updated by the unique thumbprint of up-to-date.
using SDDC Manager. the NSX Local Manager
instances for enhanced
security.
If an authentication failure
between NSX Global
Manager and NSX Local
Manager occurs, objects
that are created from NSX
Global Manager will not be
propagated on to the SDN.
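
For VCF-NSX-GM-RCMD-SEC-001, the thumbprint of an NSX Local Manager certificate can be captured in a repeatable way as part of the runbook. A minimal Python sketch using only the standard library follows; the host name is a placeholder assumption, and updating the stored thumbprint on NSX Global Manager remains a separate manual or API-driven step.

    # Minimal sketch: capture the SHA-256 thumbprint of the NSX Local Manager certificate.
    import hashlib
    import ssl

    LOCAL_MANAGER = "sfo-m01-nsx01.example.local"     # placeholder FQDN

    pem = ssl.get_server_certificate((LOCAL_MANAGER, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)
    thumbprint = hashlib.sha256(der).hexdigest()
    print(f"{LOCAL_MANAGER} SHA-256 thumbprint: {thumbprint}")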


Table 15-30. NSX Global Manager Design Recommendations for Stretched Clusters in VMware
Cloud Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-GM-RCMD- Add the NSX Global Ensures that, by default, Done automatically by
CFG-008 Manager appliances to the the NSX Global Manager VMware Cloud Foundation
virtual machine group for appliances are powered when stretching a cluster.
the first availability zone. on a host in the primary
availability zone.

NSX Edge Design Elements


Table 15-31. NSX Edge Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-EDGE-REQD- Connect the management Provides connection from None.


CFG-001 interface of each NSX Edge the NSX Manager cluster to
node to the management the NSX Edge.
VLAN.

VCF-NSX-EDGE-REQD- n Connect the fp-eth0 n Because VLAN trunk None.


CFG-002 interface of each NSX port groups pass
Edge appliance to a traffic for all VLANs,
VLAN trunk port group VLAN tagging can
pinned to physical NIC occur in the NSX
0 of the host, with Edge node itself for
the ability to failover to easy post-deployment
physical NIC 1. configuration.
n Connect the fp-eth1 n By using two separate
interface of each NSX VLAN trunk port
Edge appliance to a groups, you can direct
VLAN trunk port group traffic from the edge
pinned to physical NIC node to a particular
1 of the host, with the host network interface
ability to failover to and top of rack switch
physical NIC 0. as needed.
n Leave the fp-eth2 n In the event of failure of
interface of each NSX the top of rack switch,
Edge appliance unused. the VLAN trunk port
group will fail over to
the other physical NIC,
ensuring that both fp-
eth0 and fp-eth1 remain
available.


Table 15-31. NSX Edge Design Requirements for VMware Cloud Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-EDGE-REQD- Use a dedicated VLAN A dedicated edge overlay n You must have routing
CFG-003 for edge overlay that is network provides support between the VLANs for
different from the host for edge mobility in edge overlay and host
overlay VLAN. support of advanced overlay.
deployments such as n You must allocate
multiple availability zones another VLAN in
or multi-rack clusters. the data center
infrastructure for edge
overlay.

VCF-NSX-EDGE-REQD- Create one uplink profile n An NSX Edge node None.


CFG-004 for the edge nodes with that uses a single N-
three teaming policies. VDS can have only one
n Default teaming policy uplink profile.
of load balance source n For increased resiliency
with both active uplinks and performance,
uplink1 and uplink2. supports the
n Named teaming policy concurrent use of both
of failover order edge uplinks through
with a single active both physical NICs on
uplink uplink1 without the ESXi hosts.
standby uplinks. n The default teaming
n Named teaming policy policy increases overlay
of failover order performance and
with a single active availability by using
uplink uplink2 without multiple TEPs, and
standby uplinks. balancing of overlay
traffic.
n By using named
teaming policies, you
can connect an edge
uplink to a specific host
uplink and from there
to a specific top of
rack switch in the data
center.
n Enables ECMP because
the NSX Edge nodes
can uplink to the
physical network over
two different VLANs.
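
The uplink profile described in VCF-NSX-EDGE-REQD-CFG-004 is created by VMware Cloud Foundation during edge cluster deployment. For reference, an equivalent object can be expressed against the NSX Manager API roughly as in the following Python sketch; the profile name, transport VLAN, and teaming names are placeholder assumptions and the payload is illustrative rather than an exact VCF-generated configuration.

    # Illustrative sketch: an edge uplink profile with one default and two named teaming policies.
    import requests

    NSX = "https://sfo-m01-nsx01.example.local"
    AUTH = ("admin", "********")

    profile = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "sfo-m01-edge-uplink-profile",          # placeholder name
        "transport_vlan": 2083,                                  # example edge overlay VLAN
        "teaming": {                                             # default policy: both uplinks active
            "policy": "LOADBALANCE_SRCID",
            "active_list": [{"uplink_name": "uplink1", "uplink_type": "PNIC"},
                            {"uplink_name": "uplink2", "uplink_type": "PNIC"}],
        },
        "named_teamings": [                                      # pin a VLAN segment to one uplink
            {"name": "uplink1-only", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink1", "uplink_type": "PNIC"}]},
            {"name": "uplink2-only", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink2", "uplink_type": "PNIC"}]},
        ],
    }

    r = requests.post(f"{NSX}/api/v1/host-switch-profiles", json=profile,
                      auth=AUTH, verify=False)                   # lab only
    r.raise_for_status()
    print("Created uplink profile:", r.json().get("id"))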


Table 15-32. NSX Edge Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-EDGE-REQD- Allocate a separate VLAN The RTEP network must be You must allocate another
CFG-005 for edge RTEP overlay that on a VLAN that is different VLAN in the data center
is different from the edge from the edge overlay infrastructure.
overlay VLAN. VLAN. This is an NSX
requirement that provides
support for configuring
different MTU size per
network.

Table 15-33. NSX Edge Design Recommendations for VMware Cloud Foundation

Recommendation ID Design Recommendation Justification Implications

VCF-NSX-EDGE-RCMD- Use appropriately sized Ensures resource You must provide sufficient
CFG-001 NSX Edge virtual availability and usage compute resources to
appliances. efficiency per workload support the chosen
domain. appliance size.

VCF-NSX-EDGE-RCMD- Deploy the NSX Edge n Simplifies the Workloads and NSX Edges
CFG-002 virtual appliances to the configuration and share the same compute
default vSphere cluster minimizes the number resources.
of the workload domain, of ESXi hosts required
sharing the cluster between for initial deployment.
the workloads and the n Keeps the management
edge appliances. components located in
the same domain and
cluster, isolated from
customer workloads.

VCF-NSX-EDGE-RCMD- Deploy two NSX Edge Creates the NSX Edge None.
CFG-003 appliances in an edge cluster for satisfying the
cluster in the default requirements for availability
vSphere cluster of the and scale.
workload domain.

VCF-NSX-EDGE-RCMD- Apply VM-VM anti-affinity Keeps the NSX Edge nodes None.
CFG-004 rules for vSphere DRS to running on different ESXi
the virtual machines of the hosts for high availability.
NSX Edge cluster.


Table 15-33. NSX Edge Design Recommendations for VMware Cloud Foundation (continued)

Recommendation ID Design Recommendation Justification Implications

VCF-NSX-EDGE-RCMD- In vSphere HA, set the n The NSX Edge nodes If the restart priority
CFG-005 restart priority policy for are part of the for another management
each NSX Edge appliance north-south data path appliance is set to highest,
to high. for overlay segments. the connectivity delays
vSphere HA restarts the for management appliances
NSX Edge appliances will be longer.
first so that other
virtual machines that
are being powered on
or migrated by using
vSphere vMotion while
the edge nodes are
offline lose connectivity
only for a short time.
n Setting the restart
priority to high reserves
the highest priority for
future needs.

VCF-NSX-EDGE-RCMD- Configure all edge nodes as Enables the participation of None.


CFG-006 transport nodes. edge nodes in the overlay
network for delivery of
services to the SDDC
management components
such as routing and load
balancing.

VCF-NSX-EDGE-RCMD- Create an NSX n Satisfies the availability None.


CFG-007 Edge cluster with requirements by
the default Bidirectional default.
Forwarding Detection n Edge nodes must
(BFD) configuration remain available to
between the NSX Edge create services such
nodes in the cluster. as NAT, routing to
physical networks, and
load balancing.

VCF-NSX-EDGE-RCMD- Use a single N-VDS in the n Simplifies deployment None.


CFG-008 NSX Edge nodes. of the edge nodes.
n The same N-VDS switch
design can be used
regardless of edge
form factor.
n Supports multiple TEP
interfaces in the edge
node.
n vSphere Distributed
Switch is not supported
in the edge node.


Table 15-34. NSX Edge Design Recommendations for Stretched Clusters in VMware Cloud
Foundation

Recommendation ID Design Recommendation Justification Implications

VCF-NSX-EDGE-RCMD- Add the NSX Edge Ensures that, by default, None.


CFG-009 appliances to the virtual the NSX Edge appliances
machine group for the first are powered on
availability zone. a host in the primary
availability zone.

BGP Routing Design Elements for VMware Cloud Foundation


Table 15-35. BGP Routing Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- To enable ECMP between Supports multiple equal- Additional VLANs are
CFG-001 the Tier-0 gateway and cost routes on the Tier-0 required.
the Layer 3 devices gateway and provides
(ToR switches or upstream more resiliency and better
devices), create two bandwidth use in the
VLANs. network.
The ToR switches or
upstream Layer 3 devices
have an SVI on one of the
two VLANS and each Edge
node in the cluster has an
interface on each VLAN.

VCF-NSX-BGP-REQD- Assign a named teaming Pins the VLAN traffic on None.


CFG-002 policy to the VLAN each segment to its target
segments to the Layer 3 edge node interface. From
device pair. there the traffic is directed
to the host physical NIC
that is connected to the
target top of rack switch.

VCF-NSX-BGP-REQD- Create a VLAN transport Enables the configuration Additional VLAN transport
CFG-003 zone for edge uplink traffic. of VLAN segments on the zones are required if
N-VDS in the edge nodes. the edge nodes are not
connected to the same top
of rack switch pair.


Table 15-35. BGP Routing Design Requirements for VMware Cloud Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Deploy a Tier-1 gateway Creates a two-tier routing A Tier-1 gateway can only
CFG-004 and connect it to the Tier-0 architecture. be connected to a single
gateway. Abstracts the NSX logical Tier-0 gateway.
components which interact In cases where multiple
with the physical data Tier-0 gateways are
center from the logical required, you must create
components which provide multiple Tier-1 gateways.
SDN services.

VCF-NSX-BGP-REQD- Deploy a Tier-1 gateway to Enables stateful services, None.


CFG-005 the NSX Edge cluster. such as load balancers
and NAT, for SDDC
management components.
Because a Tier-1 gateway
always works in active-
standby mode, the
gateway supports stateful
services.
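
Requirements VCF-NSX-BGP-REQD-CFG-004 and VCF-NSX-BGP-REQD-CFG-005 translate into a small number of NSX Policy API objects. The following Python sketch shows the general shape; the gateway and edge cluster identifiers are placeholder assumptions and should be adapted to your environment.

    # Illustrative sketch: create a Tier-1 gateway, attach it to the Tier-0 gateway,
    # and place it on the NSX Edge cluster (placeholder identifiers).
    import requests

    NSX = "https://sfo-m01-nsx01.example.local"
    AUTH = ("admin", "********")
    T0_ID, T1_ID = "sfo-m01-ec01-t0", "sfo-m01-ec01-t1"
    EDGE_CLUSTER_PATH = ("/infra/sites/default/enforcement-points/default/"
                         "edge-clusters/<edge-cluster-uuid>")            # placeholder path

    tier1 = {
        "display_name": T1_ID,
        "tier0_path": f"/infra/tier-0s/{T0_ID}",                 # two-tier routing architecture
        "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_NAT", "TIER1_LB_VIP"],
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/{T1_ID}",
                   json=tier1, auth=AUTH, verify=False).raise_for_status()

    # Assigning locale services to an edge cluster enables stateful services such as NAT
    # and load balancing on the Tier-1 gateway.
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/{T1_ID}/locale-services/default",
                   json={"edge_cluster_path": EDGE_CLUSTER_PATH},
                   auth=AUTH, verify=False).raise_for_status()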


Table 15-36. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Extend the uplink VLANs Because the NSX Edge You must configure a
CFG-006 to the top of rack switches nodes will fail over stretched Layer 2 network
so that the VLANs are between the availability between the availability
stretched between both zones, ensures uplink zones by using physical
availability zones. connectivity to the top network infrastructure.
of rack switches in
both availability zones
regardless of the zone
the NSX Edge nodes are
presently in.

VCF-NSX-BGP-REQD- Provide this SVI Enables the communication You must configure a
CFG-007 configuration on the top of of the NSX Edge nodes to stretched Layer 2 network
the rack switches. the top of rack switches in between the availability
n In the second both availability zones over zones by using the physical
availability zone, the same uplink VLANs. network infrastructure.
configure the top
of rack switches or
upstream Layer 3
devices with an SVI on
each of the two uplink
VLANs.
n Make the top of
rack switch SVI in
both availability zones
part of a common
stretched Layer 2
network between the
availability zones.

VCF-NSX-BGP-REQD- Provide this VLAN Supports multiple equal- n Extra VLANs are
CFG-008 configuration: cost routes on the Tier-0 required.
n Use two VLANs to gateway, and provides n Requires stretching
enable ECMP between more resiliency and better uplink VLANs between
the Tier-0 gateway and bandwidth use in the availability zones
the Layer 3 devices network.
(top of rack switches or
Leaf switches).
n The ToR switches
or upstream Layer 3
devices have an SVI to
one of the two VLANS
and each NSX Edge
node has an interface
to each VLAN.


Table 15-36. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Create an IP prefix list Used in a route map to You must manually create
CFG-009 that permits access to prepend a path to one or an IP prefix list that is
route advertisement by any more autonomous system identical to the default one.
network instead of using (AS-path prepend) for BGP
the default IP prefix list. neighbors in the second
availability zone.

VCF-NSX-BGP-REQD- Create a route map-out n Used for configuring You must manually create
CFG-010 that contains the custom neighbor relationships the route map.
IP prefix list and an AS- with the Layer 3 The two NSX Edge nodes
path prepend value set to devices in the second will route north-south
the Tier-0 local AS added availability zone. traffic through the second
twice. n Ensures that all ingress availability zone only if
traffic passes through the connection to their
the first availability BGP neighbors in the first
zone. availability zone is lost, for
example, if a failure of the
top of the rack switch pair
or in the availability zone
occurs.

VCF-NSX-BGP-REQD- Create an IP prefix list that Used in a route map to You must manually create
CFG-011 permits access to route configure local preference an IP prefix list that is
advertisement by network on learned default-route identical to the default one.
0.0.0.0/0 instead of using for BGP neighbors in the
the default IP prefix list. second availability zone.


Table 15-36. BGP Routing Design Requirements for Stretched Clusters in VMware Cloud
Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Apply a route map-in that n Used for configuring You must manually create
CFG-012 contains the IP prefix list for neighbor relationships the route map.
the default route 0.0.0.0/0 with the Layer 3 The two NSX Edge nodes
and assign a lower local- devices in the second will route north-south
preference, for example, availability zone. traffic through the second
80, to the learned default n Ensures that all egress availability zone only if
route and a lower local- traffic passes through the connection to their
preference, for example, 90, the first availability BGP neighbors in the first
to any routes learned. zone. availability zone is lost, for
example, if a failure of the
top of the rack switch pair
or in the availability zone
occurs.

VCF-NSX-BGP-REQD- Configure the neighbors of Makes the path in and The two NSX Edge nodes
CFG-013 the second availability zone out of the second will route north-south
to use the route maps as In availability zone less traffic through the second
and Out filters respectively. preferred because the AS availability zone only if
path is longer and the the connection to their
local preference is lower. BGP neighbors in the first
As a result, all traffic passes availability zone is lost, for
through the first zone. example, if a failure of the
top of the rack switch pair
or in the availability zone
occurs.
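
Requirements VCF-NSX-BGP-REQD-CFG-009 through VCF-NSX-BGP-REQD-CFG-013 are manual steps. The Python sketch below outlines how the custom prefix list and the outbound route map with AS-path prepend could be expressed through the NSX Policy API; the object identifiers, the local AS number, and the exact entry layout are placeholder assumptions that must be adapted to your environment and NSX version.

    # Illustrative sketch: custom "permit any" prefix list and an outbound route map that
    # prepends the Tier-0 local AS twice for the availability zone 2 BGP neighbors.
    import requests

    NSX = "https://sfo-m01-nsx01.example.local"
    AUTH = ("admin", "********")
    T0 = "sfo-m01-ec01-t0"                                        # placeholder Tier-0 gateway ID
    LOCAL_AS = "65003"                                            # placeholder Tier-0 local AS

    # Prefix list that permits any network, mirroring the default list (created manually).
    # "ANY" (or omitting "network", depending on the NSX version) matches all networks.
    prefix_list = {"display_name": "any-prefix",
                   "prefixes": [{"network": "ANY", "action": "PERMIT"}]}
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/{T0}/prefix-lists/any-prefix",
                   json=prefix_list, auth=AUTH, verify=False).raise_for_status()

    # Outbound route map: AS-path prepend with the local AS added twice.
    route_map_out = {"display_name": "az2-out",
                     "entries": [{"action": "PERMIT",
                                  "prefix_list_matches":
                                      [f"/infra/tier-0s/{T0}/prefix-lists/any-prefix"],
                                  "set": {"as_path_prepend": f"{LOCAL_AS} {LOCAL_AS}"}}]}
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-0s/{T0}/route-maps/az2-out",
                   json=route_map_out, auth=AUTH, verify=False).raise_for_status()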


Table 15-37. BGP Routing Design Requirements for NSX Federation in VMware Cloud
Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Extend the Tier-0 gateway n Supports ECMP north- The Tier-0 gateway
CFG-014 to the second VMware south routing on all deployed in the second
Cloud Foundation instance. nodes in the NSX Edge instance is removed.
cluster.
n Enables support for
cross-instance Tier-1
gateways and cross-
instance network
segments.

VCF-NSX-BGP-REQD- Set the Tier-0 gateway n In NSX Federation, None.


CFG-015 as primary for all a Tier-0 gateway
VMware Cloud Foundation lets egress traffic
instances. from connected Tier-1
gateways only in its
primary locations.
n Local ingress and
egress traffic
is controlled
independently at
the Tier-1 level.
No segments are
provisioned directly to
the Tier-0 gateway.
n A mixture of network
spans (local to
a VMware Cloud
Foundation instance
or spanning multiple
instances) is enabled
without requiring
additional Tier-0
gateways and hence
edge nodes.
n If a failure in a VMware
Cloud Foundation
instance occurs,
the local-instance
networking in the
other instance remains
available without
manual intervention.


Table 15-37. BGP Routing Design Requirements for NSX Federation in VMware Cloud
Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- From the global Tier-0 n Enables the learning None.


CFG-016 gateway, establish BGP and advertising of
neighbor peering to the routes in the
ToR switches connected to second VMware Cloud
the second VMware Cloud Foundation instance.
Foundation instance. n Facilitates a potential
automated failover
of networks from
the first to the
second VMware Cloud
Foundation instance.

VCF-NSX-BGP-REQD- Use a stretched Tier-1 n Enables network span None.


CFG-017 gateway and connect it between the VMware
to the Tier-0 gateway for Cloud Foundation
cross-instance networking. instances because NSX
network segments
follow the span of
the gateway they are
attached to.
n Creates a two-tier
routing architecture.


Table 15-37. BGP Routing Design Requirements for NSX Federation in VMware Cloud
Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-BGP-REQD- Assign the NSX Edge n Enables cross-instance You must manually fail over
CFG-018 cluster in each VMware network span between and fail back the cross-
Cloud Foundation instance the first and instance network from
to the stretched second VMware Cloud the standby NSX Global
Tier-1 gateway. Set Foundation instances. Manager.
the first VMware Cloud Manager.
Foundation instance as ingress and egress
primary and the second traffic for the cross-
instance as secondary. instance network.
n If a VMware Cloud
Foundation instance
failure occurs, enables
deterministic failover of
the Tier-1 traffic flow.
n During the
recovery of the
inaccessible VMware
Cloud Foundation
instance, enables
deterministic failback of
the Tier-1 traffic flow,
preventing unintended
asymmetrical routing.
n Eliminates the need
to use BGP attributes
in the first and
second VMware Cloud
Foundation instances
to influence location
preference and failover.

VCF-NSX-BGP-REQD- Assign the NSX Edge n Enables instance- You can use the
CFG-019 cluster in each VMware specific networks to be service router that is
Cloud Foundation instance isolated to their specific created for the Tier-1
to the local Tier-1 gateway instances. gateway for networking
for that VMware Cloud n Enables deterministic services. However, such
Foundation instance. flow of ingress and configuration is not
egress traffic for required for network
the instance-specific connectivity.
networks.

VCF-NSX-BGP-REQD- Set each local Tier-1 Prevents the need to use None.
CFG-020 gateway only as primary in BGP attributes in primary
that instance. Avoid setting and secondary instances
the gateway as secondary to influence the instance
in the other instances. ingress-egress preference.


Table 15-38. BGP Routing Design Recommendations for VMware Cloud Foundation
Recommendation ID Design Recommendation Justification Implication

VCF-NSX-BGP-RCMD- Deploy an active-active Supports ECMP north-south Active-active Tier-0


CFG-001 Tier-0 gateway. routing on all Edge nodes gateways cannot provide
in the NSX Edge cluster. stateful services such as
NAT.

VCF-NSX-BGP-RCMD- Configure the BGP Keep Provides a balance By using longer timers to
CFG-002 Alive Timer to 4 and Hold between failure detection detect if a router is not
Down Timer to 12 or lower between the top of responding, the data about
between the top of rack rack switches and the such a router remains in
switches and the Tier-0 Tier-0 gateway, and the routing table longer. As
gateway. overburdening the top of a result, the active router
rack switches with keep- continues to send traffic to
alive traffic. a router that is down.
These timers must be
aligned with the data
center fabric design of your
organization.

VCF-NSX-BGP-RCMD- Do not enable Graceful Avoids loss of traffic. None.


CFG-003 Restart between BGP On the Tier-0 gateway,
neighbors. BGP peers from all the
gateways are always
active. On a failover, the
Graceful Restart capability
increases the time a remote
neighbor takes to select an
alternate Tier-0 gateway.
As a result, BFD-based
convergence is delayed.

VCF-NSX-BGP-RCMD- Enable helper mode for Avoids loss of traffic. None.


CFG-004 Graceful Restart mode During a router restart,
between BGP neighbors. helper mode works
with the graceful restart
capability of upstream
routers to maintain the
forwarding table which in
turn will forward packets
to a down neighbor even
after the BGP timers have
expired, which would
otherwise cause loss of
traffic.

VCF-NSX-BGP-RCMD- Enable Inter-SR iBGP In the event that an None.


CFG-005 routing. edge node has all of its
northbound eBGP sessions
down, north-south traffic
will continue to flow by
routing traffic to a different
edge node.


Table 15-38. BGP Routing Design Recommendations for VMware Cloud Foundation (continued)
Recommendation ID Design Recommendation Justification Implication

VCF-NSX-BGP-RCMD- Deploy a Tier-1 gateway Ensures that after a failed None.


CFG-006 in non-preemptive failover NSX Edge transport node
mode. is back online, it does
not take over the gateway
services thus preventing a
short service outage.

VCF-NSX-BGP-RCMD- Enable standby relocation Ensures that if an edge None.


CFG-007 of the Tier-1 gateway. failure occurs, a standby
Tier-1 gateway is created
on another edge node.
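
Recommendations VCF-NSX-BGP-RCMD-CFG-002 through VCF-NSX-BGP-RCMD-CFG-005 map onto the Tier-0 BGP configuration object and its neighbors in the NSX Policy API. A minimal Python sketch follows; the gateway, locale-services, and neighbor identifiers, addresses, and AS numbers are placeholder assumptions.

    # Illustrative sketch: BGP timers, graceful restart helper mode, and inter-SR iBGP
    # on the Tier-0 gateway (placeholder identifiers).
    import requests

    NSX = "https://sfo-m01-nsx01.example.local"
    AUTH = ("admin", "********")
    BGP = f"{NSX}/policy/api/v1/infra/tier-0s/sfo-m01-ec01-t0/locale-services/default/bgp"

    # Gateway-level settings: helper-only graceful restart and inter-SR iBGP routing.
    requests.patch(BGP, json={"graceful_restart_config": {"mode": "HELPER_ONLY"},
                              "inter_sr_ibgp": True},
                   auth=AUTH, verify=False).raise_for_status()

    # Per-neighbor keep-alive and hold-down timers, aligned with the ToR switch configuration.
    for neighbor_id, address in [("tor-a", "172.16.18.1"), ("tor-b", "172.16.19.1")]:
        requests.patch(f"{BGP}/neighbors/{neighbor_id}",
                       json={"neighbor_address": address, "remote_as_num": "65001",
                             "keep_alive_time": 4, "hold_down_time": 12},
                       auth=AUTH, verify=False).raise_for_status()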

Table 15-39. BGP Routing Design Recommendations for NSX Federation in VMware Cloud
Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-BGP-RCMD- Use Tier-1 gateways to Enables a mixture of To control location span,


CFG-008 control the span of network spans (isolated a Tier-1 gateway must
networks and ingress and to a VMware Cloud be assigned to an edge
egress traffic in the Foundation instance cluster and hence has the
VMware Cloud Foundation or spanning multiple Tier-1 SR component. East-
instances. instances) without requiring west traffic between Tier-1
additional Tier-0 gateways gateways with SRs need to
and hence edge nodes. physically traverse an edge
node.

VCF-NSX-BGP-RCMD- Allocate a Tier-1 gateway in n Creates a two-tier None.


CFG-009 each instance for instance- routing architecture.
specific networks and n Enables local-instance
connect it to the stretched networks that are
Tier-0 gateway. not to span between
the VMware Cloud
Foundation instances.
n Guarantees that local-
instance networks
remain available if
a failure occurs in
another VMware Cloud
Foundation instance.


Overlay Design Elements for VMware Cloud Foundation


Table 15-40. Overlay Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-OVERLAY-REQD- Configure all ESXi hosts in Enables distributed routing, None.


CFG-001 the workload domain as logical segments, and
transport nodes in NSX. distributed firewall.

VCF-NSX-OVERLAY-REQD- Configure each ESXi host n Enables the You must configure each
CFG-002 as a transport node participation of ESXi transport node with an
without using transport hosts and the virtual uplink profile individually.
node profiles. machines on them in
NSX overlay and VLAN
networks.
n Transport node profiles
can only be applied
at the cluster
level. Because in
an environment with
multiple availability
zones each availability
zone is connected to a
different set of VLANs,
you cannot use a
transport node profile.

VCF-NSX-OVERLAY-REQD- To provide virtualized n Creates isolated, multi- Requires configuring


CFG-003 network capabilities to tenant broadcast transport networks with an
workloads, use overlay domains across data MTU size of at least 1,600
networks with NSX Edge center fabrics to deploy bytes.
nodes and distributed elastic, logical networks
routing. that span physical
network boundaries.
n Enables advanced
deployment topologies
by introducing Layer
2 abstraction from the
data center networks.

VCF-NSX-OVERLAY-REQD- Create a single overlay n Ensures that overlay All clusters in all workload
CFG-004 transport zone for all segments are domains that share the
overlay traffic across the connected to an same NSX Manager share
workload domain and NSX NSX Edge node for the same transport zone.
Edge nodes. services and north-
south routing.
n Ensures that all
segments are available
to all ESXi hosts
and NSX Edge nodes
configured as transport
nodes.


Table 15-40. Overlay Design Requirements for VMware Cloud Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-OVERLAY-REQD- Create a VLAN transport Ensures that uplink VLAN If VLAN segments are
CFG-005 zone for uplink VLAN traffic segments are configured required on hosts, you
that is applied only to NSX on the NSX Edge transport must create another VLAN
Edge nodes. nodes. transport zone for the host
transport nodes only.

VCF-NSX-OVERLAY-REQD- Create an uplink profile For increased resiliency None.


CFG-006 with a load balance source and performance, supports
teaming policy with two the concurrent use of both
active uplinks for ESXi physical NICs on the ESXi
hosts. hosts that are configured
as transport nodes.

Table 15-41. Overlay Design Recommendations for VMware Cloud Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-OVERLAY-RCMD- Use DHCP to assign IP Required for deployments DHCP server is required for
CFG-001 addresses to the host TEP where a cluster spans the host overlay VLANs.
interfaces. Layer 3 network domains
such as multiple availability
zones and management
clusters that span Layer 3
domains.

VCF-NSX-OVERLAY-RCMD- Use hierarchical two-tier Hierarchical two-tier None.


CFG-002 replication on all overlay replication is more efficient
segments. because it reduces the
number of ESXi hosts
the source ESXi host
must replicate traffic to
if hosts have different
TEP subnets. This is
typically the case with
more than one cluster and
will improve performance in
that scenario.


Application Virtual Network Design Elements for VMware Cloud


Foundation
Table 15-42. Application Virtual Network Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-AVN-REQD- Create one cross-instance Prepares the environment Each NSX segment requires
CFG-001 NSX segment for the for the deployment of a unique IP address space.
components of a vRealize solutions on top of VMware
Suite application or Cloud Foundation, such as
another solution that vRealize Suite, without a
requires mobility between complex physical network
VMware Cloud Foundation configuration.
instances. The components of
the vRealize Suite
application must be
easily portable between
VMware Cloud Foundation
instances without requiring
reconfiguration.

VCF-NSX-AVN-REQD- Create one or more local- Prepares the environment Each NSX segment requires
CFG-002 instance NSX segments for the deployment of a unique IP address space.
for the components of a solutions on top of VMware
vRealize Suite application Cloud Foundation, such as
or another solution that vRealize Suite, without a
are assigned to a specific complex physical network
VMware Cloud Foundation configuration.
instance.
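
Requirements VCF-NSX-AVN-REQD-CFG-001 and VCF-NSX-AVN-REQD-CFG-002 are fulfilled by SDDC Manager when Application Virtual Networks are deployed. For reference, an equivalent overlay segment looks roughly like the following Python sketch against the NSX Policy API; the segment name, subnet, transport zone path, and gateway path are placeholder assumptions.

    # Illustrative sketch: a cross-instance overlay segment attached to the stretched Tier-1
    # gateway, using hierarchical two-tier (MTEP) replication (placeholder values).
    import requests

    NSX = "https://sfo-m01-nsx01.example.local"
    AUTH = ("admin", "********")

    segment = {
        "display_name": "xint-vrealize",                          # placeholder segment name
        "connectivity_path": "/infra/tier-1s/sfo-m01-ec01-t1",    # Tier-1 gateway for the segment
        "transport_zone_path": "/infra/sites/default/enforcement-points/default/"
                               "transport-zones/<overlay-tz-uuid>",
        "subnets": [{"gateway_address": "192.168.11.1/24"}],      # unique IP space per segment
        "replication_mode": "MTEP",                               # hierarchical two-tier replication
    }
    r = requests.patch(f"{NSX}/policy/api/v1/infra/segments/xint-vrealize",
                       json=segment, auth=AUTH, verify=False)     # lab only
    r.raise_for_status()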


Table 15-43. Application Virtual Network Design Requirements for NSX Federation in VMware
Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-AVN-REQD- Extend the cross-instance Enables workload mobility Each NSX segment requires
CFG-003 NSX segment to the without a complex physical a unique IP address space.
second VMware Cloud network configuration.
Foundation instance. The components of
a vRealize Suite
application must be
easily portable between
VMware Cloud Foundation
instances without requiring
reconfiguration.

VCF-NSX-AVN-REQD- In each VMware Cloud Enables workload mobility Each NSX segment requires
CFG-004 Foundation instance, create within a VMware a unique IP address space.
additional local-instance Cloud Foundation instance
NSX segments. without complex physical
network configuration.
Each VMware Cloud
Foundation instance should
have network segments to
support workloads which
are isolated to that VMware
Cloud Foundation instance.

VCF-NSX-AVN-REQD- In each VMware Configures local-instance Requires an individual Tier-1


CFG-005 Cloud Foundation NSX segments at required gateway for local-instance
instance, connect or sites only. segments.
migrate the local-instance
NSX segments to
the corresponding local-
instance Tier-1 gateway.

Table 15-44. Application Virtual Network Design Recommendations for VMware Cloud
Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-NSX-AVN-RCMD- Use overlay-backed NSX n Supports expansion to Using overlay-backed NSX


CFG-001 segments. deployment topologies segments requires routing,
for multiple VMware eBGP recommended,
Cloud Foundation between the data center
instances. fabric and edge nodes.
n Limits the number of
VLANs required for the
data center fabric.


Load Balancing Design Elements for VMware Cloud Foundation


Table 15-45. Load Balancing Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-LB-REQD- Deploy a standalone Provides independence You must add a separate


CFG-001 Tier-1 gateway to support between north-south Tier-1 Tier-1 gateway.
advanced stateful services gateways to support
such as load balancing advanced deployment
for other management scenarios.
components.

VCF-NSX-LB-REQD- When creating load Provides load balancing to You must connect
CFG-002 balancing services for applications connected to the gateway to each
Application Virtual the cross-instance network. network that requires load
Networks, connect the balancing.
standalone Tier-1 gateway
to the cross-instance NSX
segments.

VCF-NSX-LB-REQD- Configure a default static Because the Tier-1 gateway None.


CFG-003 route on the standalone is standalone, it does not
Tier-1 gateway with a next auto-configure its routes.
hop the Tier-1 gateway for
the segment to provide
connectivity to the load
balancer.
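
For VCF-NSX-LB-REQD-CFG-003, the standalone (one-arm) Tier-1 gateway needs a default static route toward the Tier-1 gateway of the segment it is attached to. A minimal Python sketch against the NSX Policy API follows; the gateway identifier and next-hop address are placeholder assumptions.

    # Illustrative sketch: default static route on the standalone load-balancer Tier-1 gateway.
    import requests

    NSX = "https://sfo-m01-nsx01.example.local"
    AUTH = ("admin", "********")
    LB_T1 = "sfo-m01-lb-t1"                                       # placeholder standalone Tier-1 ID

    route = {
        "display_name": "default-route",
        "network": "0.0.0.0/0",
        "next_hops": [{"ip_address": "192.168.11.1",              # gateway of the attached segment
                       "admin_distance": 1}],
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/{LB_T1}/static-routes/default-route",
                   json=route, auth=AUTH, verify=False).raise_for_status()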


Table 15-46. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation

Requirement ID Design Requirement Justification Implication

VCF-NSX-LB-REQD- Deploy a standalone Tier-1 Provides a cold-standby n You must add


CFG-004 gateway in the second non-global service router a separate Tier-1
VMware Cloud Foundation instance for the second gateway.
instance. VMware Cloud Foundation n You must manually
instance to support configure any services
services on the cross- and synchronize them
instance network which between the non-global
require advanced services service router instances
not currently supported as in the first and
NSX global objects. second VMware Cloud
Foundation instances.
n To avoid a network
conflict between the
two VMware Cloud
Foundation instances,
make sure that the
primary and standby
networking services are
not both active at the
same time.

VCF-NSX-LB-REQD- Connect the standalone Provides load balancing to You must connect
CFG-005 Tier-1 gateway in the applications connected to the gateway to each
second VMware Cloud the cross-instance network network that requires load
Foundation instance to in the second VMware balancing.
the cross-instance NSX Cloud Foundation instance.
segment.


Table 15-46. Load Balancing Design Requirements for NSX Federation in VMware Cloud
Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-NSX-LB-REQD- Configure a default static Because the Tier-1 gateway None.


CFG-006 route on the standalone is standalone, it does not
Tier-1 gateway in the autoconfigure its routes.
second VMware Cloud
Foundation instance with
a next hop as the Tier-1
gateway for the segment
it connects with to provide
connectivity to the load
balancers.

VCF-NSX-LB-REQD- Establish a process to Keeps the network service n Because of incorrect


CFG-007 ensure any changes made in the failover load configuration between
on the load balancer balancer instance ready the VMware Cloud
instance in the first VMware for activation if a failure Foundation instances,
Cloud Foundation instance in the first VMware the load balancer
are manually applied to the Cloud Foundation instance service in the second
disconnected load balancer occurs. instance might come
in the second instance. Because network services online with an
are not supported as global invalid or incomplete
objects, you must configure configuration.
them manually in each n If both VMware Cloud
VMware Cloud Foundation Foundation instances
instance. The load balancer are online and active
service in one instance at the same time,
must be connected and a conflict between
active, while the service in services could occur
the other instance must be resulting in a potential
disconnected and inactive. outage.
n The administrator
must establish and
follow an operational
practice by using a
runbook or automated
process to ensure that
configuration changes
are reproduced in
each VMware Cloud
Foundation instance.

SDDC Manager Design Elements for VMware Cloud


Foundation
Use this list of requirements and recommendations for reference related to SDDC Manager in an
environment with a single or multiple VMware Cloud Foundation instances.

For full design details, see Chapter 9 SDDC Manager Design for VMware Cloud Foundation.


Table 15-47. SDDC Manager Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-SDDCMGR-REQD- Deploy an SDDC Manager SDDC Manager is required None.


CFG-001 system in the first to perform VMware Cloud
availability zone of the Foundation capabilities,
management domain. such as provisioning of
VI workload domains,
deployment of solutions,
patching and upgrade, and
others.

VCF-SDDCMGR-REQD- Deploy SDDC Manager with The configuration of None.


CFG-002 its default configuration. SDDC Manager is not
configurable and should
not be changed from its
defaults.

VCF-SDDCMGR-REQD- Place the SDDC Reduces the number None.


CFG-003 Manager appliance on of VLANs. You allocate
the management VLAN a single VLAN to
network segment. vCenter Server, NSX, SDDC
Manager, and other SDDC
management components.

Table 15-48. SDDC Manager Design Recommendations for VMware Cloud Foundation

Recommendation ID Design Recommendation Justification Implication

VCF-SDDCMGR-RCMD- Connect SDDC Manager SDDC Manager must be The rules of your
CFG-001 to the Internet for able to download install organization might not
downloading software and upgrade software permit direct access to the
bundles. bundles for deployment of Internet. In this case, you
VI workload domains and must download software
solutions, and for upgrade bundles for SDDC Manager
from a repository. manually.

VCF-SDDCMGR-RCMD- Configure a network proxy To protect SDDC Manager The proxy must not
CFG-002 to connect SDDC Manager against external attacks use authentication because
to the Internet. from the Internet. SDDC Manager does
not support proxy with
authentication.


Table 15-48. SDDC Manager Design Recommendations for VMware Cloud Foundation
(continued)

Recommendation ID Design Recommendation Justification Implication

VCF-SDDCMGR-RCMD- To check for and Software bundles for Requires the use of a
CFG-003 download software VMware Cloud Foundation VMware Customer Connect
bundles, configure SDDC are stored in a repository user account with access to
Manager with a VMware that is secured with access VMware Cloud Foundation
Customer Connect account controls. licensing.
with VMware Cloud Sites without an Internet licensing.
Foundation entitlement. connection can use the local
upload option instead.

VCF-SDDCMGR-RCMD- Configure SDDC Manager Provides increased security An external certificate


CFG-004 with an external certificate by implementing signed authority, such as Microsoft
authority that is responsible certificate generation and CA, must be locally
for providing signed replacement across the available.
certificates. management components.
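
The bundle download settings in this table are configured in the SDDC Manager UI, but bundle availability can also be checked through the SDDC Manager public API. The following Python sketch is a rough illustration; the endpoints shown (/v1/tokens and /v1/bundles) are assumed from the VMware Cloud Foundation API and should be confirmed against the API reference for your release, and the host name and credentials are placeholders.

    # Rough sketch: authenticate to SDDC Manager and list available software bundles
    # (endpoints assumed from the VMware Cloud Foundation public API; verify for your release).
    import requests

    SDDC_MANAGER = "https://sfo-vcf01.example.local"
    CREDS = {"username": "administrator@vsphere.local", "password": "********"}

    token = requests.post(f"{SDDC_MANAGER}/v1/tokens", json=CREDS,
                          verify=False).json()["accessToken"]     # lab only
    headers = {"Authorization": f"Bearer {token}"}

    bundles = requests.get(f"{SDDC_MANAGER}/v1/bundles", headers=headers,
                           verify=False).json().get("elements", [])
    for b in bundles:
        print(b.get("id"), b.get("type"), b.get("downloadStatus"))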

vRealize Suite Lifecycle Manager Design Elements for


VMware Cloud Foundation
Use this list of requirements and recommendations for reference related to vRealize Suite
Lifecycle Manager in an environment with a single or multiple VMware Cloud Foundation
instances.

For full design details, see Chapter 10 vRealize Suite Lifecycle Manager Design for VMware Cloud
Foundation.


Table 15-49. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud
Foundation

Requirement ID Design Requirement Justification Implication

VCF-vRSLCM- Deploy a vRealize Suite Provides life cycle You must ensure that
REQD-CFG-001 Lifecycle Manager instance management operations for the required resources are
in the management domain vRealize Suite applications and available.
of each VMware Cloud Workspace ONE Access.
Foundation instance to provide
life cycle management for
vRealize Suite and Workspace
ONE Access.

VCF-vRSLCM- Deploy vRealize Suite Lifecycle n Deploys vRealize Suite None.


REQD-CFG-002 Manager by using SDDC Lifecycle Manager in
Manager. VMware Cloud Foundation
mode, which enables the
integration with the SDDC
Manager inventory for
product deployment and
life cycle management of
vRealize Suite components.
n Automatically configures
the standalone Tier-1
gateway required for
load balancing the
clustered Workspace ONE
Access and vRealize Suite
components.

VCF-vRSLCM- Allocate extra 100 GB of n Provides support for None.


REQD-CFG-003 storage to the vRealize Suite vRealize Suite product
Lifecycle Manager appliance binaries (install, upgrade,
for vRealize Suite product and patch) and content
binaries. management.
n SDDC Manager automates
the creation of storage.

VCF-vRSLCM- Place the vRealize Suite Provides a consistent You must use an
REQD-CFG-004 Lifecycle Manager appliance deployment model for implementation in NSX to
on an overlay-backed management applications. support this networking
(recommended) or VLAN- configuration.
backed NSX network segment.

VCF-vRSLCM- Import vRealize Suite product n You can review the validity, When using the API, you must
REQD-CFG-005 licenses to the Locker details, and deployment specify the Locker ID for the
repository for product life cycle usage for the license across license to be used in the JSON
operations. the vRealize Suite products. payload.
n You can reference and
use licenses during product
life cycle operations, such
as deployment and license
replacement.


Table 15-49. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud
Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-vRSLCM- Configure datacenter objects You can deploy and manage You must manage a separate
REQD-ENV-001 in vRealize Suite Lifecycle the integrated vRealize Suite datacenter object for the
Manager for local and components across the SDDC products that are specific to
cross-instance vRealize Suite as a group. each instance.
deployments and assign the
management domain vCenter
Server instance to each data
center.

VCF-vRSLCM- If deploying vRealize Log Supports the deployment of None.


REQD-ENV-002 Insight, create a local-instance an instance of vRealize Log
environment in vRealize Suite Insight.
Lifecycle Manager.

VCF-vRSLCM- If deploying vRealize n Supports deployment and You can manage instance-
REQD-ENV-003 Operations or vRealize management of the specific components, such as
Automation, create a cross- integrated vRealize Suite remote collectors, only in
instance environment in products across VMware an environment that is cross-
vRealize Suite Lifecycle Cloud Foundation instances instance.
Manager as a group.
n Enables the deployment
of instance-specific
components, such as
vRealize Operations remote
collectors. In vRealize Suite
Lifecycle Manager, you
can deploy and manage
vRealize Operations remote
collector objects only
in an environment that
contains the associated
cross-instance components.


Table 15-49. vRealize Suite Lifecycle Manager Design Requirements for VMware Cloud
Foundation (continued)

Requirement ID Design Requirement Justification Implication

VCF-vRSLCM- Use the custom vCenter Server vRealize Suite Lifecycle You must maintain the
REQD-SEC-001 role for vRealize Suite Lifecycle Manager accesses vSphere permissions required by the
Manager that has the minimum with the minimum set of custom role.
privileges required to support permissions that are required
the deployment and upgrade to support the deployment
of vRealize Suite products. and upgrade of vRealize Suite
products.
SDDC Manager automates the
creation of the custom role.

VCF-vRSLCM- Use the service account in n Provides the following n You must maintain the life
REQD-SEC-002 vCenter Server for application- access control features: cycle and availability of the
to-application communication n vRealize Suite service account outside of
from vRealize Suite Lifecycle Lifecycle Manager SDDC Manager password
Manager to vSphere. Assign accesses vSphere with rotation.
global permissions using the the minimum set of
custom role. required permissions.
n You can introduce
improved accountability
in tracking request-
response interactions
between the
components of the
SDDC.
n SDDC Manager automates
the creation of the service
account.

Table 15-50. vRealize Suite Lifecycle Manager Design Requirements for Stretched Clusters in
VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-vRSLCM- For multiple availability zones, Ensures that, by default, If vRealize Suite Lifecycle
REQD-CFG-006 add the vRealize Suite Lifecycle the vRealize Suite Lifecycle Manager is deployed after
Manager appliance to the VM Manager appliance is powered the creation of the stretched
group for the first availability on a host in the first availability management cluster, you must
zone. zone. add the vRealize Suite Lifecycle
Manager appliance to the VM
group manually.


Table 15-51. vRealize Suite Lifecycle Manager Design Requirements for NSX Federation in
VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-vRSLCM- Configure the DNS settings Improves resiliency in the event As you scale from a
REQD-CFG-007 for the vRealize Suite Lifecycle of an outage of external deployment with a single
Manager appliance to use DNS services for a VMware Cloud VMware Cloud Foundation
servers in each instance. Foundation instance. instance to one with multiple
VMware Cloud Foundation
instances, the DNS settings
of the vRealize Suite Lifecycle
Manager appliance must be
updated.

VCF-vRSLCM- Configure the NTP settings for Improves resiliency if an outage As you scale from a
REQD-CFG-008 the vRealize Suite Lifecycle of external services for a deployment with a single
Manager appliance to use NTP VMware Cloud Foundation VMware Cloud Foundation
servers in each VMware Cloud instance occurs. instance to one with multiple
Foundation instance. VMware Cloud Foundation
instances, the NTP settings
on the vRealize Suite Lifecycle
Manager appliance must be
updated.

VCF-vRSLCM- Assign the management Supports the deployment of None.


REQD-ENV-004 domain vCenter Server vRealize Operations remote
instance in the additional collectors in an additional
VMware Cloud Foundation VMware Cloud Foundation
instance to the cross-instance instance.
data center.

Table 15-52. vRealize Suite Lifecycle Manager Design Recommendations for VMware Cloud
Foundation
Recommendation ID Design Recommendation Justification Implication

VCF-vRSLCM- Protect vRealize Suite Lifecycle Supports the availability None.


RCMD-CFG-001 Manager by using vSphere HA. objectives for vRealize Suite
Lifecycle Manager without
requiring manual intervention
during a failure event.

VCF-vRSLCM- Obtain product binaries for n You can upgrade The site must have an Internet
RCMD-LCM-001 install, patch, and upgrade vRealize Suite products connection to use VMware
in vRealize Suite Lifecycle based on their general Customer Connect.
Manager from VMware availability and endpoint Sites without an Internet
Customer Connect. interoperability rather than connection should use the local
being listed as part of upload option instead.
VMware Cloud Foundation
bill of materials (BOM).
n You can deploy and
manage binaries in an
environment that does not
allow access to the Internet
or are dark sites.


Table 15-52. vRealize Suite Lifecycle Manager Design Recommendations for VMware Cloud
Foundation (continued)
Recommendation ID Design Recommendation Justification Implication

VCF-vRSLCM- Use support packs (PSPAKS) Enables the upgrade of None.


RCMD-LCM-002 for vRealize Suite Lifecycle an existing vRealize Suite
Manager to enable upgrading Lifecycle Manager to permit
to later versions of vRealize later versions of vRealize
Suite products. Suite products without an
associated VMware Cloud
Foundation upgrade. See
VMware Knowledge Base
article 88829

VCF-vRSLCM- Enable integration between n Enables authentication to You must deploy and configure
RCMD-SEC-001 vRealize Suite Lifecycle vRealize Suite Lifecycle Workspace ONE Access
Manager and your corporate Manager by using your to establish the integration
identity source by using corporate identity source. between vRealize Suite
the Workspace ONE Access n Enables authorization Lifecycle Manager and your
instance. through the assignment corporate identity sources.
of organization and cloud
services roles to enterprise
users and groups defined
in your corporate identity
source.

VCF-vRSLCM- Create corresponding security Streamlines the management n You must create the
RCMD-SEC-002 groups in your corporate of vRealize Suite Lifecycle security groups outside of
directory services for vRealize Manager roles for users. the SDDC stack.
Suite Lifecycle Manager roles: n You must set the desired
n VCF directory synchronization
n Content Release Manager interval in Workspace
ONE Access to ensure
n Content Developer
that changes are available
within a reasonable period.

Workspace ONE Access Design Elements for VMware Cloud


Foundation
Use this list of requirements and recommendations for reference related to Workspace ONE
Access in an environment with a single or multiple VMware Cloud Foundation instances. The
design elements also consider whether the management domain has a single or multiple
availability zones.

For full design details, see Chapter 11 Workspace ONE Access Design for VMware Cloud
Foundation.


Table 15-53. Workspace ONE Access Design Requirements for VMware Cloud Foundation

Requirement ID Design Requirement Justification Implication

VCF-WSA-REQD-ENV-001 Create a global A global environment None.


environment in vRealize is required by vRealize
Suite Lifecycle Manager to Suite Lifecycle Manager to
support the deployment of deploy Workspace ONE
Workspace ONE Access. Access.

VCF-WSA-REQD-SEC-001 Import certificate authority- n You can reference When using the API, you
signed certificates to and use certificate must specify the Locker ID
the Locker repository authority-signed for the certificate to be
for Workspace ONE certificates during used in the JSON payload.
Access product life cycle product life cycle
operations. operations, such
as deployment and
certificate replacement.

VCF-WSA-REQD-CFG-001 Deploy an appropriately The Workspace ONE None.


sized Workspace ONE Access instance is
Access instance according managed by vRealize Suite
to the deployment model Lifecycle Manager and
you have selected by using imported into the SDDC
vRealize Suite Lifecycle Manager inventory.
Manager in VMware Cloud
Foundation mode.

VCF-WSA-REQD-CFG-002 Place the Workspace Provides a consistent You must use an


ONE Access appliances deployment model for implementation in NSX
on an overlay-backed or management applications to support this network
VLAN-backed NSX network in an environment with configuration.
segment. a single or multiple
VMware Cloud Foundation
instances.

VCF-WSA-REQD-CFG-003 Use the embedded Removes the need for None.


PostgreSQL database with external database services.
Workspace ONE Access.

VCF-WSA-REQD-CFG-004 Add a VM group for You can define the None.


Workspace ONE Access startup order of virtual
and set VM rules to machines regarding the
restart the Workspace service dependency. The
ONE Access VM group startup order ensures that
before any of the VMs vSphere HA powers on the
that depend on it for Workspace ONE Access
authentication. virtual machines in an
order that respects product
dependencies.


Table 15-53. Workspace ONE Access Design Requirements for VMware Cloud Foundation
(continued)

Requirement ID Design Requirement Justification Implication

VCF-WSA-REQD-CFG-005 Connect the Workspace You can integrate your None.


ONE Access instance to enterprise directory with
a supported upstream Workspace ONE Access
Identity Provider. to synchronize users and
groups to the Workspace
ONE Access identity
and access management
services.

VCF-WSA-REQD-CFG-006 If using clustered Adding the additional Each of the Workspace


Workspace ONE Access, native connectors provides ONE Access cluster nodes
configure second and third redundancy and improves must be joined to the
native connectors that performance by load- Active Directory domain
correspond to the second balancing authentication to use Active Directory
and third Workspace ONE requests. with Integrated Windows
Access cluster nodes to Authentication with the
support the high availability native connector.
of directory services
access.

VCF-WSA-REQD-CFG-007 If using clustered n During the deployment You must use the load
Workspace ONE Access, of Workspace ONE balancer that is configured
use the NSX load balancer Access by using by SDDC Manager and the
that is configured by SDDC vRealize Suite Lifecycle integration with vRealize
Manager on a dedicated Manager, SDDC Suite Lifecycle Manager.
Tier-1 gateway. Manager automates the
configuration of an
NSX load balancer
for Workspace ONE
Access to facilitate
scale-out.
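
The VM group and restart-order rule in VCF-WSA-REQD-CFG-004 can be scripted instead of created in the vSphere Client. The following Python sketch uses pyVmomi and is illustrative only: the vCenter Server address, credentials, cluster name, VM naming pattern, and group names are assumptions for this example, not values defined by this design.

```python
# Illustrative pyVmomi sketch (not part of the design): create a VM group for the
# Workspace ONE Access nodes and a VM-to-VM dependency rule so that dependent
# management VMs are restarted only after the Workspace ONE Access group.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab-only convenience; use trusted certificates in production
si = SmartConnect(host="mgmt-vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=context)
content = si.RetrieveContent()

# Locate the management cluster and the Workspace ONE Access virtual machines.
cluster_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cluster_view.view if c.name == "mgmt-cluster-01")
vm_view = content.viewManager.CreateContainerView(cluster, [vim.VirtualMachine], True)
wsa_vms = [vm for vm in vm_view.view if vm.name.startswith("wsa-")]   # assumed naming convention

# VM group that contains the Workspace ONE Access nodes.
group_spec = vim.cluster.GroupSpec(
    operation="add",
    info=vim.cluster.VmGroup(name="wsa-vm-group", vm=wsa_vms))

# Dependency rule: VMs in "dependent-vm-group" (which must already exist or be added
# in the same spec) restart only after the Workspace ONE Access group.
rule_spec = vim.cluster.RuleSpec(
    operation="add",
    info=vim.cluster.DependencyRuleInfo(name="restart-wsa-before-dependents",
                                        enabled=True,
                                        vmGroup="dependent-vm-group",
                                        dependsOnVmGroup="wsa-vm-group"))

spec = vim.cluster.ConfigSpecEx(groupSpec=[group_spec], rulesSpec=[rule_spec])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```

After the task completes, validate the group and rule in the vSphere Client under the cluster's VM/Host Groups and VM/Host Rules settings.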


Table 15-54. Workspace ONE Access Design Requirements for Stretched Clusters in VMware Cloud Foundation

Requirement ID: VCF-WSA-REQD-CFG-008
Design Requirement: Add the Workspace ONE Access appliances to the VM group for the first availability zone.
Justification: Ensures that, by default, the Workspace ONE Access cluster nodes are powered on a host in the first availability zone.
Implication:
- If the Workspace ONE Access instance is deployed after the creation of the stretched management cluster, you must add the appliances to the VM group manually.
- Clustered Workspace ONE Access might require manual intervention after a failure of the active availability zone occurs.

Table 15-55. Workspace ONE Access Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID: VCF-WSA-REQD-CFG-009
Design Requirement: Configure the DNS settings for Workspace ONE Access to use DNS servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: None.

Requirement ID: VCF-WSA-REQD-CFG-010
Design Requirement: Configure the NTP settings on Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Justification: Improves resiliency if an outage of external services for a VMware Cloud Foundation instance occurs.
Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on Workspace ONE Access must be updated.


Table 15-56. Workspace ONE Access Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-WSA-RCMD-CFG-001
Design Recommendation: Protect all Workspace ONE Access nodes by using vSphere HA.
Justification: Supports high availability for Workspace ONE Access.
Implication: None for standard deployments. Clustered Workspace ONE Access deployments might require intervention if an ESXi host failure occurs.

Recommendation ID: VCF-WSA-RCMD-CFG-002
Design Recommendation: When using Active Directory as an Identity Provider, use Active Directory over LDAP as the Directory Service connection option.
Justification: The native (embedded) Workspace ONE Access connector binds to Active Directory over LDAP by using a standard bind authentication.
Implication:
- In a multi-domain forest, where the Workspace ONE Access instance connects to a child domain, Active Directory security groups must have global scope. Therefore, members added to the Active Directory global security group must reside within the same Active Directory domain.
- If authentication to more than one Active Directory domain is required, additional Workspace ONE Access directories are required.

Recommendation ID: VCF-WSA-RCMD-CFG-003
Design Recommendation: When using Active Directory as an Identity Provider, use an Active Directory user account with a minimum of read-only access to the Base DNs for users and groups as the service account for the Active Directory bind. (A validation sketch for this bind account follows this table.)
Justification: Provides the following access control features:
- Workspace ONE Access connects to Active Directory with the minimum set of permissions required to bind and query the directory.
- You can introduce improved accountability in tracking request-response interactions between Workspace ONE Access and Active Directory.
Implication:
- You must manage the password life cycle of this account.
- If authentication to more than one Active Directory domain is required, additional accounts are required for the Workspace ONE Access connector bind to each Active Directory domain over LDAP.

Recommendation ID: VCF-WSA-RCMD-CFG-004
Design Recommendation: Configure the directory synchronization to synchronize only the groups required for the integrated SDDC solutions.
Justification:
- Limits the number of replicated groups required for each product.
- Reduces the replication interval for group information.
Implication: You must manage the groups from your enterprise directory that are selected for synchronization to Workspace ONE Access.

Recommendation ID: VCF-WSA-RCMD-CFG-005
Design Recommendation: Activate the synchronization of enterprise directory group members when a group is added to the Workspace ONE Access directory.
Justification: When activated, members of the enterprise directory groups are synchronized to the Workspace ONE Access directory when groups are added. When deactivated, group names are synchronized to the directory, but members of the group are not synchronized until the group is entitled to an application or the group name is added to an access policy.
Implication: None.

Recommendation ID: VCF-WSA-RCMD-CFG-006
Design Recommendation: Enable Workspace ONE Access to synchronize nested group members by default.
Justification: Allows Workspace ONE Access to update and cache the membership of groups without querying your enterprise directory.
Implication: Changes to group membership are not reflected until the next synchronization event.

Recommendation ID: VCF-WSA-RCMD-CFG-007
Design Recommendation: Add a filter to the Workspace ONE Access directory settings to exclude users from the directory replication.
Justification: Limits the number of replicated users for Workspace ONE Access to within the maximum scale.
Implication: To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.

Recommendation ID: VCF-WSA-RCMD-CFG-008
Design Recommendation: Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.
Justification: You can configure the minimum required and extended user attributes to synchronize directory user accounts for Workspace ONE Access to be used as an authentication source for cross-instance vRealize Suite solutions.
Implication: User accounts in your organization's enterprise directory must have the following required attributes mapped:
- firstName, for example, givenName for Active Directory
- lastName, for example, sn for Active Directory
- email, for example, mail for Active Directory
- userName, for example, sAMAccountName for Active Directory
- If you require users to sign in with an alternate unique identifier, for example, userPrincipalName, you must map the attribute and update the identity and access management preferences.

Recommendation ID: VCF-WSA-RCMD-CFG-009
Design Recommendation: Configure the Workspace ONE Access directory synchronization frequency to a recurring schedule, for example, every 15 minutes.
Justification: Ensures that any changes to group memberships in the corporate directory are available for integrated solutions in a timely manner.
Implication: Schedule the synchronization interval to be longer than the time to synchronize from the enterprise directory. If users and groups are still being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.

Recommendation ID: VCF-WSA-RCMD-SEC-001
Design Recommendation: Create corresponding security groups in your corporate directory services for these Workspace ONE Access roles:
- Super Admin
- Directory Admins
- ReadOnly Admin
Justification: Streamlines the assignment of Workspace ONE Access roles to users.
Implication:
- You must set the appropriate directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.
- You must create the security groups outside of the SDDC stack.

Recommendation ID: VCF-WSA-RCMD-SEC-002
Design Recommendation: Configure a password policy for the Workspace ONE Access local directory users, admin and configadmin.
Justification: You can set a policy for Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards. The password policy applies only to the local directory users and does not impact your organization directory.
Implication: You must set the policy in accordance with your organization policies and regulatory standards, as applicable. You must apply the password policy on the Workspace ONE Access cluster nodes.
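
The directory recommendations above (VCF-WSA-RCMD-CFG-002, -003, -007, and -008) can be validated before you configure the Workspace ONE Access directory. The following Python sketch uses the ldap3 library to confirm that the read-only bind account can query the users Base DN with a candidate synchronization filter and that the mapped attributes are present. The domain controller, service account, Base DN, group DN, and filter are placeholder assumptions, not values defined by this design.

```python
# Illustrative ldap3 sketch (not part of the design): verify the read-only Active
# Directory bind account and a candidate user-exclusion filter before entering them
# in the Workspace ONE Access directory settings. All names are placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("dc01.example.com", port=636, use_ssl=True)
conn = Connection(server,
                  user="svc-wsa-bind@example.com",   # read-only service account
                  password="********",
                  auto_bind=True)                     # standard (simple) bind over LDAPS

# Example filter: synchronize enabled user objects only, excluding a "no-sync" group.
user_filter = ("(&(objectClass=user)"
               "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"
               "(!(memberOf=CN=wsa-no-sync,OU=Groups,DC=example,DC=com)))")

conn.search(search_base="OU=People,DC=example,DC=com",
            search_filter=user_filter,
            search_scope=SUBTREE,
            attributes=["givenName", "sn", "mail", "sAMAccountName"])
print(f"{len(conn.entries)} user objects match the synchronization filter")
conn.unbind()
```

An exclusion filter like the one shown, which skips disabled accounts and members of a designated no-sync group, is one way to keep the replicated user count within the Workspace ONE Access maximums; adapt it to your own directory attributes.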

Life Cycle Management Design Elements for VMware Cloud Foundation

Use this list of requirements as a reference for life cycle management in a VMware Cloud Foundation environment.

For full design details, see Chapter 12 Life Cycle Management Design for VMware Cloud
Foundation.


Table 15-57. Life Cycle Management Design Requirements for VMware Cloud Foundation

Requirement ID: VCF-LCM-REQD-001
Design Requirement: Use SDDC Manager to perform the life cycle management of the following components:
- SDDC Manager
- NSX Manager
- NSX Edges
- vCenter Server
- ESXi
Justification: Because the deployment scope of SDDC Manager covers the full VMware Cloud Foundation stack, SDDC Manager performs patching, update, or upgrade of these components across all workload domains.
Implication: The operations team must understand and be aware of the impact of a patch, update, or upgrade operation performed by using SDDC Manager. (A sketch for listing available bundles through the SDDC Manager API follows this table.)

Requirement ID: VCF-LCM-REQD-002
Design Requirement: Use vRealize Suite Lifecycle Manager to manage the life cycle of the following components:
- vRealize Suite Lifecycle Manager
- Workspace ONE Access
Justification: vRealize Suite Lifecycle Manager automates the life cycle of vRealize Suite Lifecycle Manager and Workspace ONE Access.
Implication:
- You must deploy vRealize Suite Lifecycle Manager by using SDDC Manager.
- You must manually apply patches, updates, and hotfixes for Workspace ONE Access. Patches, updates, and hotfixes for Workspace ONE Access are not generally managed by vRealize Suite Lifecycle Manager.
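
To plan a patch, update, or upgrade window, you can review what SDDC Manager has available through its public API. The following Python sketch is illustrative only; the host name and credentials are placeholders, and the endpoints and response fields should be verified against the SDDC Manager API reference for your VMware Cloud Foundation release.

```python
# Illustrative sketch (not part of the design): list the upgrade and patch bundles
# that SDDC Manager currently knows about.
import requests

SDDC_MANAGER = "https://sddc-manager.example.com"

# Request an API access token.
token = requests.post(f"{SDDC_MANAGER}/v1/tokens",
                      json={"username": "administrator@vsphere.local",
                            "password": "********"},
                      verify=False).json().get("accessToken")

# List the bundles in the SDDC Manager repository.
bundles = requests.get(f"{SDDC_MANAGER}/v1/bundles",
                       headers={"Authorization": f"Bearer {token}"},
                       verify=False).json()

for bundle in bundles.get("elements", []):
    print(bundle.get("id"), bundle.get("version"), bundle.get("description"))
```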


Table 15-58. Lifecycle Management Design Requirements for NSX Federation in VMware Cloud Foundation

Requirement ID: VCF-LCM-REQD-003
Design Requirement: Use the upgrade coordinator in NSX to perform life cycle management on the NSX Global Manager appliances.
Justification: The version of SDDC Manager in this design is not currently capable of life cycle operations (patching, update, or upgrade) for NSX Global Manager.
Implication:
- You must explicitly plan upgrades of the NSX Global Manager nodes. An upgrade of the NSX Global Manager nodes might require a cascading upgrade of the NSX Local Manager nodes and underlying SDDC Manager infrastructure prior to the upgrade of the NSX Global Manager nodes.
- You must always align the version of the NSX Global Manager nodes with the rest of the SDDC stack in VMware Cloud Foundation.

Requirement ID: VCF-LCM-REQD-004
Design Requirement: Establish an operations practice to ensure that, prior to the upgrade of any workload domain, the impact of any version upgrade is evaluated in relation to the need to upgrade NSX Global Manager.
Justification: The versions of the NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation. (A sketch of such an automated check follows this table.)

Requirement ID: VCF-LCM-REQD-005
Design Requirement: Establish an operations practice to ensure that, prior to the upgrade of the NSX Global Manager, the impact of any version change is evaluated against the existing NSX Local Manager nodes and workload domains.
Justification: The versions of the NSX Global Manager and NSX Local Manager nodes must be compatible with each other. Because SDDC Manager does not provide life cycle operations (patching, update, or upgrade) for the NSX Global Manager nodes, an upgrade to an unsupported version cannot be prevented.
Implication: The administrator must establish and follow an operations practice, by using a runbook or automated process, to ensure a fully supported and compliant bill of materials prior to any upgrade operation.
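
The operations practice in VCF-LCM-REQD-004 and VCF-LCM-REQD-005 can be backed by a simple automated gate. The following Python sketch illustrates the idea: maintain a compatibility matrix based on the VMware interoperability data and refuse to proceed when the planned NSX Global Manager version does not support every deployed NSX Local Manager version. The version numbers and matrix entries are placeholders, not a published compatibility statement.

```python
# Illustrative sketch (not part of the design): a pre-upgrade gate that compares the
# planned NSX Global Manager version against the deployed NSX Local Manager versions.
# Populate the matrix from the VMware interoperability matrix for your releases.
from typing import Iterable

GLOBAL_TO_LOCAL_SUPPORT = {
    "4.1.0.2": {"4.1.0.2", "4.1.0.0"},   # placeholder entries only
    "4.1.0.0": {"4.1.0.0", "4.0.1.1"},
}

def upgrade_is_supported(global_manager_version: str,
                         local_manager_versions: Iterable[str]) -> bool:
    """Return True only if every NSX Local Manager version is supported by the
    planned NSX Global Manager version."""
    supported = GLOBAL_TO_LOCAL_SUPPORT.get(global_manager_version, set())
    return all(version in supported for version in local_manager_versions)

planned_global = "4.1.0.2"
deployed_locals = ["4.1.0.2", "4.1.0.0"]   # gathered from each workload domain
if not upgrade_is_supported(planned_global, deployed_locals):
    raise SystemExit("Stop: the planned upgrade breaks NSX Global/Local Manager compatibility.")
```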


Information Security Design Elements for VMware Cloud Foundation

Use this list of requirements and recommendations as a reference for the management of access controls, certificates, and accounts in a VMware Cloud Foundation environment.

For full design details, see Chapter 14 Information Security Design for VMware Cloud Foundation.

Table 15-59. Design Requirements for Account and Password Management for VMware Cloud Foundation

Requirement ID: VCF-ACTMGT-REQD-SEC-001
Design Requirement: Enable scheduled password rotation in SDDC Manager for all accounts that support scheduled rotation.
Justification:
- Increases the security posture of your SDDC.
- Simplifies password management across your SDDC management components.
Implication: You must retrieve new passwords by using the API if you must use accounts interactively. (A sketch of this API call follows this table.)

Requirement ID: VCF-ACTMGT-REQD-SEC-002
Design Requirement: Establish an operational practice to rotate passwords by using SDDC Manager on components that do not support scheduled rotation in SDDC Manager.
Justification: Rotates passwords and automatically remediates the SDDC Manager databases for those user accounts.
Implication: None.

Requirement ID: VCF-ACTMGT-REQD-SEC-003
Design Requirement: Establish an operational practice to manually rotate passwords on components whose passwords cannot be rotated by SDDC Manager.
Justification: Maintains password policies across components that are not handled by SDDC Manager password management.
Implication: None.
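
When a rotated account is needed interactively, the current password can be read back through the SDDC Manager API. The following Python sketch is illustrative; the host name, credentials, and resource name are placeholders, and the endpoints shown should be confirmed against the SDDC Manager API reference for your release.

```python
# Illustrative sketch (not part of the design): retrieve the current credentials for a
# managed component from SDDC Manager after scheduled rotation.
import requests

SDDC_MANAGER = "https://sddc-manager.example.com"

# Request an API access token.
token = requests.post(f"{SDDC_MANAGER}/v1/tokens",
                      json={"username": "administrator@vsphere.local",
                            "password": "********"},
                      verify=False).json().get("accessToken")

# Look up the credentials stored for a specific resource.
resp = requests.get(f"{SDDC_MANAGER}/v1/credentials",
                    headers={"Authorization": f"Bearer {token}"},
                    params={"resourceName": "mgmt-vcenter.example.com"},
                    verify=False)

for credential in resp.json().get("elements", []):
    # The password field holds the current rotated value; handle it securely and
    # do not write it to logs.
    print(credential.get("credentialType"), credential.get("username"))
```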


Table 15-60. Certificate Management Design Recommendations for VMware Cloud Foundation

Recommendation ID: VCF-SDDC-RCMD-SEC-001
Design Recommendation: Replace the default VMCA-signed certificate on all management virtual appliances with a certificate that is signed by an internal certificate authority.
Justification: Ensures that the communication to all management components is secure.
Implication: Replacing the default certificates with trusted CA-signed certificates might increase the deployment preparation time because you must generate and submit certificate requests.

Recommendation ID: VCF-SDDC-RCMD-SEC-002
Design Recommendation: Use a SHA-2 algorithm or higher for signed certificates.
Justification: The SHA-1 algorithm is considered less secure and has been deprecated. (A verification sketch follows this table.)
Implication: Not all certificate authorities support SHA-2 or higher.

Recommendation ID: VCF-SDDC-RCMD-SEC-003
Design Recommendation: Perform SSL certificate life cycle management for all management appliances by using SDDC Manager.
Justification: SDDC Manager supports automated SSL certificate life cycle management rather than requiring a series of manual steps.
Implication: Certificate management for NSX Global Manager instances must be done manually.
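
You can spot-check the signature algorithm recommendation (VCF-SDDC-RCMD-SEC-002) on any management endpoint with a few lines of Python. The sketch below uses the standard library and the cryptography package; the host name is a placeholder for one of your management appliances.

```python
# Illustrative sketch (not part of the design): confirm that a management component
# presents a certificate signed with a SHA-2 family algorithm.
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import hashes

pem = ssl.get_server_certificate(("sddc-manager.example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

algorithm = cert.signature_hash_algorithm
print(f"Signature hash algorithm: {algorithm.name}")
if not isinstance(algorithm, (hashes.SHA256, hashes.SHA384, hashes.SHA512)):
    print("Warning: certificate is not signed with a SHA-2 family algorithm.")
```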
